This commit splits "<ext_name>::init" functions into "init_ops" and
"init_ops_and_esm". That way we don't have to construct list of
ESM sources on each startup if we're running with a snapshot.
In a follow-up commit "deno_core" will be changed to not have a split
between "extensions" and "extensions_with_js" - it will be embedders'
responsibility to pass appropriately configured extensions.
Prerequisite for https://github.com/denoland/deno/pull/18080
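As a rough illustration, embedder code might end up looking like this (a hedged sketch; the `init_ops`/`init_ops_and_esm` names come from this commit, but `snapshot_loaded` and the `deno_webidl` extension are stand-ins for illustration):

```rust
// Hedged sketch of embedder code after this change.
let extensions = if snapshot_loaded {
    // ESM sources are already baked into the snapshot; only wire up ops.
    vec![deno_webidl::init_ops()]
} else {
    // No snapshot: ops and ESM sources both need to be registered.
    vec![deno_webidl::init_ops_and_esm()]
};
```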
There's no point in this API returning a `Result`. If something fails, it
should result in a panic at build time to signal to the embedder that the
setup is wrong.
This PR refactors all internal js files (except core) to be written as
ES modules.
`__bootstrap` has been mostly replaced with static imports of the form
`internal:[path to file from repo root]`.
To specify if files are ESM, an `esm` method has been added to
`Extension`, similar to the `js` method.
A new `ModuleLoader` called `InternalModuleLoader` has been added to
enable loading of `internal:` specifiers. It is used in all situations
except when a snapshot is only loaded, rather than a new one being
created from it.
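A hedged sketch of registering ESM sources on an extension, mirroring the existing `js` method (the builder's exact signature is assumed; the file path and source are made up for illustration):

```rust
// Hedged sketch: ESM sources are registered with internal: specifiers
// using a path from the repo root, as described above.
let extension = Extension::builder("example")
    .esm(vec![(
        "internal:ext/example/01_example.js",
        "export function example() { return 1; }",
    )])
    .build();
```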
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Right now an error in a request body stream causes an uncatchable
global promise rejection. This PR fixes this to instead propagate the
error correctly into the promise returned from `fetch`.
It additionally fixes errored readable stream bodies being treated as
successfully completed bodies by Rust.
https://fetch.spec.whatwg.org/#http-network-or-cache-fetch
> If httpRequest’s header list contains `Range`, then append
> (`Accept-Encoding`, `identity`) to httpRequest’s header list.
>
> This avoids a failure when handling content codings with a part of an
> encoded response.
>
> Additionally, many servers mistakenly ignore `Range` headers if a
> non-identity encoding is accepted.
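A minimal sketch of that spec step using the `http` crate's header types (illustrative only, not the actual `deno_fetch` code):

```rust
use http::header::{HeaderMap, HeaderValue, ACCEPT_ENCODING, RANGE};

// If the request has a Range header, force the identity encoding so a
// partial response is not served with a content coding applied to it.
fn apply_range_identity(headers: &mut HeaderMap) {
    if headers.contains_key(RANGE) {
        headers.insert(ACCEPT_ENCODING, HeaderValue::from_static("identity"));
    }
}
```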
This commit introduces two new buffer wrapper types to `deno_core`. The
main benefit of these new wrappers is that they can wrap a number of
different underlying buffer types. This allows for a more flexible read
and write API on resources that will require less copying of data
between different buffer representations.
- `BufView` is a read-only view onto a buffer. It can be backed by
`ZeroCopyBuf`, `Vec<u8>`, and `bytes::Bytes`.
- `BufViewMut` is a read-write view onto a buffer. It can be cheaply
converted into a `BufView`. It can be backed by `ZeroCopyBuf` or
`Vec<u8>`.
Both new buffer views have a cursor: the start point of the view can be
advanced so that reads and writes operate on just a slice of the view.
Only the start point can be adjusted; the end point is fixed. To adjust
the end point, the underlying buffer needs to be truncated.
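For example, the cursor semantics might look like this (a hedged sketch; the method names are assumed rather than taken from the actual `deno_core` API):

```rust
// Hedged sketch of the cursor behaviour described above.
let mut view = BufView::from(vec![1u8, 2, 3, 4]);
view.advance_cursor(2); // the view now exposes only [3, 4]
assert_eq!(&*view, &[3u8, 4]);
// The end point is fixed; shrinking it requires truncating the buffer.
```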
Readable resources have been changed to better cater to resources that
do not support BYOB reads. The basic `read` method now returns a
`BufView` instead of taking a `ZeroCopyBuf` to fill. This allows the
operation to return buffers that the resource has already allocated,
instead of forcing the caller to allocate the buffer. BYOB reads are
still very useful for resources that support them, so a new `read_byob`
method has been added that takes a `BufViewMut` to fill. `op_read`
attempts to use `read_byob` if the resource supports it, and falls back
to `read` (performing an additional copy) if it does not. For
Rust->JS reads this change should have no impact, but for Rust->Rust
reads, this allows the caller to avoid an additional copy in many
scenarios. This combined with the support for `BufView` to be backed by
`bytes::Bytes` allows us to avoid one data copy when piping from a
`fetch` response into an `ext/http` response.
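The fallback described above can be sketched in a self-contained form (the trait shape and view types are stand-ins for illustration, not the actual `deno_core` definitions):

```rust
use std::io::Error;

// Stand-ins for the real view types, just to make the sketch compile.
struct BufView(Vec<u8>);
struct BufViewMut(Vec<u8>);

trait ReadableResource {
    // The resource allocates (or reuses) its own buffer and returns it.
    fn read(&self, limit: usize) -> Result<BufView, Error>;

    // BYOB path: fill a caller-provided buffer. The default falls back
    // to `read` and performs one additional copy, as described above.
    fn read_byob(&self, buf: &mut BufViewMut) -> Result<usize, Error> {
        let view = self.read(buf.0.len())?;
        let n = view.0.len().min(buf.0.len());
        buf.0[..n].copy_from_slice(&view.0[..n]);
        Ok(n)
    }
}
```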
Writable resources have been changed to take a `BufView` instead of a
`ZeroCopyBuf` as an argument. This allows for less copying of data in
certain scenarios, as described above. Additionally a new
`Resource::write_all` method has been added that takes a `BufView` and
repeatedly writes to the resource until the entire buffer has
been written. Certain resources like files can override this method to
provide a more efficient `write_all` implementation.
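A hedged, self-contained sketch of the default `write_all` loop (the trait shape is assumed; only the looping behaviour reflects the description above):

```rust
use std::io::{Error, ErrorKind, Result};

trait WritableResource {
    fn write(&self, buf: &[u8]) -> Result<usize>;

    // Default implementation: keep writing until the whole buffer is
    // consumed. Resources like files can override this with something
    // more efficient, as described above.
    fn write_all(&self, mut buf: &[u8]) -> Result<()> {
        while !buf.is_empty() {
            let n = self.write(buf)?;
            if n == 0 {
                return Err(Error::from(ErrorKind::WriteZero));
            }
            buf = &buf[n..]; // advance past the written prefix
        }
        Ok(())
    }
}
```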
This commit adds a fast path to `Request` and `Response` that
makes consuming request bodies much faster when using `Body#text`,
`Body#arrayBuffer`, and `Body#blob`, if the body is a FastStream.
Because the response bodies for `fetch` are FastStream, this speeds up
consuming `fetch` response bodies significantly.
Previously, if a user specified a `content-length` header for a POST
request without a body, the request would contain two `content-length`
headers: one added by us, and one added by the user.
This commit ignores all `content-length` headers coming from the user,
because we must be the sole authority on the content-length, since we
transmit the body.
`deno_fetch::init` has a lot of parameters and is generic over two
types, a list that keeps expanding over time. This refactor adds a
`deno_fetch::Options` struct to more clearly define the various
parameters.
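A hedged sketch of the pattern (the actual `Options` fields are not reproduced here; these are placeholders):

```rust
// Grouping the ever-growing parameter list into one struct means new
// options can be added without changing the `init` signature.
#[derive(Default)]
pub struct Options {
    pub user_agent: String,
    pub unsafely_ignore_certificate_errors: Option<Vec<String>>,
}

pub fn init(options: Options) {
    // wire the options into the extension here
}
```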
This allows resources to be "streams" by implementing read/write/shutdown. These streams are implicit since their nature (read/write/duplex) isn't known until called, but we could easily add another method to explicitly tag resources as streams.
`op_read/op_write/op_shutdown` are now builtin ops provided by `deno_core`
Note: this current implementation is simple & straightforward but it results in an additional alloc per read/write call
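A hedged sketch of the idea (the real `Resource` trait differs; the names and signatures here are assumed):

```rust
// A resource opts into stream behaviour simply by implementing the
// relevant methods; nothing tags it as a read/write/duplex stream up
// front, which is why the streams are "implicit".
trait StreamResource {
    fn read(&self, buf: &mut [u8]) -> std::io::Result<usize>;
    fn write(&self, buf: &[u8]) -> std::io::Result<usize>;
    fn shutdown(&self) -> std::io::Result<()>;
}
```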
Closes #12556
These are confusing. They say they are "for users that don't care about
permissions", but that isn't correct. `NoTimersPermissions` disables
permissions instead of enabling them.
I would argue that implementors should decide what permissions they want
themselves, and not take our opinionated permissions struct.
This adds support for using in memory CA certificates for
`Deno.startTls`, `Deno.connectTls` and `Deno.createHttpClient`.
`certFile` is deprecated in `startTls` and `connectTls`, and removed
from `Deno.createHttpClient`.
In the spec, a URL record has an associated "blob URL entry", which for
`blob:` URLs is populated during parsing to contain a reference to the
`Blob` object that backs that object URL. It is this blob URL entry that
the `fetch` API uses to resolve an object URL.
Therefore, since the `Request` constructor parses URL inputs, it will
have an associated blob URL entry which will be used when fetching, even
if the object URL has been revoked since the construction of the
`Request` object. (The `Request` constructor takes the URL as a string
and parses it, so the object URL must be live at the time it is called.)
This PR adds a new `blobFromObjectUrl` JS function (backed by a new
`op_blob_from_object_url` op) that, if the URL is a valid object URL,
returns a new `Blob` object whose parts are references to the same Rust
`BlobPart`s used by the original `Blob` object. It uses this function to
add a new `blobUrlEntry` field to inner requests, which will be `null`
or such a `Blob`, and then uses `Blob.prototype.stream()` as the
response's body. As a result of this, the `blob:` URL resolution from
`op_fetch` is now useless, and has been removed.
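The core idea can be sketched in a self-contained way (all types here are stand-ins for illustration, not the actual `deno_web` definitions):

```rust
use std::sync::Arc;

// Stand-in for the Rust-side blob part storage.
struct BlobPart(Vec<u8>);

struct Blob {
    parts: Vec<Arc<BlobPart>>,
}

// A new Blob whose parts are references to the same BlobParts as the
// original, so revoking the object URL later does not invalidate it.
fn blob_from_object_url(original: &Blob) -> Blob {
    Blob {
        parts: original.parts.clone(), // clones the Arcs, not the bytes
    }
}
```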
This commit implements classic workers, but only when the `--enable-testing-features-do-not-use` flag is provided. This change is not user facing. Classic workers are used extensively in WPT tests. The classic workers do not support loading from disk, and do not support TypeScript.
Co-authored-by: Luca Casonato <hello@lcas.dev>
* refactor(ops): return BadResource errors in ResourceTable calls
Instead of relying on callers to map `Option`s to `Result`s via `.ok_or_else(bad_resource_id)` at over 176 different call sites ...
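A hedged before/after sketch of a call site (the resource type and surrounding op shape are assumed):

```rust
// Before: ResourceTable lookups returned an Option that every caller
// had to convert into an error by hand.
let file = state.resource_table.get::<FileResource>(rid)
    .ok_or_else(bad_resource_id)?;

// After: lookups return a Result carrying a BadResource error, so the
// `?` operator is enough.
let file = state.resource_table.get::<FileResource>(rid)?;
```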