Co-authored-by: denobot <33910674+denobot@users.noreply.github.com>
Co-authored-by: bartlomieju <bartlomieju@users.noreply.github.com>
This commit adds a new `op_write_all` op to core that allows writing an
entire chunk in a single async op call. Internally this calls
`Resource::write_all`.
`writableStreamForRid` has now been moved to `06_streams.js` and
uses this new op. Various other code paths also use this new op now.
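For illustration, a minimal sketch of the pattern (the `Deno.core.writeAll` and `Deno.core.close` helper names are assumptions here, not necessarily the exact `06_streams.js` code):
```js
// Hypothetical sketch: wrap a resource id in a WritableStream that hands
// each chunk to a single "write all" op call instead of looping over
// partial writes. Internally such an op would call Resource::write_all.
function writableStreamForRidSketch(rid) {
  return new WritableStream({
    async write(chunk) {
      await Deno.core.writeAll(rid, chunk); // one async op per chunk
    },
    close() {
      Deno.core.close(rid);
    },
    abort() {
      Deno.core.close(rid);
    },
  });
}
```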
Closes #16227
This is the release commit being forwarded back to main for 1.26.1
Please ensure:
- [x] Everything looks ok in the PR
- [x] The release has been published
To make edits to this PR:
```shell
git fetch upstream forward_v1.26.1 && git checkout -b forward_v1.26.1 upstream/forward_v1.26.1
```
Don't need this PR? Close it.
cc @cjihrig
Co-authored-by: cjihrig <cjihrig@users.noreply.github.com>
This commit adds a fast path to `Request` and `Response` that
makes consuming request bodies much faster when using `Body#text`,
`Body#arrayBuffer`, and `Body#blob`, if the body is a FastStream.
Because the response bodies for `fetch` are FastStream, this speeds up
consuming `fetch` response bodies significantly.
We can use Resource::read_return & op_read instead. This allows HTTP
request bodies to participate in FastStream.
To make this work, `readableStreamForRid` required a change to allow
non-auto-closing resources to be handled. This required some minor changes
in our FastStream paths in ext/http and ext/flash.
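A rough sketch of what a non-auto-closing resource stream can look like (helper names such as `Deno.core.read` and `Deno.core.close` are assumptions, and this is not the actual `readableStreamForRid` implementation):
```js
// Hypothetical sketch: a resource-backed ReadableStream with an `autoClose`
// flag, so resources that outlive the stream (e.g. an HTTP request body
// owned by the server) are not closed when the stream finishes.
function readableStreamForRidSketch(rid, autoClose = true) {
  const maybeClose = () => {
    if (autoClose) Deno.core.close(rid);
  };
  return new ReadableStream({
    type: "bytes",
    async pull(controller) {
      const buf = new Uint8Array(64 * 1024);
      const nread = await Deno.core.read(rid, buf); // backed by op_read
      if (nread === 0) {
        maybeClose();
        controller.close();
        return;
      }
      controller.enqueue(buf.subarray(0, nread));
    },
    cancel() {
      maybeClose();
    },
  });
}
```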
Welcome to better optimised op calls! Currently opSync is called with parameters of every type and count. This most definitely makes the call megamorphic. Additionally, it seems that spread params lead to V8 not being able to optimise the calls quite as well (apparently Fast Calls cannot be used with spread params).
Monomorphising op calls should lead to some improved performance. Now that unwrapping of sync ops results is done on Rust side, this is pretty simple:
```js
opSync("op_foo", param1, param2);
// -> turns to
ops.op_foo(param1, param2);
```
This means sync op calls are now just direct calls to the native binding function. When V8 Fast API Calls are enabled, this will allow them to be made on the optimised path.
Monomorphising async ops likely requires using callbacks and is left as an exercise to the reader.
Relanding #12994
This commit adds support for the "unhandledrejection" event.
This event will trigger event listeners registered using:
- `globalThis.addEventListener("unhandledrejection")`
- `globalThis.onunhandledrejection`
This is done by registering a default handler using
`Deno.core.setPromiseRejectCallback` that allows rejected
promises to be handled in JavaScript instead of Rust.
This commit will make it possible to polyfill
`process.on("unhandledRejection")` in the Node compat
layer.
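A short usage sketch (the `preventDefault()` behavior shown here follows the web's PromiseRejectionEvent convention and is an assumption about the default handler):
```js
// Listen for promise rejections that nothing else handled.
globalThis.addEventListener("unhandledrejection", (event) => {
  console.log("unhandled rejection:", event.reason);
  // Prevent the runtime's default reporting of this rejection.
  event.preventDefault();
});

// The event handler property form works as well.
globalThis.onunhandledrejection = (event) => {
  console.log("also observed here:", event.reason);
};

Promise.reject(new Error("boom"));
```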
Co-authored-by: Colin Ihrig <cjihrig@gmail.com>
This commit uses `DetachedBuffer` instead of `ZeroCopyBuf` in the ops
that back `Worker.prototype.postMessage` and
`MessagePort.prototype.postMessage`. This is done because the
serialized buffer is then copied to the destination isolate, even
though it is internal to runtime code and not used for anything else,
so detaching it and transferring it instead saves an unnecessary copy.
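For illustration only, the user-visible analogue of detach-and-transfer (this shows ordinary transfer semantics via `structuredClone`, not the internal `DetachedBuffer` op):
```js
// Transferring an ArrayBuffer moves it instead of copying it: the clone
// owns the memory and the source buffer ends up detached.
const buf = new Uint8Array([1, 2, 3]).buffer;
const moved = structuredClone(buf, { transfer: [buf] });
console.log(moved.byteLength); // 3
console.log(buf.byteLength); // 0 -- the original is detached
```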
This commit adds brand checking to EventTarget. It also fixes a
bug where Deno would crash if an AbortSignal passed to the global
addEventListener() was aborted.
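A minimal reproduction of the scenario covered by the second fix:
```js
// Registering a listener on the global scope with an AbortSignal and then
// aborting it used to crash; now the listener is simply removed.
const controller = new AbortController();
globalThis.addEventListener("message", () => {}, {
  signal: controller.signal,
});
controller.abort();
```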
- Introduced an optional callback for the Deno.core.serialize API that
  returns the cloning error if there is one.
- Removed the try/catch in the serialize structured clone function; the
  error is now thrown from the callback.
- Removed the "Object with a getter that throws" assertion from WPT.
This commit fixes a failing WPT test by making EventTarget's
addEventListener() method throw if both the listener and the
signal option are null.
Fixes: https://github.com/denoland/deno/issues/14593
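A sketch of the now-expected behavior:
```js
const target = new EventTarget();

// The options are converted even though the listener is null, so a null
// `signal` raises a TypeError, matching the WPT expectation.
try {
  target.addEventListener("click", null, { signal: null });
} catch (err) {
  console.log(err instanceof TypeError); // true
}
```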
This reverts commit 10e50a1207
This is an alternative to #13217; IMO the tradeoffs made by #10786 aren't worth it.
It breaks abstractions (crates being self-contained, deno_core without snapshotting, etc.) and causes pain points and gotchas for both embedders and devs, for a relatively minimal gain in incremental build time ...
Closes #11030
In the implementation of structured serialization in
`Deno.core.serialize`, whenever there is a serialization error, an
exception will be thrown with the message "Failed to serialize
response", even though V8 provides a message to use in such cases.
This change instead throws an exception with the V8-provided message,
if there is one.
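For illustration, assuming access to `Deno.core.serialize` as described above (the exact message text is V8's and is not spelled out here):
```js
try {
  // Functions cannot be structured-serialized, so this fails.
  Deno.core.serialize(() => {});
} catch (err) {
  // Previously: "Failed to serialize response".
  // Now: the message provided by V8 for this particular failure.
  console.log(err.message);
}
```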
This commit improves the error messages that one sees when passing
something other than a BufferSource to a (De)CompressionStream. The
WPT tests already pass, because they only check for the error type
(TypeError) and not the error message. A TypeError was already thrown
for invalid values via serde_v8.
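A quick sketch of the case in question (the improved message text itself is not reproduced here):
```js
const stream = new CompressionStream("gzip");
const writer = stream.writable.getWriter();

// Writing something that is not a BufferSource rejects with a TypeError;
// this change is about making that error's message more descriptive.
writer.write("not a BufferSource").catch((err) => {
  console.log(err instanceof TypeError); // true
  console.log(err.message);
});
```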
This commit fixes an error that occurred when the user deletes the
"window" global JS variable. Instead of relying on "window" or
"globalThis" to dispatch the "load" and "unload" events, we now
default to the global scope of the worker.
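A minimal reproduction of the scenario this fixes (run as a main module):
```js
// User code may remove the `window` binding entirely...
delete globalThis.window;

// ...but "load" (and "unload") must still be dispatched on the global scope.
addEventListener("load", () => {
  console.log("load still fires without window");
});
```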
In the `transform` function to `TextEncoderStream`'s internal
`TransformStream`, if `chunk` is the empty string and
`this.#pendingHighSurrogate` is null, then `lastCodeUnit` will be NaN.
As it turns out, this does not cause a bug, because the comparison that
checks for lone surrogates evaluates to false for NaN, but relying on
that makes the code brittle.
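Concretely, the brittle case looks like this:
```js
const chunk = ""; // empty chunk, no pending high surrogate
const lastCodeUnit = chunk.charCodeAt(chunk.length - 1); // NaN
// The lone-surrogate range check happens to be false for NaN, so there is
// no bug today, but nothing in the code makes that intent explicit.
console.log(0xd800 <= lastCodeUnit && lastCodeUnit <= 0xdbff); // false
```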
The Web IDL conversion to `BufferSource` and similar types shouldn't
check whether the buffer is detached.
In the case of `TextDecoder`, our implementation would still throw after
the Web IDL conversions because we're creating a new `Uint8Array` from
the buffer source's buffer, which throws if it's detached. This change
also fixes this bug.
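For reference, the underlying JavaScript behavior behind that remaining throw (detaching here via a `structuredClone` transfer, which is just one way to detach a buffer):
```js
const buf = new ArrayBuffer(8);
structuredClone(buf, { transfer: [buf] }); // detaches `buf`

// Constructing a typed array over a detached buffer throws, which is why
// the decode path still failed even without the Web IDL detachment check.
try {
  new Uint8Array(buf);
} catch (err) {
  console.log(err instanceof TypeError); // true
}
```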
The implementation of `TextDecoder` had a bug where it was copying the
input data in every case. This change removes that copy in
non-`SharedArrayBuffer` cases.
Since passing a shared buffer source to Rust would fail, this copy of
the input data was making `TextDecoder` work in cases where the input
is shared. In order to avoid a breaking change, the copy is retained in
those cases.
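An illustration of the compatibility case that keeps the copy:
```js
// Views on a SharedArrayBuffer are still decoded via a copy of the input,
// so this keeps working; plain ArrayBuffer-backed views now skip the copy.
const shared = new Uint8Array(new SharedArrayBuffer(3));
shared.set([104, 105, 33]); // "hi!"
console.log(new TextDecoder().decode(shared)); // "hi!"
```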
Currently all async ops are polled lazily, which means that op
initialization code is postponed until control is yielded to the event
loop. This has some weird consequences, e.g.
```js
let listener = Deno.listen(...);
let conn_promise = listener.accept();
listener.close();
// `BadResource` is thrown. A reasonable error would be `Interrupted`.
let conn = await conn_promise;
```
JavaScript promises are expected to be eagerly evaluated. This patch
makes ops actually do that.
and all its subclasses including `AbortSignal` ...
Instead of storing associated data in a global `WeakMap`, we store it as private attributes (via a Symbol) on the object instances.
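A generic sketch of the pattern (not the actual EventTarget code):
```js
// Before: per-instance data kept in a module-level WeakMap.
const dataMap = new WeakMap();

// After: the same data stored directly on the instance under a symbol key.
const _data = Symbol("eventTargetData");

class MyTarget {
  constructor() {
    this[_data] = { listeners: {} };
  }
  getListeners() {
    return this[_data].listeners;
  }
}
```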
In the spec, a URL record has an associated "blob URL entry", which for
`blob:` URLs is populated during parsing to contain a reference to the
`Blob` object that backs that object URL. It is this blob URL entry that
the `fetch` API uses to resolve an object URL.
Therefore, since the `Request` constructor parses URL inputs, it will
have an associated blob URL entry which will be used when fetching, even
if the object URL has been revoked since the construction of the
`Request` object. (The `Request` constructor takes the URL as a string
and parses it, so the object URL must be live at the time it is called.)
This PR adds a new `blobFromObjectUrl` JS function (backed by a new
`op_blob_from_object_url` op) that, if the URL is a valid object URL,
returns a new `Blob` object whose parts are references to the same Rust
`BlobPart`s used by the original `Blob` object. It uses this function to
add a new `blobUrlEntry` field to inner requests, which will be `null`
or such a `Blob`, and then uses `Blob.prototype.stream()` as the
response's body. As a result of this, the `blob:` URL resolution from
`op_fetch` is now useless, and has been removed.
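The user-visible consequence, sketched per the spec behavior described above:
```js
const blob = new Blob(["hello"]);
const url = URL.createObjectURL(blob);

// The Request constructor parses the URL while it is still live, so the
// inner request captures a blob URL entry referencing the blob's data.
const req = new Request(url);

// Revoking the object URL afterwards does not break the fetch, because it
// resolves through the captured blob URL entry.
URL.revokeObjectURL(url);
const res = await fetch(req);
console.log(await res.text()); // "hello"
```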
* refactor(ops): return BadResource errors in ResourceTable calls
Instead of relying on callers to map Options to Results via `.ok_or_else(bad_resource_id)` at over 176 different call sites ...