Previously the asynchronous read of the blob would not block sends that
are started later. We now block those later sends until the blob read
completes, but in such a way as to not regress performance in the common
case of not using `Blob`.
Fixes https://github.com/denoland/deno/issues/21840
The problem was hard to reproduce as it's a race condition. I've added a
test that reproduces the problem roughly 1 in 10 tries. We should move the
idleTimeout handling to Rust (maybe even built into fastwebsockets).
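A minimal sketch of the ordering being described, using a hypothetical sender class (none of these names come from Deno's implementation): sends started while a Blob read is in flight queue behind it, while the non-Blob fast path stays synchronous.

```ts
// Hypothetical sketch, not Deno's actual code: later sends must wait for an in-flight
// Blob read, but the common non-Blob path avoids any promise machinery.
class WsSender {
  #queue: Promise<void> = Promise.resolve();
  #blobReadsInFlight = 0;

  send(data: string | Uint8Array | Blob): void {
    if (data instanceof Blob) {
      this.#blobReadsInFlight++;
      this.#queue = this.#queue
        .then(() => data.arrayBuffer())
        .then((buf) => this.#dispatch(new Uint8Array(buf)))
        .finally(() => {
          this.#blobReadsInFlight--;
        });
    } else if (this.#blobReadsInFlight > 0) {
      // A Blob read is pending: queue behind it so frame order is preserved.
      this.#queue = this.#queue.then(() => this.#dispatch(data));
    } else {
      // Common case: no Blob involved, dispatch synchronously.
      this.#dispatch(data);
    }
  }

  #dispatch(_chunk: string | Uint8Array): void {
    // hand off to the underlying socket op (elided)
  }
}
```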
When we migrate to op-import-per-extension, we will want to ensure that
ops have one and only one place where they are imported. This tackles
the ops that are imported via `ensureFastOps`, but does not yet tackle
direct `ops` imports.
Landing ahead of https://github.com/denoland/deno_core/pull/393
This commit refactors how we access the "core", "internals" and
"primordials" objects coming from `deno_core` in our internal JavaScript code.
Instead of capturing them from the "globalThis.__bootstrap" namespace, we
import them from the recently added "ext:core/mod.js" file.
This addresses issue #19918.
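A minimal sketch of the new pattern (the destructured primordial is just an example; the specifier only resolves inside the runtime's own build):

```ts
// New style: a static import from the "ext:core/mod.js" module...
import { core, internals, primordials } from "ext:core/mod.js";

// ...instead of the old style of reading the same objects off a global at startup,
// roughly: const { core, internals, primordials } = globalThis.__bootstrap;
const { SafeArrayIterator } = primordials;
```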
## Issue description
Message events have the wrong `isTrusted` value when they are not
triggered by user interaction, which differs from browser behavior. In
particular, all MessageEvents created by Deno have `isTrusted` set to
`false`, even though it should be `true`.
This is my first ever contribution to Deno, so I might be missing
something.
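A small example of the expected behavior, runnable in a browser for comparison: events generated by the runtime should be trusted, while script-constructed events dispatched via `dispatchEvent` should not.

```ts
// Runtime-generated event: per the DOM spec, isTrusted should be true here.
const { port1, port2 } = new MessageChannel();
port1.onmessage = (e) => {
  console.log("runtime-dispatched MessageEvent:", e.isTrusted); // expected: true
};
port2.postMessage("hi");

// Script-dispatched event: isTrusted stays false.
const target = new EventTarget();
target.addEventListener("ping", (e) => {
  console.log("script-dispatched Event:", e.isTrusted); // expected: false
});
target.dispatchEvent(new Event("ping"));
```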
Fixes the WPT tests that test with invalid codes. Also explicitly ignores
some h2 tests to hopefully prevent flakes.
The previous changes to WebSocketStream introduced a bug where the close
errors were not made available if the `pull` method was re-entrant.
Reduce the GC pressure from the websocket event method by splitting it
into an event getter and a buffer getter.
Before: 165.9k msg/sec
After: 169.9k msg/sec
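Roughly, the shape of the split (op names and the event code are illustrative, not the actual ones): the event getter returns only a small number, and the payload is fetched by a second getter only when it is needed, so no per-message wrapper object is allocated.

```ts
// Illustrative sketch, not Deno's actual ops.
declare function op_ws_next_event(rid: number): Promise<number>;
declare function op_ws_get_buffer(rid: number): Uint8Array;
declare function handleBinary(buf: Uint8Array): void;

const EVENT_BINARY = 1; // illustrative constant

async function pump(rid: number): Promise<void> {
  // The event getter returns a plain number, so nothing is allocated per message.
  const kind = await op_ws_next_event(rid);
  if (kind === EVENT_BINARY) {
    // The buffer getter is only called when there is actually a payload to fetch.
    handleBinary(op_ws_get_buffer(rid));
  }
}
```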
Using `deopt-explorer` I found that a bunch of fields on the `WebSocket`
class were polymorphic. Fortunately it was enough to initialize them to
`undefined` to fix the problem.
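The general technique, sketched with made-up field names: giving every field an explicit initializer means all instances share the same shape, so the hot property accesses stay monomorphic.

```ts
// Field names are made up; the point is the initializers.
class Socket {
  // Previously some fields were only assigned later, and only on some code paths,
  // which left instances with differing hidden classes at the hot accessor sites.
  url: string | undefined = undefined;
  protocol = "";
  binaryType = "blob";
  idleTimeoutTimer: number | undefined = undefined;
}
```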
No need to go through the async machinery for `send(String | Buffer)` --
we can fire and forget, and then route any send errors into the async
call we're already making (`op_ws_next_event`).
Early benchmark on macOS:
Before: 155.8k msg/sec
After: 166.2k msg/sec (+6.6%)
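A hedged sketch of the idea (the op name and the error plumbing are illustrative): the send op is no longer awaited on the hot path, and a failure is reported through the event-loop side that is already awaiting `op_ws_next_event`.

```ts
// Illustrative only: how a fire-and-forget send can route its error elsewhere.
declare function op_ws_send(rid: number, data: string | Uint8Array): Promise<void>;
declare function recordSendError(rid: number, err: unknown): void; // surfaced by the event poll

function send(rid: number, data: string | Uint8Array): void {
  // No await here: the caller returns immediately, and any rejection is recorded so the
  // loop awaiting the next-event op can report it.
  op_ws_send(rid, data).catch((err) => recordSendError(rid, err));
}
```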
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Migrates some of the existing async ops to the generated wrappers introduced in
https://github.com/denoland/deno/pull/18887. As a result, "core.opAsync2"
was removed.
I will follow up with more PRs that migrate all the async ops to
generated wrappers.
- No need to wrap the buffer in a `new DataView()`
- Deferred ops are still eagerly polled, but resolved on the next tick of the
  event loop (we don't want them to be resolved eagerly)
- Using "core.opAsync"/"core.opAsync2" incurs the additional cost of looking up
  these functions on each call, and the same goes for "ops.*" (see the sketch below)
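A sketch of that lookup cost (names illustrative; `core` stands in for deno_core's JS binding object): the generated wrappers bind the op function once at module-evaluation time, so the hot path is a direct call.

```ts
// Illustrative sketch of why per-call lookups are slower than a bound reference.
declare const core: {
  ops: Record<string, (...args: unknown[]) => unknown>;
  opAsync: (name: string, ...args: unknown[]) => Promise<unknown>;
};

// Bound once, when the module is evaluated.
const op_void_async = core.ops.op_void_async as () => Promise<void>;

async function hotPath(): Promise<void> {
  await op_void_async(); // direct call, no property lookup, no spread
  // vs. the pattern being migrated away from:
  // await core.opAsync("op_void_async"); // name lookup + variadic dispatch on every call
}
```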
---------
Co-authored-by: Divy Srivastava <dj.srivastava23@gmail.com>
This should produce a little less garbage; using an object here wasn't
really required.
---------
Co-authored-by: Aapo Alasuutari <aapo.alasuutari@gmail.com>
Co-authored-by: Leo Kettmeir <crowlkats@toaxl.com>
This commit adds a new core API `opAsync2` to call an async op with
at most 2 arguments. Spread-argument iteration has a pretty big perf hit
when calling ops (see the sketch after the table below).
| name | avg msg/sec/core |
| --- | --- |
| 1.32.1 | `127820.750000` |
| #18506 | `140079.000000` |
| #18506 + #18509 | `150104.250000` |
| #18506 + #18509 + this | `157340.000000` |
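For illustration (signatures hedged to what the commit describes; the op name is an example): a fixed-arity entry point avoids the rest-parameter array and iteration that the variadic `opAsync` implies.

```ts
// Illustrative signatures only.
declare const core: {
  opAsync: (name: string, ...args: unknown[]) => Promise<unknown>;
  opAsync2: (name: string, a?: unknown, b?: unknown) => Promise<unknown>;
};

async function sendFrame(rid: number, data: Uint8Array): Promise<void> {
  // Variadic: allocates and iterates a rest array for every call.
  await core.opAsync("op_ws_send", rid, data);
  // Fixed arity (at most two op arguments): the values are passed straight through.
  await core.opAsync2("op_ws_send", rid, data);
}
```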
Use a u16 to represent the kind of event (0-6); an event code > 6 is
treated as the close code. This way we can represent all events plus the
close code in a single JS number (see the decoding sketch below the table).
This is safe because (per RFC 6455) close codes from 0-999 are reserved
and not used.
| name | avg msg/sec/core |
| --- | --- |
| deno_main | `127820.750000` |
| deno #18506 | `140079.000000` |
| deno #18506 + this | `150104.250000` |
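A sketch of decoding that single number (the kind names are illustrative): values 0-6 are event kinds, anything larger is a close event carrying its close code directly, which never collides because RFC 6455 keeps codes 0-999 unused.

```ts
// Illustrative decoder for the u16 encoding described above.
const KINDS = ["string", "binary", "pong", "ping", "error", "closed", "close"]; // example names

function decodeWsEvent(value: number): { kind: string; closeCode?: number } {
  if (value > 6) {
    // Above the kind range: this is a close event and the value is the close code itself.
    return { kind: "close", closeCode: value };
  }
  return { kind: KINDS[value] };
}
```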
This commit renames "deno_core::InternalModuleLoader" to
"ExtModuleLoader" and changes the specifiers used by the
modules loaded from this loader to "ext:".
The "internal:" scheme was really ambiguous, and it's more characters than
"ext:", so the change should also result in a slightly smaller snapshot.
Closes https://github.com/denoland/deno/issues/18020
Fixes https://github.com/denoland/deno/issues/17761
Tungstenite already sends a pong for a received ping. This automatically
happens when the socket read is being driven. From
https://github.com/snapview/tokio-tungstenite/issues/88:
> You need to read from the read-side of the socket so that it
receives/handles pings, and on the next write it would then send the
corresponding pong.
Here's the source:
e1033afd95/src/protocol/mod.rs (L374-L380)
```rust
// Upon receipt of a Ping frame, an endpoint MUST send a Pong frame in
// response, unless it already received a Close frame. It SHOULD
// respond with Pong frame as soon as is practical. (RFC 6455)
if let Some(pong) = self.pong.take() {
    trace!("Sending pong reply");
    self.send_one_frame(stream, pong)?;
}
```
With this patch, all Autobahn tests from sections 1-8 pass. Fixed cases: 2.1,
2.2, 2.3, 2.4, 2.6, 2.9, 2.10, 2.11, 5.6, 5.7, 5.8, 5.19, 5.20
To run the test yourself, follow
https://www.notion.so/denolandinc/Autobahn-WebSocket-testsuite-723a86f450ce4823b4ef9cb3dc4c7869?pvs=4
This PR refactors all internal JS files (except core) to be written as
ES modules.
`__bootstrap` has been mostly replaced with static imports of the form
`internal:[path to file from repo root]`.
To specify whether files are ESM, an `esm` method has been added to
`Extension`, similar to the `js` method.
A new ModuleLoader called `InternalModuleLoader` has been added to
enable the loading of internal specifiers. It is used in all
situations except when a snapshot is only loaded and no new one is
created from it.
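As a rough illustration of the new specifier form (the path below is hypothetical), an internal module now uses a static ESM import instead of reaching into `__bootstrap`:

```ts
// Hypothetical path following the `internal:[path to file from repo root]` convention;
// previously this value would have been read off globalThis.__bootstrap at runtime.
import { DOMException } from "internal:ext/web/01_dom_exception.js";
```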
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>