This addresses issue #19918.
## Issue description
Message events have the wrong `isTrusted` value when they are not
triggered by user interaction, which differs from browser behavior. In
particular, all MessageEvents dispatched by Deno have `isTrusted` set to
`false`, even though it should be `true`.
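A minimal sketch of the expected behavior (not code from the PR): the MessageEvent below is created and dispatched by the runtime rather than constructed by user code, so it should report `isTrusted === true`, as it does in browsers.
```ts
// Minimal repro sketch: the runtime creates and dispatches this
// MessageEvent itself, so it should be trusted.
const { port1, port2 } = new MessageChannel();
port2.onmessage = (event: MessageEvent) => {
  // Browsers log `true` here; before this fix Deno logged `false`.
  console.log("isTrusted:", event.isTrusted);
  port1.close();
  port2.close();
};
port1.postMessage("hello");
```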
This is my first ever contribution to Deno, so I might be missing
something.
This PR changes Web IDL interfaces to be declared with `var` instead of
`class`, so that accessing them via `globalThis` does not raise type
errors.
Closes #13390.
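A simplified sketch of why this matters (not the actual Deno type declarations): TypeScript only surfaces `var` and `function` declarations as properties on the type of `globalThis`, so the `declare class` form makes `globalThis.MessageChannel` a type error, while the `var` form type-checks.
```ts
// Simplified sketch, not the actual lib declarations.
interface MessageChannel {
  readonly port1: MessagePort;
  readonly port2: MessagePort;
}

// Had this been `declare class MessageChannel { ... }`, the access below
// would fail with "Property 'MessageChannel' does not exist on type
// 'typeof globalThis'".
declare var MessageChannel: {
  readonly prototype: MessageChannel;
  new (): MessageChannel;
};

const Ctor = globalThis.MessageChannel; // OK with the `var` declaration
const channel = new Ctor();
```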
Fixes the WPT tests that test with invalid codes. Also explicitly ignores
some h2 tests to hopefully prevent flakes.
The previous changes to WebSocketStream introduced a bug where the close
errors were not made available if the `pull` method was re-entrant.
`ZeroCopyBuf` was convenient to use, but it sometimes hid the fact that
copies were necessary in certain cases. It also made it far too easy for
the caller to pass it around and convert it into different values. This commit
splits `ZeroCopyBuf` into `JsBuffer` (an array buffer coming from V8) and
`ToJsBuffer` (a Rust buffer that will be converted into a V8 array buffer).
As a result, some magical conversions were removed (they were never used),
limiting the API surface and preparing for changes in #19534.
Reduce the GC pressure from the websocket event method by splitting it
into an event getter and a buffer getter.
Before: 165.9k msg/sec
After: 169.9k msg/sec
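A rough sketch of the shape of this change (the op names, event codes, and signatures below are illustrative, not the real ones): instead of one call that returns an allocated kind/payload pair per message, the event kind and the payload buffer are fetched separately, so the hot path allocates less garbage.
```ts
// Illustrative only; real op names and event codes differ.
declare function op_ws_next_event_kind(rid: number): Promise<number>;
declare function op_ws_take_last_buffer(rid: number): Uint8Array;

const EVENT_BINARY = 1; // hypothetical event code

async function nextMessage(rid: number): Promise<Uint8Array | null> {
  // The event getter returns a plain number, no wrapper array or object.
  const kind = await op_ws_next_event_kind(rid);
  // The buffer getter is only invoked when there is actually a payload.
  return kind === EVENT_BINARY ? op_ws_take_last_buffer(rid) : null;
}
```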
This switches the syscall used in the HTTP and WS servers from `writev`
to `sendto`.
Setting `DENO_USE_WRITEV=1` re-enables the `writev` syscall.
This is done to make it easier to test various setups.
Using `deopt-explorer`, I found that a number of fields on the `WebSocket`
class were polymorphic.
Fortunately, initializing them to `undefined` was enough
to fix the problem.
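A generic sketch of the technique (not the actual `WebSocket` implementation): when every instance assigns the same set of fields in the same order from construction onward, V8 keeps all instances on one hidden class, so the property accesses stay monomorphic.
```ts
// Generic illustration of the fix, not the real Deno class.
class Connection {
  // Declaring and initializing every field up front, even to `undefined`,
  // means all instances share one object shape from the start.
  url: string | undefined = undefined;
  protocol: string | undefined = undefined;
  binaryType: "blob" | "arraybuffer" = "blob";
  bufferedAmount = 0;
}

// If fields were instead attached lazily (e.g. only after a connection
// opens), some instances would have them and some would not, making the
// call sites that read them polymorphic.
```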
No need to go through the async machinery for `send(String | Buffer)` --
we can fire and forget, and then route any send errors into the async
call we're already making (`op_ws_next_event`).
Early benchmark on macOS:
Before: 155.8k msg/sec
After: 166.2k msg/sec (+6.6%)
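A sketch of the fire-and-forget pattern (the send op's name and signature here are hypothetical; the op is assumed to stash any failure on the Rust side rather than reject): the send promise is intentionally not awaited, and failures surface through the `op_ws_next_event` call that is already pending.
```ts
// Hypothetical op signatures for illustration only.
declare function op_ws_send(rid: number, data: Uint8Array): Promise<void>;
declare function op_ws_next_event(rid: number): Promise<unknown>;

function send(rid: number, data: Uint8Array) {
  // Fire and forget: no await, no per-send promise chain on the fast path.
  // If the write fails, the error is reported by the pending
  // op_ws_next_event call instead.
  op_ws_send(rid, data);
}
```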
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
**THIS PR HAS GIT CONFLICTS THAT MUST BE RESOLVED**
This is the release commit being forwarded back to main for 1.33.4
Please ensure:
- [x] Everything looks ok in the PR
- [ ] The release has been published
To make edits to this PR:
```shell
git fetch upstream forward_v1.33.4 && git checkout -b forward_v1.33.4 upstream/forward_v1.33.4
```
Don't need this PR? Close it.
cc @levex
Co-authored-by: levex <levex@users.noreply.github.com>
Co-authored-by: Levente Kurusa <lkurusa@kernelstuff.org>
Merges `op_http_upgrade_next` and `op_ws_server_create`, significantly
simplifying websocket construction in ext/http (next) and removing one
JS -> Rust call. Also, the WS server no longer bypasses
`HttpPropertyExtractor`.
Partially supersedes #19016.
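A sketch of what this looks like from the JS side; the merged op name and all signatures below are hypothetical, but the point stated above holds: one awaited call replaces two.
```ts
// Hypothetical signatures for illustration.
declare function op_http_upgrade_next(httpRid: number): Promise<number>;
declare function op_ws_server_create(connRid: number): Promise<number>;
declare function op_http_upgrade_websocket_next(httpRid: number): Promise<number>;

// Before: two JS -> Rust round trips per websocket upgrade.
async function upgradeBefore(httpRid: number): Promise<number> {
  const connRid = await op_http_upgrade_next(httpRid);
  return await op_ws_server_create(connRid);
}

// After: a single merged op hands back the websocket resource directly.
async function upgradeAfter(httpRid: number): Promise<number> {
  return await op_http_upgrade_websocket_next(httpRid);
}
```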
This migrates `spawn` and `spawn_blocking` to `deno_core` and removes
the requirement for `spawn` tasks to be `Send`, given our single-threaded
executor.
While we technically don't need to do anything with `spawn_blocking`, this
allows us to have a single `JoinHandle` type that works for both cases,
and lets us more easily experiment with alternative
`spawn_blocking` implementations that do not require tokio (e.g. rayon).
Async ops (+~35%):
Before:
```
time 1310 ms rate 763358
time 1267 ms rate 789265
time 1259 ms rate 794281
time 1266 ms rate 789889
```
After:
```
time 956 ms rate 1046025
time 954 ms rate 1048218
time 924 ms rate 1082251
time 920 ms rate 1086956
```
HTTP serve (+~4.4%):
Before:
```
Running 10s test @ http://localhost:4500
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 68.78us 19.77us 1.43ms 86.84%
Req/Sec 68.78k 5.00k 73.84k 91.58%
1381833 requests in 10.10s, 167.36MB read
Requests/sec: 136823.29
Transfer/sec: 16.57MB
```
After:
```
Running 10s test @ http://localhost:4500
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 63.12us 17.43us 1.11ms 85.13%
Req/Sec 71.82k 3.71k 77.02k 79.21%
1443195 requests in 10.10s, 174.79MB read
Requests/sec: 142921.99
Transfer/sec: 17.31MB
```
Suggested-By: alice@ryhl.io
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
**THIS PR HAS GIT CONFLICTS THAT MUST BE RESOLVED**
This is the release commit being forwarded back to main for 1.33.3
Please ensure:
- [x] Everything looks ok in the PR
- [x] The release has been published
To make edits to this PR:
```shell
git fetch upstream forward_v1.33.3 && git checkout -b forward_v1.33.3 upstream/forward_v1.33.3
```
Don't need this PR? Close it.
cc @levex
Co-authored-by: Levente Kurusa <lkurusa@kernelstuff.org>
**THIS PR HAS GIT CONFLICTS THAT MUST BE RESOLVED**
This is the release commit being forwarded back to main for 1.33.2
Please ensure:
- [x] Everything looks ok in the PR
- [x] The release has been published
To make edits to this PR:
```shell
git fetch upstream forward_v1.33.2 && git checkout -b forward_v1.33.2 upstream/forward_v1.33.2
```
Don't need this PR? Close it.
cc @levex
Co-authored-by: levex <levex@users.noreply.github.com>
Co-authored-by: Levente Kurusa <lkurusa@kernelstuff.org>
Migrates some of the existing async ops to the generated wrappers introduced in
https://github.com/denoland/deno/pull/18887. As a result, `core.opAsync2`
was removed.
I will follow up with more PRs that migrate all the async ops to
generated wrappers.
- No need to wrap the buffer in a `new DataView()`.
- Deferred ops are still eagerly polled, but resolved on the next
  tick of the event loop; we don't want them to be eagerly polled at all.
- Using `core.opAsync`/`core.opAsync2` incurs the additional cost
  of looking up these functions on each call; the same applies to `ops.*` (see the sketch after this list).
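A schematic comparison of the lookup cost described above; `op_void_async` stands in for any async op, and the wrapper-generation machinery in `deno_core` is not shown.
```ts
// `core` is deno_core's internal binding object, declared here only to
// make the sketch self-contained.
declare const core: {
  opAsync(name: string, ...args: unknown[]): Promise<unknown>;
};

// Old style: the op is looked up by its string name on every call.
await core.opAsync("op_void_async");

// New style (sketch): the generated wrapper is an ordinary function
// reference, so there is no per-call lookup and V8 can optimize the
// call site.
declare function op_void_async(): Promise<void>;
await op_void_async();
```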
---------
Co-authored-by: Divy Srivastava <dj.srivastava23@gmail.com>
This is a rewrite of the `Deno.serve` API to live on top of hyper
1.0-rc3. The code should be more maintainable long-term, and avoids some
of the slower mpsc patterns that made the older code less efficient than
it could have been.
Missing features:
- `upgradeHttp` and `upgradeHttpRaw` (`upgradeWebSocket` is available,
however).
- Automatic compression is unavailable on responses.
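For reference, a minimal `Deno.serve` handler under the new implementation, including the still-available `upgradeWebSocket` path; this is a generic usage sketch, not code from the PR.
```ts
// Basic handler: the callback receives a Request and returns a Response.
Deno.serve({ port: 4500 }, (req: Request) => {
  if (req.headers.get("upgrade") === "websocket") {
    // upgradeWebSocket keeps working with the hyper 1.0-based server.
    const { socket, response } = Deno.upgradeWebSocket(req);
    socket.onmessage = (e) => socket.send(e.data);
    return response;
  }
  return new Response("Hello, world!");
});
```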