Reduce the GC pressure from the WebSocket event method by splitting it
into an event getter and a buffer getter.
Before: 165.9k msg/sec
After: 169.9k msg/sec
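A minimal sketch of the shape of the change (the interface and method names here are illustrative, not the real op names): returning a fresh `[kind, buffer]` tuple allocates an array per message, while two separate getters let the hot path return primitives with no per-message allocation.

```ts
type Kind = number;

// before: one method that allocates a fresh [kind, buffer] tuple for
// every message received
interface WsEventsBefore {
  nextEvent(): [Kind, Uint8Array | null];
}

// after: a primitive-returning event getter, plus a buffer getter that
// is only called for frames that actually carry a payload
interface WsEventsAfter {
  nextEventKind(): Kind;
  getBuffer(): Uint8Array;
}
```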
Related issue: https://github.com/denoland/deno/issues/19358.
This is a regression that seems to have been introduced in
https://github.com/denoland/deno/pull/18905, which looks to have been a
performance optimization.
The issue is probably easiest to describe with some code:
```ts
const target = new EventTarget();
const event = new Event("foo");
target.addEventListener("foo", () => {
  console.log("base");
  target.addEventListener("foo", () => {
    console.log("nested");
  });
});
target.dispatchEvent(event);
// should log only "base"; Deno currently also logs "nested"
```
Essentially, the second event listener is attached while the `foo`
event is still being dispatched. Per the DOM spec, a listener added
during dispatch must not fire for that same dispatch, but Deno
currently fires it.
For the first implementation of node:http2, we'll use the internal
version of `Deno.serve`, which allows us to serve on a raw TCP
connection rather than a listener.
This is mostly a refactoring, plus the hook-up of `op_http_serve_on`,
which was never previously exposed (but was designed for this purpose).
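A rough sketch of what this enables; `serveHttpOnConnection` here stands in for the internal wrapper around `op_http_serve_on`, and its name and signature are assumptions rather than public API:

```ts
// hypothetical: serve HTTP on a single, already-accepted TCP connection
// (internal-only; not reachable from user code via a public import)
declare function serveHttpOnConnection(
  conn: Deno.Conn,
  signal: AbortSignal,
  handler: (req: Request) => Response | Promise<Response>,
): void;

const listener = Deno.listen({ port: 8080 });
for await (const conn of listener) {
  // node:http2 can hand each raw connection to the HTTP server directly,
  // instead of handing over the whole listener
  serveHttpOnConnection(conn, new AbortController().signal, () =>
    new Response("hello over a raw connection"));
}
```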
`isFile`, `isDirectory`, and `isSymlink` are defined in `Deno.FileInfo`,
but `isBlockDevice`, `isCharacterDevice`, `isFIFO`, and `isSocket` are
not.
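A small sketch of how the added flags would be used, assuming they land under the names listed above (the final API may differ, e.g. in nullability on non-Unix platforms):

```ts
// assumes the new Deno.FileInfo fields use the names listed above
const info = await Deno.stat("/dev/null");
console.log(info.isFile, info.isDirectory, info.isSymlink); // existing flags
console.log(info.isCharacterDevice); // hypothetical name; true for /dev/null on Unix
console.log(info.isBlockDevice, info.isFIFO, info.isSocket);
```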
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This runs our `js_unit_tests` and `node_unit_tests` in parallel, one
Rust test per JS unit test file. Some of our JS tests don't like running
in parallel due to port requirements, so this also makes those use a
specific port per file (see the sketch below). This does not attempt to
make the node-compat tests work.
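A hypothetical illustration of the port-per-file idea (the helper and numbers are made up; the real change presumably just assigns each file its own fixed port):

```ts
// hypothetical helper: derive a stable port from the test file name so
// files that bind sockets never collide when run in parallel
function portForTestFile(file: string, base = 4500): number {
  let hash = 0;
  for (const ch of file) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return base + (hash % 1000);
}

// each test file binds its own deterministic port
const port = portForTestFile("websocket_test.ts");
const listener = Deno.listen({ port });
listener.close();
```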
This commit changes the implementation of `ext/web` timers to use
`op_void_async_deferred` for timeouts of 0ms.
A 0ms timeout is meant to run at the end of the current event loop
tick, but the Tokio timers we use to back timeouts have at least 1ms
resolution, so a 0ms timeout actually takes more than 1ms. This commit
changes that and runs 0ms timeouts at the end of the event loop tick.
One consequence is that "unrefing" a 0ms timer will actually keep the
event loop alive (which I believe makes sense; the test we had only
worked because the timeout took more than 1ms).
Ref https://github.com/denoland/deno/issues/19034
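A small example of the intended ordering: the 0ms timeout now fires at the end of the same tick, after microtasks, instead of at least a millisecond later on a Tokio timer.

```ts
setTimeout(() => console.log("timeout 0"), 0);
queueMicrotask(() => console.log("microtask"));
// microtasks still drain first, so this prints:
//   microtask
//   timeout 0
// but the timeout no longer waits on a >=1ms Tokio timer
```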
This commit changes the unstable `Deno.serve()` API to return a
`Deno.Server` object that has a `finished` field. This change is done
in preparation for being able to ref/unref the HTTP server.
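A sketch of the new shape, assuming `finished` is a promise that resolves once the server shuts down (the commit only says it is a field on the returned `Deno.Server`):

```ts
const server = Deno.serve({ port: 8080 }, () => new Response("ok"));
// assumption: `finished` is a Promise<void> that settles on shutdown
await server.finished;
console.log("server has shut down");
```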
Fixes for various `Attempted to access invalid request` bugs (#19058,
#15427, #17213).
We did not wait for both a drop event and a completion event before
removing items from the slab table. This ensures that we do so.
In addition, the slab methods are refactored out into `slab.rs` for
maintainability.
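The real fix lives in Rust, but the invariant is easy to state in a TypeScript sketch: a slab entry may be reclaimed only after *both* signals have fired.

```ts
// illustrative only: reclaim a slot only once both the completion event
// and the drop event have been observed (freeing after just one of them
// is the class of bug fixed here)
let complete!: () => void;
let drop!: () => void;
const completion = new Promise<void>((resolve) => (complete = resolve));
const dropped = new Promise<void>((resolve) => (drop = resolve));

async function reclaimSlot(free: () => void) {
  await Promise.all([completion, dropped]);
  free(); // only now is it safe to reuse the slab index
}

reclaimSlot(() => console.log("slot reclaimed"));
complete(); // completion event arrives...
drop();     // ...and only after the drop event does the slot get freed
```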
Currently the `multipart/form-data` parser in
`Request.prototype.formData` and `Response.prototype.formData` decodes
non-ASCII filenames incorrectly, as if they were encoded in Latin-1
rather than UTF-8. This happens because the header section of each
`multipart/form-data` entry is decoded as Latin-1 in order to be parsed
with `Headers`, which only allows `ByteString`s, but the names and
filenames are then never re-decoded as UTF-8. This PR fixes that with a
post-processing step, sketched below.
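A minimal sketch of that step: reinterpret the Latin-1-decoded string's code units as raw bytes and decode those bytes as UTF-8.

```ts
// turn a ByteString (each char holds one byte, decoded as Latin-1) back
// into bytes, then decode those bytes as UTF-8
function latin1ToUtf8(byteString: string): string {
  const bytes = Uint8Array.from(byteString, (c) => c.charCodeAt(0));
  return new TextDecoder("utf-8").decode(bytes);
}

// "naïve.txt" encoded as UTF-8 but mis-decoded as Latin-1:
latin1ToUtf8("na\u00C3\u00AFve.txt"); // => "naïve.txt"
```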
Note that the `multipart/form-data` parsing for these APIs in the Fetch
spec is very much underspecified, and it does not specify that names and
filenames must be decoded as UTF-8. However, it does require that the
bodies of non-`File` entries are decoded as UTF-8, and in browsers,
names and filenames always use the same encoding as the body.
Closes #19142.
`Content-Encoding: gzip` support for `Deno.serve`. This doesn't support
Brotli (`br`) yet; however, it should not be difficult to add. Heuristics
for compression are modelled after those in `Deno.serveHttp`.
Tests are provided to ensure that the gzip compression is correct. We
chunk a number of different streams (zeros, hard-to-compress data,
already-gzipped data) in a number of different ways (regular, random,
large/small, small/large).
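The tests boil down to a round-trip property, which can be sketched with the web streams API (this illustrates the idea, not the actual test code):

```ts
// compress a stream, decompress it, and verify the original bytes survive
const input = new Uint8Array(64 * 1024); // all zeros: highly compressible
const gzipped = await new Response(
  new Blob([input]).stream().pipeThrough(new CompressionStream("gzip")),
).arrayBuffer();
const restored = new Uint8Array(
  await new Response(
    new Blob([gzipped]).stream().pipeThrough(new DecompressionStream("gzip")),
  ).arrayBuffer(),
);
console.log(restored.length === input.length); // true
```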
Fixes #16699 and #18960 by ensuring that we release our HTTP
`spawn_local` tasks when the HTTP resource is dropped.
Because our cancel handle was being projected from the resource via
`RcMap`, the resource was never `Drop`ped. By splitting the handle out
into its own `Rc`, we can avoid keeping the resource alive and let it
drop to cancel everything.
`new Deno.KvU64(1n) + 2n == 3n` is now true.
`new Deno.KvU64(1n)` is now inspected as `[Deno.KvU64: 1n]`
(`Object(1n)` is inspected as `[BigInt: 1n]`).
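Illustrating both behaviors (`Deno.KvU64` is an unstable API; the arithmetic works because the wrapper now coerces to its underlying bigint):

```ts
const n = new Deno.KvU64(1n);
// the wrapper coerces to its wrapped bigint in arithmetic; the cast is
// only there to satisfy the type checker
console.log((n as unknown as bigint) + 2n === 3n); // true
console.log(Deno.inspect(n)); // "[Deno.KvU64: 1n]"
console.log(Deno.inspect(Object(1n))); // "[BigInt: 1n]"
```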
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This implements HTTP/2 prior-knowledge connections, allowing clients to
request HTTP/2 over plaintext or TLS-without-ALPN connections. If a
client requests a specific protocol via ALPN (`h2` or `http/1.1`),
however, the protocol is forced and must be used.
This is a rewrite of the `Deno.serve` API to live on top of hyper
1.0-rc3. The code should be more maintainable long-term, and avoids some
of the slower mpsc patterns that made the older code less efficient than
it could have been.
Missing features:
- `upgradeHttp` and `upgradeHttpRaw` (`upgradeWebSocket` is available,
however).
- Automatic compression is unavailable on responses.
Fixes https://github.com/denoland/deno/issues/18700
Timeline of the events that led to the bug:
1. WebSocket handshake completes.
2. Server sits in `read_frame`, holding an AsyncRefCell borrow of the
WebSocket stream.
3. Client sends a TXT frame after some time.
4. Server receives the frame and goes back to `read_frame`.
5. After some time, the server starts a `write_frame`, but `read_frame`
is still holding the borrow!
^--- Locked. `read_frame` needs to complete so we can resume the write.
This commit changes all writes to directly borrow the
`fastwebsockets::WebSocket` resource, under the assumption that it
won't affect ongoing reads.