This PR uses the new `cancel` method of `TransformStream` to properly
clean up the internal `TextDecoder` used in `TextDecoderStream` if the
stream is cancelled.
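A minimal sketch of the cancellation path this covers (the URL is just a placeholder):

```ts
const response = await fetch("https://example.com/"); // placeholder URL
const reader = response.body!
  .pipeThrough(new TextDecoderStream())
  .getReader();

const { value } = await reader.read();
console.log(value?.slice(0, 40));

// Cancelling now flows through TransformStream's cancel hook, so the internal
// TextDecoder is cleaned up as well.
await reader.cancel();
```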
Fixes #13142
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This PR reduces the time to run `queue persistence with inflight
messages` test from 60 seconds down to about 6 seconds.
The test simulates a crash to ensure that inflight messages are cleaned
up and re-queued when the new process starts. Messages are only
considered dead 5 seconds after being queued, so if the db is reopened
within those 5 seconds the startup cleanup does not re-queue them, and
the test had to wait for the periodic cleanup, which runs every 60
seconds. By waiting 5 seconds before reopening the db, the messages are
already dead and the startup cleanup re-queues them immediately.
We can move all promise ID knowledge to deno_core, allowing us to better
experiment with the promise implementation in deno_core.
`{un,}refOpPromise(promise)` is equivalent to
`{un,}refOp(promise[promiseIdSymbol])`
Remove tokio-rustls as a direct dependency of Deno and refactor
test_server to reduce code duplication.
All tcp and tls listener paths go through the same streams now, with the
exception of the simpler Hyper http-only handlers (those can be done in
a later follow-up).
Minor bugs fixed:
- gRPC server should only serve h2
- WebSocket over http/2 had a port overlap
- Restored missing eye-catchers for some servers (still missing on Hyper
ones)
Implements `WebSocket` over http/2. This requires a conformant http/2
server supporting the extended connect protocol.
Passes approximately 100 new WPT tests (mostly `?wpt_flags=h2` versions
of existing websockets APIs).
This is implemented as a fallback when http/1.1 fails, so a server that
supports both h1 and h2 WebSockets will still end up on the http/1.1
upgrade path.
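A minimal client-side sketch: the `WebSocket` API is unchanged, and the h2 path only kicks in against a server (hypothetical host below) that rejects the http/1.1 upgrade but supports the extended connect protocol:

```ts
const ws = new WebSocket("wss://h2-only.example.com/chat"); // hypothetical endpoint

ws.onopen = () => ws.send("hello over http/2");
ws.onmessage = (event) => console.log("received:", event.data);
ws.onerror = (event) => console.error("websocket error:", event);
ws.onclose = () => console.log("closed");
```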
The patch also cleans up the websockets handshake by splitting it into
http, https+http1 and https+http2 paths, making it a little less intertwined.
This uncovered a likely bug in the WPT test server:
https://github.com/web-platform-tests/wpt/issues/42896
This PR adds an unstable `Deno.cron` API to trigger execution of cron jobs.
* State: All cron state is in memory. Cron jobs are scheduled according
to the cron schedule expression and the current time. No state is
persisted to disk.
* Time zone: Cron expressions specify time in UTC.
* Overlapping executions: not permitted. If the next scheduled execution
time occurs while the same cron job is still executing, the scheduled
execution is skipped.
* Retries: failed jobs are automatically retried until they succeed or
until the retry threshold is reached. The retry policy can optionally be
specified using `options.backoffSchedule` (see the sketch after this list).
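A minimal sketch of the basic API described above (run with `--unstable`); the endpoint URL is hypothetical:

```ts
// Runs every 10 minutes, interpreted in UTC. Overlapping runs are skipped;
// throwing from the handler triggers the retry policy.
Deno.cron("flush-metrics", "*/10 * * * *", async () => {
  const res = await fetch("https://metrics.example.com/flush"); // hypothetical endpoint
  if (!res.ok) throw new Error(`flush failed: ${res.status}`);
});
```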
Use the new https://github.com/denoland/rustls-tokio-stream project instead
of tokio-rustls for direct websocket connections. This library was
written from the ground up to be more reliable and should help with
various bugs that may occur due to underlying bugs in the old library.
Believed to fix #20355, #18977, #20948
This commit updates the ext/kv module to use the denokv_* crates for
the protocol and the sqlite backend. This also fixes a couple of bugs in
the sqlite backend, and makes versionstamps advance less linearly.
This brings in [`display`](https://github.com/rgbkrk/display.js) as part
of the `Deno.jupyter` namespace.
Additionally these APIs were added:
- "Deno.jupyter.md"
- "Deno.jupyter.html"
- "Deno.jupyter.svg"
- "Deno.jupyter.format"
These APIs greatly extend capabilities of rendering output in Jupyter
notebooks.
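A short sketch of how this might be used from a notebook cell, assuming `md` is a tagged template in the style of display.js:

```ts
// The value of the last expression in a cell is rendered, so this cell shows
// rich Markdown output.
Deno.jupyter.md`# Hello from **Deno**

Generated at ${new Date().toISOString()}`;
```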
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Otherwise you cannot return `Deno.Server` from async functions.
Co-authored-by: Yoshiya Hinosawa <stibium121@gmail.com>
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
fixes #20454
Current KV queues implementation assumes that `enqueue` and
`listenQueue` are called on the same instance of `Deno.Kv`. It's
possible that the same Deno process opens multiple KV instances pointing
to the same fs path, and in that case `listenQueue` should still get
notified of messages enqueued through a different KV instance.
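A minimal sketch of the scenario this fixes, using a hypothetical `./queue.db` path for both handles:

```ts
// Two Deno.Kv instances backed by the same sqlite file.
const producer = await Deno.openKv("./queue.db");
const consumer = await Deno.openKv("./queue.db");

// The listener is registered on one instance...
consumer.listenQueue((msg) => {
  console.log("received:", msg);
});

// ...and should still be notified of messages enqueued via the other.
await producer.enqueue({ task: "send-welcome-email", userId: 42 });
```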
This helps reduce flakes where one test starts an HTTP server, makes a
request using fetch, and shuts the server down; then the next test starts
a new server, but the connection pool still holds a "not quite closed
yet" connection to the old server, so a new request to the new server is
sent over the closed connection and errors out.
Previously this could flake on the op sanitizer because the
`await makeTempFile()` promise could leak out of the test. Now we ensure
the request is fully handled before returning.
We can go one level down in abstraction and avoid using the public
`ReadableStream` APIs.
This patch gives a ~5% perf boost on small `ReadableStream` responses:
```
Running 10s test @ http://localhost:8080/
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 148.32us 108.95us 3.88ms 95.71%
Req/Sec 33.24k 2.68k 37.94k 73.76%
668188 requests in 10.10s, 77.74MB read
Requests/sec: 66162.91
Transfer/sec: 7.70MB
```
main:
```
Running 10s test @ http://localhost:8080/
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 150.23us 67.61us 4.39ms 94.80%
Req/Sec 31.81k 1.55k 35.56k 83.17%
639078 requests in 10.10s, 74.36MB read
Requests/sec: 63273.72
Transfer/sec: 7.36MB
```
This commit improves async op sanitizer speed by only delaying metrics
collection if there are pending ops. This
results in a speedup of around 30% for small CPU bound unit tests.
It now performs this check (and the possible delay) on every collection,
fixing an issue where ops from a parent test could leak into its steps.
This commit improves compatibility of the `node:http2` module by polyfilling
the `connect` method and the `ClientHttp2Session` class. Basic operations like
streaming and header/trailer handling work correctly.
Refing/unrefing is still a TODO and `npm:grpc-js/grpc` is not yet working
correctly.
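A short sketch of the now-polyfilled client path, against a hypothetical local h2 server:

```ts
import http2 from "node:http2";

const client = http2.connect("https://localhost:8443"); // hypothetical server
const req = client.request({ ":method": "GET", ":path": "/" });

req.on("response", (headers) => {
  console.log("status:", headers[":status"]);
});
req.setEncoding("utf8");
req.on("data", (chunk) => console.log("chunk:", chunk));
req.on("end", () => client.close());
req.end();
```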
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
This PR implements a graceful shutdown API for Deno.serve, allowing all
current connections to drain from the server before shutting down, while
preventing new connections from being started or new transactions on
existing connections from being created.
We split the cancellation handle into two parts: a listener handle, and
a connection handle. A graceful shutdown cancels the listener only,
while allowing the connections to drain. The connection handle aborts
all futures. If the listener handle is cancelled, we put the connections
into graceful shutdown mode, which disables keep-alive on http/1.1 and
uses http/2 mechanisms for http/2 connections.
In addition, we now guarantee that all connections are complete or
cancelled, and all resources are cleaned up when the server `finished`
promise resolves -- we use a Rust-side server refcount for this.
Performance impact: does not appear to affect basic serving performance
by more than 1% (~126k -> ~125k)
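A minimal sketch of the drain flow, assuming the surface described above (a `shutdown()` call plus the existing `finished` promise):

```ts
const server = Deno.serve({ port: 8080 }, () => new Response("ok"));

Deno.addSignalListener("SIGTERM", async () => {
  // Stop the listener: no new connections, while in-flight requests keep
  // draining (keep-alive is disabled for http/1.1, http/2 uses its own
  // shutdown mechanism).
  await server.shutdown();
  // Resolves once every connection is complete or cancelled and all resources
  // are cleaned up.
  await server.finished;
  Deno.exit(0);
});
```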
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Keys are expensive metadata. We track them for various purposes, e.g.
transaction conflict checks and key expiration.
This patch limits the total key size in an atomic operation to 80 KiB
(81920 bytes). This helps ensure efficiency in implementations.
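A small sketch of where the limit applies: it is the combined byte size of every key touched by one atomic operation, not each key individually, that must stay under 80 KiB (the exact failure mode shown is an assumption):

```ts
const kv = await Deno.openKv();

const op = kv.atomic();
for (let i = 0; i < 100; i++) {
  // Each key is small on its own; what counts against the 80 KiB budget is
  // the combined size of every key in this single atomic operation.
  op.set(["batch", "2024-01", i], { value: i });
}
const res = await op.commit(); // expected to fail if the key budget is exceeded
console.log(res.ok);
```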