This PR adds the unstable `Deno.cron` API to trigger execution of cron jobs.
* State: All cron state is in memory. Cron jobs are scheduled according
to the cron schedule expression and the current time. No state is
persisted to disk.
* Time zone: Cron expressions specify time in UTC.
* Overlapping executions: not permitted. If the next scheduled execution
time occurs while the same cron job is still executing, the scheduled
execution is skipped.
* Retries: failed jobs are automatically retried until they succeed or
until the retry threshold is reached. The retry policy can optionally be
specified using `options.backoffSchedule`.
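A minimal usage sketch (the job names, schedule expressions, and backoff values below are illustrative, and the API is unstable):
```ts
// Runs every minute (times are interpreted in UTC). If the previous run of
// "log-stats" is still executing when the next tick arrives, that tick is skipped.
Deno.cron("log-stats", "* * * * *", () => {
  console.log("tick at", new Date().toISOString());
});

// Optional retry policy: if the handler throws, retry after 1s, 5s, then 10s.
Deno.cron("nightly-job", "0 0 * * *", {
  backoffSchedule: [1000, 5000, 10000],
}, async () => {
  const res = await fetch("https://example.com/report", { method: "POST" });
  if (!res.ok) throw new Error(`report failed: ${res.status}`);
});
```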
Use the new https://github.com/denoland/rustls-tokio-stream project instead
of tokio-rustls for direct WebSocket connections. This library was
written from the ground up to be more reliable and should help with
various failures caused by bugs in the old library.
Believed to fix #20355, #18977, #20948
This commit updates the ext/kv module to use the denokv_* crates for
the protocol and the sqlite backend. This also fixes a couple of bugs in
the sqlite backend, and changes how versionstamps are generated so that
they no longer increase in a strictly linear fashion.
This brings in [`display`](https://github.com/rgbkrk/display.js) as part
of the `Deno.jupyter` namespace.
Additionally, these APIs were added:
- `Deno.jupyter.md`
- `Deno.jupyter.html`
- `Deno.jupyter.svg`
- `Deno.jupyter.format`
These APIs greatly extend the output rendering capabilities in Jupyter
notebooks.
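A hedged sketch of what notebook usage could look like, assuming `md`/`html`/`svg` are tagged template helpers and `format` converts an arbitrary value into its display bundle (this only applies inside a `deno jupyter` kernel):
```ts
// Each tag returns a displayable object; when it is the value of the last
// expression in a notebook cell, Jupyter renders it as rich output.
Deno.jupyter.md`# Hello **Deno.jupyter**
Generated at ${new Date().toISOString()}`;

Deno.jupyter.html`<b>bold HTML output</b>`;

Deno.jupyter.svg`<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
  <circle cx="50" cy="50" r="40" fill="tomato" />
</svg>`;

// format() turns an arbitrary value into the bundle used for display.
const bundle = await Deno.jupyter.format({ answer: 42 });
```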
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Otherwise you cannot return `Deno.Server` from async functions.
Co-authored-by: Yoshiya Hinosawa <stibium121@gmail.com>
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Fixes #20454
The current KV queues implementation assumes that `enqueue` and
`listenQueue` are called on the same instance of `Deno.Kv`. It's
possible that the same Deno process opens multiple KV instances pointing
to the same fs path, and in that case `listenQueue` should still get
notified of messages enqueued through a different KV instance.
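A sketch of the scenario this covers (the file path and message shape are illustrative): two `Deno.Kv` handles opened on the same path, where a message enqueued through one handle is delivered to a listener registered on the other.
```ts
// Two handles backed by the same sqlite file.
const producer = await Deno.openKv("./queue-demo.sqlite");
const consumer = await Deno.openKv("./queue-demo.sqlite");

// The listener is registered on one instance...
consumer.listenQueue((msg) => {
  console.log("received", msg);
});

// ...but should still be notified of messages enqueued via the other.
await producer.enqueue({ task: "send-email", to: "user@example.com" });
```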
This helps reduce flakes where a test starts an HTTP server, makes a
request using fetch, and then shuts down the server; when the next test
starts a new server, the connection pool may still hold a "not quite
closed yet" connection to the old server, and a request to the new
server gets sent on that closed connection, which errors out.
Previously, this could flake on the op sanitizer because the
`await makeTempFile()` promise could leak out of the test. Now we ensure
the request is fully handled before returning.
We can go one level down in abstraction and avoid using the public
`ReadableStream` APIs.
This patch gives a ~5% perf boost for small `ReadableStream` responses:
```
Running 10s test @ http://localhost:8080/
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 148.32us 108.95us 3.88ms 95.71%
Req/Sec 33.24k 2.68k 37.94k 73.76%
668188 requests in 10.10s, 77.74MB read
Requests/sec: 66162.91
Transfer/sec: 7.70MB
```
main:
```
Running 10s test @ http://localhost:8080/
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 150.23us 67.61us 4.39ms 94.80%
Req/Sec 31.81k 1.55k 35.56k 83.17%
639078 requests in 10.10s, 74.36MB read
Requests/sec: 63273.72
Transfer/sec: 7.36MB
```
This commit improves async op sanitizer speed by only delaying metrics
collection if there are pending ops. This
results in a speedup of around 30% for small CPU-bound unit tests.
It now performs this check and possible delay on every collection,
fixing an issue where op leaks from a parent test showed up in its steps.
This commit improves compatibility of the `node:http2` module by polyfilling
the `connect` method and the `ClientHttp2Session` class. Basic operations like
streaming, header handling, and trailer handling work correctly.
Refing/unrefing is still a TODO and `npm:grpc-js/grpc` is not yet working
correctly.
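A small sketch of the kind of client usage the polyfill targets, using the standard `node:http2` client API (the URL and headers are illustrative):
```ts
import http2 from "node:http2";

// Open a client session and issue a single request over it.
const client = http2.connect("https://example.com");
const req = client.request({ ":method": "GET", ":path": "/" });

req.on("response", (headers) => {
  console.log("status:", headers[":status"]);
});

let body = "";
req.setEncoding("utf8");
req.on("data", (chunk) => (body += chunk));
req.on("end", () => {
  console.log("received", body.length, "bytes");
  client.close();
});
req.end();
```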
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
This PR implements a graceful shutdown API for Deno.serve, allowing all
current connections to drain from the server before shutting down, while
preventing new connections from being started or new transactions on
existing connections from being created.
We split the cancellation handle into two parts: a listener handle, and
a connection handle. A graceful shutdown cancels the listener only,
while allowing the connections to drain. The connection handle aborts
all futures. If the listener handle is cancelled, we put the connections
into graceful shutdown mode, which disables keep-alive on http/1.1 and
uses http/2 mechanisms for http/2 connections.
In addition, we now guarantee that all connections are complete or
cancelled, and all resources are cleaned up when the server `finished`
promise resolves -- we use a Rust-side server refcount for this.
Performance impact: does not appear to affect basic serving performance
by more than 1% (~126k -> ~125k)
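A hedged sketch of what a graceful shutdown could look like from the JS side; the `finished` promise is described above, while exposing the graceful path as a `shutdown()` method on the value returned by `Deno.serve` is an assumption here:
```ts
const server = Deno.serve({ port: 8080 }, (_req) => new Response("ok"));

Deno.addSignalListener("SIGTERM", async () => {
  // Stop accepting new connections and new transactions on existing ones,
  // but let in-flight requests drain instead of aborting them.
  await server.shutdown();
  // Per the guarantees above, this resolves once every connection has
  // completed or been cancelled and all resources are cleaned up.
  await server.finished;
  console.log("server shut down cleanly");
});
```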
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Keys are expensive metadata. We track them for various purposes, e.g.
transaction conflict checks and key expiration.
This patch limits the total key size in an atomic operation to 80 KiB
(81920 bytes). This helps ensure efficiency in implementations.
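For illustration, a sketch of an atomic operation; as I read the limit, the byte sizes of all keys referenced below (both the `check` key and the `set` key, which are illustrative) count toward the 80 KiB budget for that single operation:
```ts
const kv = await Deno.openKv();

// Both ["users", "alice"] and ["users", "alice", "profile"] contribute to
// the total key size of this one atomic operation.
const current = await kv.get(["users", "alice"]);
await kv.atomic()
  .check(current)
  .set(["users", "alice", "profile"], { name: "Alice" })
  .commit();
```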
When a TCP connection is force-closed (i.e. a browser refresh), the
underlying future we pass to Hyper is dropped, which may cause us to try
to drop the body resource while the OpState lock is still held.
Preconditions for this bug to trigger:
- The body resource must have been taken
- The response must return a resource (which requires us to take the
OpState lock)
- The TCP connection must have been dropped before this
Fixes #20315 and #20298
This patch adds a `remote` backend for `ext/kv`. This supports
connecting to Deno Deploy and potentially other services compatible with
the KV Connect protocol.
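A hedged sketch, assuming the remote backend is selected by passing a KV Connect URL to `Deno.openKv` and that credentials are supplied out of band (e.g. via an access-token environment variable); the URL below is a placeholder:
```ts
// An access token is expected to be configured in the environment.
const kv = await Deno.openKv("https://example.com/my-database/connect");

await kv.set(["greeting"], "hello from a remote backend");
const entry = await kv.get<string>(["greeting"]);
console.log(entry.value);
```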
Deno.serve's fast streaming implementation was not keeping the request
body resource ID alive. We were taking the `Rc<Resource>` from the
resource table during the response, so a hairpin duplex response that
fed back the request body would work.
However, if any JS code attempted to read from the request body (which
requires the resource ID to be valid), the response would fail with a
difficult-to-diagnose "EOF" error.
This was affecting more complex duplex uses of `Deno.fetch` (though as
far as I can tell was unreported).
Simple test:
```ts
const reader = request.body.getReader();
return new Response(
  new ReadableStream({
    async pull(controller) {
      const { done, value } = await reader.read();
      if (done) {
        controller.close();
      } else {
        controller.enqueue(value);
      }
    },
  }),
);
```
And then attempt to use the stream in duplex mode:
```ts
async function testDuplex(
  reader: ReadableStreamDefaultReader<Uint8Array>,
  writable: WritableStreamDefaultWriter<Uint8Array>,
) {
  await writable.write(new Uint8Array([1]));
  const chunk1 = await reader.read();
  assert(!chunk1.done);
  assertEquals(chunk1.value, new Uint8Array([1]));
  await writable.write(new Uint8Array([2]));
  const chunk2 = await reader.read();
  assert(!chunk2.done);
  assertEquals(chunk2.value, new Uint8Array([2]));
  await writable.close();
  const chunk3 = await reader.read();
  assert(chunk3.done);
}
```
In older versions of Deno, this would just lock up. I believe after
23ff0e722e, it started throwing a more
explicit error:
```
httpServerStreamDuplexJavascript => ./cli/tests/unit/serve_test.ts:1339:6
error: TypeError: request or response body error: error reading a body from connection: Connection reset by peer (os error 54)
at async Object.pull (ext:deno_web/06_streams.js:810:27)
```
Properly handle the `SQLITE_BUSY` error code by retrying the
transaction.
Also wrap database initialization logic in a transaction to protect
against incomplete or concurrent initializations.
Fixes https://github.com/denoland/deno/issues/20116.