We can go one level down in abstraction and avoid using the public
`ReadableStream` APIs.
This patch gives a ~5% perf boost for small `ReadableStream`s:
```
Running 10s test @ http://localhost:8080/
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 148.32us 108.95us 3.88ms 95.71%
Req/Sec 33.24k 2.68k 37.94k 73.76%
668188 requests in 10.10s, 77.74MB read
Requests/sec: 66162.91
Transfer/sec: 7.70MB
```
main:
```
Running 10s test @ http://localhost:8080/
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 150.23us 67.61us 4.39ms 94.80%
Req/Sec 31.81k 1.55k 35.56k 83.17%
639078 requests in 10.10s, 74.36MB read
Requests/sec: 63273.72
Transfer/sec: 7.36MB
```
Fixes: #20569 by introducing a custom replacement for the tokio mpsc
channel that is byte-size backpressure-aware.
Using the testcase in the linked bug, we see all the small writes
aggregated into a single packet and HTTP frame.
```
10:39 $ nc localhost 8000
GET / HTTP/1.1
HTTP/1.1 200 OK
content-type: text/plain
vary: Accept-Encoding
transfer-encoding: chunked
date: Tue, 19 Sep 2023 16:39:13 GMT
A
0
1
2
3
4
```
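For reference, a hedged sketch of the kind of testcase that produces the capture above (hypothetical; the actual reproduction lives in the linked issue): several tiny enqueues that should coalesce into a single chunk.
```ts
// Hypothetical sketch, not the exact testcase from the linked issue:
// a streaming body made of several tiny writes that should be
// aggregated into one packet / HTTP frame.
Deno.serve({ port: 8000 }, () => {
  const body = new ReadableStream<Uint8Array>({
    start(controller) {
      const encoder = new TextEncoder();
      for (let i = 0; i < 5; i++) {
        controller.enqueue(encoder.encode(`${i}\n`));
      }
      controller.close();
    },
  });
  return new Response(body, { headers: { "content-type": "text/plain" } });
});
```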
This patch:
```
Running 10s test @ http://localhost:8080/
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 157.47us 194.89us 9.53ms 98.97%
Req/Sec 31.37k 1.56k 34.73k 85.15%
630407 requests in 10.10s, 73.35MB read
Requests/sec: 62428.12
Transfer/sec: 7.26MB
```
main:
```
Running 10s test @ http://localhost:8080/
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 343.75us 200.48us 10.41ms 98.25%
Req/Sec 14.64k 806.52 16.98k 84.65%
294018 requests in 10.10s, 39.82MB read
Requests/sec: 29109.91
Transfer/sec: 3.94MB
```
---------
Co-authored-by: Bert Belder <bertbelder@gmail.com>
This commit improves async op sanitizer speed by only delaying metrics
collection if there are pending ops. This results in a speedup of around
30% for small CPU-bound unit tests.
It now performs this check (and the possible delay) on every collection,
fixing an issue where a parent test's leaks bled into its steps.
This commit adds "deno jupyter" subcommand which
provides a Deno kernel for Jupyter notebooks.
The implementation is mostly based on Deno's REPL and
reuses large parts of it (though there's some clean up that
needs to happen in follow up PRs). Not all functionality of
Jupyter kernel is implemented and some message type
are still not implemented (eg. "inspect_request") but
the kernel is fully working and provides all the capatibilities
that the Deno REPL has; including TypeScript transpilation
and npm packages support.
Closes https://github.com/denoland/deno/issues/13016
---------
Co-authored-by: Adam Powers <apowers@ato.ms>
Co-authored-by: Kyle Kelley <rgbkrk@gmail.com>
This commit improves compatibility of the "node:http2" module by polyfilling
the "connect" method and the "ClientHttp2Session" class. Basic operations like
streaming, header and trailer handling work correctly.
Refing/unrefing is still a TODO and "npm:grpc-js/grpc" is not yet working
correctly.
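For orientation, a hedged sketch of the kind of client code the polyfill now supports (standard `node:http2` client API; the endpoint is a placeholder):
```ts
// Sketch of the polyfilled surface: connect() returning a ClientHttp2Session,
// plus basic request/response streaming.
import { connect } from "node:http2";

const session = connect("https://example.com");
const req = session.request({ ":path": "/" });
req.on("response", (headers) => console.log("status:", headers[":status"]));
req.setEncoding("utf8");
req.on("data", (chunk) => console.log("chunk of", chunk.length, "chars"));
req.on("end", () => session.close());
req.end();
```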
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
This PR implements a graceful shutdown API for Deno.serve, allowing all
current connections to drain from the server before shutting down, while
preventing new connections from being started or new transactions on
existing connections from being created.
We split the cancellation handle into two parts: a listener handle, and
a connection handle. A graceful shutdown cancels the listener only,
while allowing the connections to drain. The connection handle aborts
all futures. If the listener handle is cancelled, we put the connections
into graceful shutdown mode, which disables keep-alive on http/1.1 and
uses http/2 mechanisms for http/2 connections.
In addition, we now guarantee that all connections are complete or
cancelled, and all resources are cleaned up when the server `finished`
promise resolves -- we use a Rust-side server refcount for this.
Performance impact: does not appear to affect basic serving performance
by more than 1% (~126k -> ~125k)
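A hedged usage sketch of the behavior described above (assuming the server handle exposes a graceful `shutdown()` trigger alongside the `finished` promise):
```ts
// Hypothetical sketch: start a server, then drain it gracefully.
const server = Deno.serve({ port: 8080 }, () => new Response("hello"));

// Cancel the listener only; in-flight connections are allowed to drain
// (keep-alive is disabled for HTTP/1.1, HTTP/2 uses its own mechanisms).
await server.shutdown();

// Resolves once all connections are complete or cancelled and all
// resources have been cleaned up.
await server.finished;
```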
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Fixes https://github.com/denoland/deno/issues/19816
In that issue, I suggest switching over the other brotli functionality
to the Rust API provided by the `brotli` crate. Here, I only do that
with the `brotli_decompress` function to fix the bug with buffers longer
than 4096 bytes.
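An illustration of the failure mode (hedged, not the exact repro from the issue): round-tripping a buffer larger than 4096 bytes through the `node:zlib` brotli functions.
```ts
// Buffers longer than 4096 bytes previously failed to decompress correctly.
import { brotliCompressSync, brotliDecompressSync } from "node:zlib";

const input = new Uint8Array(8192).fill(97); // 8 KiB of "a"
const roundTripped = brotliDecompressSync(brotliCompressSync(input));
console.log(roundTripped.length); // expected: 8192
```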
Keys are expensive metadata. We track them for various purposes, e.g.
transaction conflict checks and key expiration.
This patch limits the total key size in an atomic operation to 80 KiB
(81920 bytes). This helps ensure efficiency in implementations.
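For illustration, a hedged sketch of where the limit applies: the combined size of every key referenced by a single atomic operation (checks and mutations alike) has to stay under the 81920-byte budget.
```ts
// Hypothetical sketch: each key named below counts toward the
// 80 KiB total-key-size limit of this atomic operation.
const kv = await Deno.openKv();
const res = await kv.atomic()
  .check({ key: ["user", "alice"], versionstamp: null })
  .set(["user", "alice"], { name: "Alice" })
  .delete(["user", "alice", "stale"])
  .commit();
console.log(res.ok);
```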
When trying to run
```
deno run -A --unstable npm:astro dev
```
in my Astro project it fails with:
```
Node.js v18.12.1 is not supported by Astro!
Please upgrade Node.js to a supported version: ">=18.14.1"
```
My current version is:
```
~ ❯ node --version
v20.5.1
```
Bumping the version to the latest stable release of Node in
`ext/node/polyfills/_process/process.ts` fixes this.
I don't know if this causes any conflicts, so please feel free to
correct me here.
Rewrites 3 ops that used "op(deferred)" to use "op2(async(lazy))"
instead.
This will allow us to remove the codepath for handling "deferred" ops in
"deno_core".
The fix for #20188 was not entirely correct -- we were unlocking the
global buffer incorrectly. This PR introduces a lock state that ensures
we only unlock a lock we have taken out.
When a TCP connection is force-closed (i.e. a browser refresh), the
underlying future we pass to Hyper is dropped, which may cause us to try
to drop the body resource while the OpState lock is still held.
Preconditions for this bug to trigger:
- The body resource must have been taken
- The response must return a resource (which requires us to take the
OpState lock)
- The TCP connection must have been dropped before this
Fixes #20315 and #20298
As the title.
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
This patch adds a `remote` backend for `ext/kv`. This supports
connection to Deno Deploy and potentially other services compatible with
the KV Connect protocol.
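A hedged usage sketch (the URL is a placeholder for any KV Connect-compatible endpoint, such as a Deno Deploy database):
```ts
// Hypothetical sketch: opening a KV database backed by the remote backend.
const kv = await Deno.openKv("https://example.com/kv/connect");
await kv.set(["greeting"], "hello");
const entry = await kv.get<string>(["greeting"]);
console.log(entry.value); // "hello"
```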
Deno.serve's fast streaming implementation was not keeping the request
body resource ID alive. We were taking the `Rc<Resource>` from the
resource table during the response, so a hairpin duplex response that
fed back the request body would work.
However, if any JS code attempted to read from the request body (which
requires the resource ID to be valid), the response would fail with a
difficult-to-diagnose "EOF" error.
This was affecting more complex duplex uses of `Deno.fetch` (though as
far as I can tell it went unreported).
Simple test:
```ts
const reader = request.body.getReader();
return new Response(
new ReadableStream({
async pull(controller) {
const { done, value } = await reader.read();
if (done) {
controller.close();
} else {
controller.enqueue(value);
}
},
}),
);
```
And then attempt to use the stream in duplex mode:
```ts
async function testDuplex(
reader: ReadableStreamDefaultReader<Uint8Array>,
writable: WritableStreamDefaultWriter<Uint8Array>,
) {
await writable.write(new Uint8Array([1]));
const chunk1 = await reader.read();
assert(!chunk1.done);
assertEquals(chunk1.value, new Uint8Array([1]));
await writable.write(new Uint8Array([2]));
const chunk2 = await reader.read();
assert(!chunk2.done);
assertEquals(chunk2.value, new Uint8Array([2]));
await writable.close();
const chunk3 = await reader.read();
assert(chunk3.done);
}
```
In older versions of Deno, this would just lock up. I believe after
23ff0e722e, it started throwing a more
explicit error:
```
httpServerStreamDuplexJavascript => ./cli/tests/unit/serve_test.ts:1339:6
error: TypeError: request or response body error: error reading a body from connection: Connection reset by peer (os error 54)
at async Object.pull (ext:deno_web/06_streams.js:810:27)
```
Properly handle the `SQLITE_BUSY` error code by retrying the
transaction.
Also wraps database initialization logic in a transaction to protect
against incomplete/concurrent initializations.
Fixes https://github.com/denoland/deno/issues/20116.
The goal of this PR is to address issue #20106 where a `TypeError`
occurs when the variables `uid` and `gid` from `userInfo()` in `node:os`
are reassigned if the user is on Windows. Both `uid` and `gid` are
marked as `const`, therefore producing a `TypeError` when the two are
reassigned.
This PR achieves that goal by declaring `uid` and `gid` with `let` instead.
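For context, the call that exercises this code path is simply reading user info through the polyfill (a minimal sketch; on Windows, Node reports `uid` and `gid` as `-1`):
```ts
// Minimal call hitting the node:os userInfo() polyfill described above.
import { userInfo } from "node:os";

const info = userInfo();
console.log(info.username, info.uid, info.gid); // uid/gid are -1 on Windows
```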
To fix bugs around detecting when Node emulation is required, we will
just eagerly initialize it. The improvements we make to reduce the
impact on startup time:
- [x] Process stdin/stdout/stderr are lazily created
- [x] node.js global proxy no longer allocates on each access check
- [x] Process checks for `beforeExit` listeners before doing expensive
shutdown work
- [x] Process should avoid adding global event handlers until listeners
are added
Benchmarking this PR (`89de7e1ff`) vs main (`41cad2179`)
```
12:36 $ third_party/prebuilt/mac/hyperfine --warmup 100 -S none './deno-41cad2179 run ./empty.js' './deno-89de7e1ff run ./empty.js'
Benchmark 1: ./deno-41cad2179 run ./empty.js
Time (mean ± σ): 24.3 ms ± 1.6 ms [User: 16.2 ms, System: 6.0 ms]
Range (min … max): 21.1 ms … 29.1 ms 115 runs
Benchmark 2: ./deno-89de7e1ff run ./empty.js
Time (mean ± σ): 24.0 ms ± 1.4 ms [User: 16.3 ms, System: 5.6 ms]
Range (min … max): 21.3 ms … 28.6 ms 126 runs
```
Fixes https://github.com/denoland/deno/issues/20142
Fixes https://github.com/denoland/deno/issues/15826
Fixes https://github.com/denoland/deno/issues/20028
The goal of this PR is to address issue #19520, where Deno panics when
encountering an invalid SSL certificate.
This PR achieves that goal by removing an `.expect()` statement and
implementing a match statement on `tls_config` (found in
[/ext/net/ops_tls.rs](e071382768/ext/net/ops_tls.rs (L1058)))
to check whether the desired configuration is valid.
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
The original implementation of `Cache` used a custom `shutdown` method
on the resource, but to simplify fast streams work we're going to move
this to an op of its own.
While we're in here, we're going to replace `opAsync` with
`ensureFastOps`. `op2` work will have to wait because of some
limitations to our async support, however.
Rename some of the helper methods on the Fs trait to be suffixed with
`_sync` / `_async`, in preparation for introducing more async methods
for some helpers.
Also adds a `read_text_file_async` helper to complement the renamed
`read_text_file_sync` helper.
This PR ensures that the original signal event is fired before any
dependent signal events.
---
The enabled tests fail on `main`:
```
assert_array_equals: Abort events fired in correct order expected property 0 to be
"original-aborted" but got "clone-aborted" (expected array ["original-aborted", "clone-aborted"]
got ["clone-aborted", "original-aborted"])
```
Closes #19399 (running without snapshots at all was suggested as an
alternative solution).
Adds a `__runtime_js_sources` pseudo-private feature to load extension
JS sources at runtime for faster development, instead of building and
loading snapshots or embedding sources in the binary. Will only work in
a development environment obviously.
Try running `cargo test --features __runtime_js_sources
integration::node_unit_tests::os_test`. Then break some behaviour in
`ext/node/polyfills/os.ts` e.g. make `function cpus() {}` return an
empty array, and run it again. Fix and then run again. No more build
time in between.
This bumps the `async-compression` dependency in `deno_http` to the latest
version, in order to avoid having multiple duplicate versions.
Relatedly, it also unpins a stale `flate2` dependency so that the whole
chain of `async-compression` -> `flate2` -> `miniz_oxide` can move up
to current versions.
The lockfile entries for all of the above crates have been updated
accordingly; the new tree of dependencies looks like this:
```
$ cargo tree -i -p miniz_oxide
miniz_oxide v0.7.1
└── flate2 v1.0.26
└── async-compression v0.4.1
```
This tweaks the HTTP response writer so that the two possible execution
flows use the same default gzip compression level, namely `1` (otherwise
the implicit default level is `6`).
This PR fixes some crashing WPT tests due to an unresolved promise.
---
This could be a [stream spec](https://streams.spec.whatwg.org) bug.
When `controller.close` is called on a byob stream, there is no cleanup
of pending `readIntoRequests`. The only cleanup of pending
`readIntoRequests` happens when `.byobRequest.respond(0)` is called; that
happens here: 6ba245fe25/ext/web/06_streams.js (L2026),
which ends up calling `readIntoRequest.closeSteps(chunk);` in
6ba245fe25/ext/web/06_streams.js (L2070)
To reproduce:
```js
async function byobRead() {
  const input = [new Uint8Array([8, 241, 48, 123, 151])];
  const stream = new ReadableStream({
    type: "bytes",
    async pull(controller) {
      if (input.length === 0) {
        controller.close();
        // controller.byobRequest.respond(0); // uncomment for fix
        return;
      }
      controller.enqueue(input.shift());
    },
  });
  const reader = stream.getReader({ mode: "byob" });
  const r1 = await reader.read(new Uint8Array(64));
  console.log(r1);
  const r2 = await reader.read(new Uint8Array(64));
  console.log(r2);
}
await byobRead();
```
Running the script triggers:
```
error: Top-level await promise never resolved
```
This is a prerequisite for fast streams work -- this particular resource
used a custom `mpsc`-style stream, and this work will allow us to unify
it with the streams in `ext/http` in time.
Instead of using Option as an internal semaphore for "correctly
completed EOF", we allow code to propagate errors into the channel which
can be picked up by downstream sinks like Hyper. EOF is signalled using
a more standard sender drop.
This commit adds new "--deny-*" permission flags. These are complimentary to
"--allow-*" flags.
These flags can be used to restrict access to certain resources, even if they
were granted using "--allow-*" flags or the "--allow-all" ("-A") flag.
Eg. specifying "--allow-read --deny-read" will result in a permission error,
while "--allow-read --deny-read=/etc" will allow read access to all FS but the
"/etc" directory.
Runtime permission APIs ("Deno.permissions") were adjusted as well, mainly
by adding a new "PermissionStatus.partial" field. This field denotes that
while permission might be granted to the requested resource, it is only partial
(i.e. a "--deny-*" flag was specified that excludes some of the requested
resources).
E.g. specifying "--allow-read=foo/ --deny-read=foo/bar" and then querying for
permissions like "Deno.permissions.query({ name: "read", path: "foo/" })"
will return "PermissionStatus { state: "granted", onchange: null, partial: true }",
denoting that some of the subpaths don't have read access.
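A hedged sketch of that behavior from the runtime side (assuming the process was started with `--allow-read=foo/ --deny-read=foo/bar`):
```ts
// Querying a path that is granted overall but has a denied subpath.
const status = await Deno.permissions.query({ name: "read", path: "foo/" });
console.log(status.state);   // "granted"
console.log(status.partial); // true: "foo/bar" is excluded by --deny-read
```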
Closes #18804.
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Co-authored-by: Nayeem Rahman <nayeemrmn99@gmail.com>
This commit provides a basic polyfill for the "node:test" module. Currently
only the top-level "test" function is polyfilled; all remaining functions from
that module throw "not implemented" errors.
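A hedged sketch of what the polyfill currently covers, the top-level `test` function only:
```ts
// Works with the basic polyfill described above; the rest of the
// node:test API throws "not implemented" errors for now.
import test from "node:test";

test("basic addition", () => {
  if (2 + 2 !== 4) throw new Error("unexpected result");
});
```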
This addresses issue #19918.
## Issue description
Message events have the wrong `isTrusted` value when they are not
triggered by user interaction, which differs from browser behavior. In
particular, all `MessageEvent`s created by Deno have `isTrusted` set to
`false`, even though it should be `true`.
This is my first ever contribution to Deno, so I might be missing
something.
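A hedged illustration of the expectation: a `MessageEvent` dispatched by the runtime itself (rather than constructed and dispatched from user code) should report `isTrusted === true`.
```ts
// Runtime-dispatched message event: isTrusted should be true,
// matching browser behavior.
const { port1, port2 } = new MessageChannel();
port2.onmessage = (event) => {
  console.log(event.isTrusted); // expected: true
  port1.close();
  port2.close();
};
port1.postMessage("hi");
```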
Includes a lightly-modified version of hyper-util's `TokioIo` utility.
Hyper changes:
v1.0.0-rc.4 (2023-07-10)

Bug Fixes
- http1:
  - http1 server graceful shutdown fix (#3261)
    ([f4b51300](f4b513009d))
  - send error on Incoming body when connection errors (#3256)
    ([52f19259](52f192593f),
    closes https://github.com/hyperium/hyper/issues/3253)
  - properly end chunked bodies when it was known to be empty (#3254)
    ([fec64cf0](fec64cf0ab),
    closes https://github.com/hyperium/hyper/issues/3252)

Features
- client: Make clients able to use non-Send executor (#3184)
  ([d977f209](d977f209bc),
  closes https://github.com/hyperium/hyper/issues/3017)
- rt:
  - replace IO traits with hyper::rt ones (#3230)
    ([f9f65b7a](f9f65b7aa6),
    closes https://github.com/hyperium/hyper/issues/3110)
  - add downcast on Sleep trait (#3125)
    ([d92d3917](d92d3917d9),
    closes https://github.com/hyperium/hyper/issues/3027)
- service: change Service::call to take &self (#3223)
  ([d894439e](d894439e00),
  closes https://github.com/hyperium/hyper/issues/3040)

Breaking Changes
- Any IO transport type provided must now implement hyper::rt::{Read, Write}
  instead of the tokio::io traits. You can grab a helper type from hyper-util
  to wrap Tokio types, or implement the traits yourself if it's a custom type.
  ([f9f65b7a](f9f65b7aa6))
- client::conn::http2 types now use another generic for an Executor. Code
  that names Connection needs to include the additional generic parameter.
  ([d977f209](d977f209bc))
- The Service::call function no longer takes a mutable reference to self. The
  FnMut trait bound on the service::util::service_fn function and the trait
  bound on the impl for the ServiceFn struct were changed from FnMut to Fn.
This PR fixes #19818. The problem was that the new `InnerRequest` class does not initialize the `urlList` and `urlListProcessed` fields that are used during a request clone. The solution is straightforward: initialize the missing properties during the clone process. I also implemented a cache for the `url` getter of the new `InnerRequest`, avoiding the cost of repeatedly calling `op_http_get_request_method_and_url`.
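A hedged repro sketch of the failure mode (hypothetical; not the test from the issue): cloning an incoming request inside a `Deno.serve` handler.
```ts
// Before the fix, clone() failed because urlList / urlListProcessed
// were never initialized on the new InnerRequest.
Deno.serve((request) => {
  const copy = request.clone();
  return new Response(`cloned URL: ${copy.url}`);
});
```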
V8 doesn't like having internal slots on the "real" globalThis object.
This commit works around this limitation by storing the inner globalThis
objects for segregated globals in a context slot.