This PR implements a graceful shutdown API for Deno.serve, allowing all
current connections to drain before the server shuts down, while
preventing new connections from being accepted and new transactions from
being started on existing connections.
We split the cancellation handle into two parts: a listener handle, and
a connection handle. A graceful shutdown cancels the listener only,
while allowing the connections to drain. The connection handle aborts
all futures. If the listener handle is cancelled, we put the connections
into graceful shutdown mode, which disables keep-alive on http/1.1 and
uses http/2 mechanisms for http/2 connections.
In addition, we now guarantee that all connections are complete or
cancelled, and all resources are cleaned up when the server `finished`
promise resolves -- we use a Rust-side server refcount for this.
Performance impact: does not appear to affect basic serving performance
by more than 1% (~126k -> ~125k)
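A minimal usage sketch of the new API follows. The PR text above only guarantees the `finished` promise; the `shutdown()` method name is an assumption about how the graceful shutdown is exposed to JS.
```ts
const server = Deno.serve((_req) => new Response("ok"));

// Later: stop accepting new connections and let in-flight ones drain
// (hypothetical method name).
await server.shutdown();

// Per the description above, this resolves once every connection has
// completed or been cancelled and all resources are cleaned up.
await server.finished;
```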
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
When a TCP connection is force-closed (i.e. a browser refresh), the
underlying future we pass to Hyper is dropped, which may cause us to try
to drop the body resource while the OpState lock is still held.
Preconditions for this bug to trigger:
- The body resource must have been taken
- The response must return a resource (which requires us to take the
OpState lock)
- The TCP connection must have been dropped before this
Fixes #20315 and #20298
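For illustration, a handler along these lines satisfies the preconditions above; it is a hypothetical repro sketch (the file path and handler shape are assumptions), not code from this PR.
```ts
Deno.serve(async (req) => {
  // Take the request body resource by consuming the body.
  await req.arrayBuffer();
  // Return a resource-backed response body (a file's readable stream),
  // which requires taking the OpState lock when writing the response.
  const file = await Deno.open("./large-file.bin", { read: true });
  return new Response(file.readable);
});
```
Force-closing the client connection mid-response (e.g. refreshing the browser tab) then models the scenario described above.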
As the title.
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
Deno.serve's fast streaming implementation was not keeping the request
body resource ID alive. We were taking the `Rc<Resource>` from the
resource table during the response, so a hairpin duplex response that
fed back the request body would work.
However, if any JS code attempted to read from the request body (which
requires the resource ID to be valid), the response would fail with a
difficult-to-diagnose "EOF" error.
This was affecting more complex duplex uses of `Deno.fetch` (though as
far as I can tell it was unreported).
Simple test:
```ts
const reader = request.body.getReader();
return new Response(
  new ReadableStream({
    async pull(controller) {
      const { done, value } = await reader.read();
      if (done) {
        controller.close();
      } else {
        controller.enqueue(value);
      }
    },
  }),
);
```
And then attempt to use the stream in duplex mode:
```ts
async function testDuplex(
  reader: ReadableStreamDefaultReader<Uint8Array>,
  writable: WritableStreamDefaultWriter<Uint8Array>,
) {
  await writable.write(new Uint8Array([1]));
  const chunk1 = await reader.read();
  assert(!chunk1.done);
  assertEquals(chunk1.value, new Uint8Array([1]));
  await writable.write(new Uint8Array([2]));
  const chunk2 = await reader.read();
  assert(!chunk2.done);
  assertEquals(chunk2.value, new Uint8Array([2]));
  await writable.close();
  const chunk3 = await reader.read();
  assert(chunk3.done);
}
```
In older versions of Deno, this would just lock up. I believe after
23ff0e722e, it started throwing a more
explicit error:
```
httpServerStreamDuplexJavascript => ./cli/tests/unit/serve_test.ts:1339:6
error: TypeError: request or response body error: error reading a body from connection: Connection reset by peer (os error 54)
at async Object.pull (ext:deno_web/06_streams.js:810:27)
```
This bumps the `async-compression` dependency in `deno_http` to the
latest version, in order to avoid having multiple duplicate versions.
Relatedly, it also unpins a stale `flate2` dependency so that the whole
chain of `async-compression` -> `flate2` -> `miniz_oxide` can move up to
current versions.
The lockfile entries for all of the above crates have been updated
accordingly; the new dependency tree looks like this:
```
$ cargo tree -i -p miniz_oxide
miniz_oxide v0.7.1
└── flate2 v1.0.26
└── async-compression v0.4.1
```
This tweaks the HTTP response writer so that the two possible execution
flows use the same default gzip compression level, namely `1` (otherwise
the implicit default level is `6`).
Includes a lightly-modified version of hyper-util's `TokioIo` utility.
Hyper changes:
v1.0.0-rc.4 (2023-07-10)

Bug Fixes
- http1:
  - http1 server graceful shutdown fix (#3261)
    ([f4b51300](f4b513009d))
  - send error on Incoming body when connection errors (#3256)
    ([52f19259](52f192593f),
    closes https://github.com/hyperium/hyper/issues/3253)
  - properly end chunked bodies when it was known to be empty (#3254)
    ([fec64cf0](fec64cf0ab),
    closes https://github.com/hyperium/hyper/issues/3252)

Features
- client: Make clients able to use non-Send executor (#3184)
  ([d977f209](d977f209bc),
  closes https://github.com/hyperium/hyper/issues/3017)
- rt:
  - replace IO traits with hyper::rt ones (#3230)
    ([f9f65b7a](f9f65b7aa6),
    closes https://github.com/hyperium/hyper/issues/3110)
  - add downcast on Sleep trait (#3125)
    ([d92d3917](d92d3917d9),
    closes https://github.com/hyperium/hyper/issues/3027)
- service: change Service::call to take &self (#3223)
  ([d894439e](d894439e00),
  closes https://github.com/hyperium/hyper/issues/3040)

Breaking Changes
- Any IO transport type provided must now implement hyper::rt::{Read,
  Write} instead of tokio::io traits. You can grab a helper type from
  hyper-util to wrap Tokio types, or implement the traits yourself, if
  it's a custom type.
  ([f9f65b7a](f9f65b7aa6))
- client::conn::http2 types now use another generic for an Executor.
  Code that names Connection needs to include the additional generic
  parameter.
  ([d977f209](d977f209bc))
- The Service::call function no longer takes a mutable reference to
  self. The FnMut trait bound on the service::util::service_fn function
  and the trait bound on the impl for the ServiceFn struct were changed
  from FnMut to Fn.
This PR fixes #19818. The problem was that the new InnerRequest class did not initialize the urlList and urlListProcessed fields that are used when cloning a request. The fix is straightforward: initialize the missing properties during the clone process. I also implemented a cache for the url getter of the new InnerRequest, avoiding the cost of calling op_http_get_request_method_and_url on each access.
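For reference, a handler along these lines exercises both the clone path and the cached url getter; it is an illustrative sketch, not code from this PR.
```ts
Deno.serve((req) => {
  // Cloning hits the urlList/urlListProcessed initialization fixed here;
  // reading `url` on both objects goes through the (now cached) getter.
  const copy = req.clone();
  return new Response(`${req.url} ${copy.url}`);
});
```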
Benchmarking shows the numbers are pretty close; however, this is
recommended for the best possible thread-local performance and may
improve with future Rust compiler revisions.
Fixes #19737 by adding brotli compression parameters.
Time after:
`Accept-Encoding: gzip`:
```
real 0m0.214s
user 0m0.005s
sys 0m0.013s
```
`Accept-Encoding: br`:
Before:
```
real 0m10.303s
user 0m0.005s
sys 0m0.010s
```
After:
```
real 0m0.127s
user 0m0.006s
sys 0m0.014s
```
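For context, the timings above come from requesting a compressible response with different Accept-Encoding headers; a minimal way to exercise that path (the server shape and payload are assumptions, not the benchmark used here) is:
```ts
// Serve a large, compressible body; the server may brotli- or gzip-compress it
// depending on the client's Accept-Encoding header, e.g.:
//   curl -H "Accept-Encoding: br" -o /dev/null http://127.0.0.1:8000/
Deno.serve(() =>
  new Response("0123456789".repeat(100_000), {
    headers: { "content-type": "text/plain" },
  })
);
```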
This commit stabilizes "Deno.serve()", which becomes the
preferred way to create HTTP servers in Deno.
Documentation was adjusted for each overload of the "Deno.serve()" API,
and the API now always binds to "127.0.0.1:8000" by default.
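For reference, the minimal stabilized usage looks like this (the handler body is illustrative):
```ts
// With no options, this listens on the default address described above.
Deno.serve((_req) => new Response("Hello, world!"));
```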
Fixes #19687 by adding a rejection handler to the write inside the
setTimeout. There is a small window during which the promise is not yet
awaited and may reject without a handler.
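The general pattern is sketched below; this is not the actual ext/http code, and the stream setup exists only to make the snippet self-contained.
```ts
// Attach a rejection handler to the write as soon as it is created, so a
// rejection in the window before anything awaits it is never unhandled.
const { writable } = new TransformStream<Uint8Array>();
const writer = writable.getWriter();

setTimeout(() => {
  writer.write(new Uint8Array([1])).catch((err) => {
    // Handle (or deliberately ignore) the failed write instead of letting it
    // surface as an unhandled promise rejection.
    console.error("write failed:", err);
  });
}, 0);
```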
This is a new op system that will eventually replace `#[op]`.
Features
- More maintainable, generally less-coupled code
- More modern Rust proc-macro libraries
- Enforces correct `fast` labelling for fast ops, allowing for visual
scanning of fast ops
- Explicit marking of `#[string]`, `#[serde]` and `#[smi]` parameters.
This first version of `op2` supports only integer and `Option<integer>`
parameters, and allows us to start working on converting ops and
adding features.
`ZeroCopyBuf` was convenient to use, but it sometimes hid the fact that
copies were necessary in certain cases. It also made it way too easy for
the caller to pass it around and convert it into different values. This
commit splits `ZeroCopyBuf` into `JsBuffer` (an array buffer coming from
V8) and `ToJsBuffer` (a Rust buffer that will be converted into a V8
array buffer). As a result, some magical conversions were removed (they
were never used), limiting the API surface and preparing for the changes
in #19534.