I noticed
[`set_response_body`](ce42f82b5a/ext/http/service.rs (L439-L443))
was unexpectedly hot in profiles, with most of the time being spent in
`memmove`.
It turns out that `ResponseBytesInner` was _massive_ (5624 bytes), so
every time we moved a `ResponseBytesInner` (for instance in
`set_response_body`) we were doing a >5kb memmove, which adds up pretty
quickly.
This PR boxes the two larger variants (the compression streams),
shrinking `ResponseBytesInner` to a reasonable 48 bytes.
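As a rough illustration of the technique (the types below are hypothetical stand-ins, not the real `ResponseBytesInner` variants), boxing the large variants keeps the enum itself small, so moving it copies a couple of words instead of kilobytes:

```rust
use std::mem::size_of;

// Hypothetical stand-ins for the large compression stream states.
struct GzipStream([u8; 2048]);
struct BrotliStream([u8; 4096]);

// Before: the enum is as large as its largest variant, so every move
// (e.g. into set_response_body) copies the whole thing.
enum BodyUnboxed {
  Empty,
  Gzip(GzipStream),
  Brotli(BrotliStream),
}

// After: the big variants live behind a Box, so the enum stays small and a
// move is just a pointer plus a discriminant.
enum BodyBoxed {
  Empty,
  Gzip(Box<GzipStream>),
  Brotli(Box<BrotliStream>),
}

fn main() {
  println!("unboxed: {} bytes", size_of::<BodyUnboxed>()); // several KB here
  println!("boxed:   {} bytes", size_of::<BodyBoxed>()); // ~16 bytes here
}
```

The trade-off is one extra heap allocation when a compressed body is actually constructed, which is negligible next to paying a multi-kilobyte memmove on every move.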
---
Benchmarked with a simple hello world server:
```ts
// hello-server.ts
Deno.serve((_req) => {
  return new Response("Hello world");
});
// run with `deno run -A hello-server.ts`
// in separate terminal `wrk -d 10s http://127.0.0.1:8000`
```
Main:
```
Running 10s test @ http://127.0.0.1:8000/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    53.39us    9.53us   0.98ms   92.78%
    Req/Sec    86.57k     3.56k   91.58k    91.09%
  1739319 requests in 10.10s, 248.81MB read
Requests/sec: 172220.92
Transfer/sec:     24.64MB
```
This PR:
```
Running 10s test @ http://127.0.0.1:8000/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    45.44us    8.49us   0.91ms   90.04%
    Req/Sec   100.65k     2.26k  102.65k    96.53%
  2022296 requests in 10.10s, 289.29MB read
Requests/sec: 200226.20
Transfer/sec:     28.64MB
```
So a nice ~15% bump. (With response body compression, the gain is ~10%
for gzip and roughly neutral for brotli.)
When the response has been successfully sent, we abort `Request.signal`
to indicate that all resources associated with this transaction may be
torn down.
The main changes are:
- "hyper" has been renamed to "hyper_v014" to signal that it's legacy
- "hyper1" has been renamed to "hyper" and should be the default
Rust 1.74 may have made this code temporarily valid in [#113126 Replace
old private-in-public diagnostic with type privacy
lints](https://github.com/rust-lang/rust/pull/113126), so we didn't
catch it at build time.
It fails in 1.73 and +nightly, however.
Follow-up to #20822. cc @lrowe
The `httpServerExplicitResourceManagement` tests were randomly failing
on CI because of a race.
The `drain` waker was missing wakeup events if the listeners shut down
after the last HTTP response finished. If we lost the race (rare), the
server Rc would be dropped and we wouldn't poll it again.
This replaces the drain waker system with a signalling Rc that always
resolves when the refcount is about to become 1.
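Roughly, the shape of that signal looks like the following minimal sketch (illustrative names, not the actual ext/http code): dropping a clone wakes the stored waker whenever exactly one owner would remain, so the waiting task is always polled one last time and can't miss the event.

```rust
use std::cell::RefCell;
use std::future::Future;
use std::pin::Pin;
use std::rc::Rc;
use std::task::{Context, Poll, Waker};

struct Inner {
  waker: RefCell<Option<Waker>>,
}

/// Clonable handle; dropping the second-to-last clone wakes the waiter.
#[derive(Clone)]
struct SignalRc {
  inner: Rc<Inner>,
}

impl Drop for SignalRc {
  fn drop(&mut self) {
    // After this handle is gone the strong count will be one less. If that
    // leaves a single owner (the task polling `Drained` below), wake it so
    // the final state is observed even if no other event fires afterwards.
    if Rc::strong_count(&self.inner) == 2 {
      if let Some(waker) = self.inner.waker.borrow_mut().take() {
        waker.wake();
      }
    }
  }
}

/// Resolves once this handle is the only remaining owner.
struct Drained {
  handle: SignalRc,
}

impl Future for Drained {
  type Output = ();
  fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
    if Rc::strong_count(&self.handle.inner) == 1 {
      Poll::Ready(())
    } else {
      *self.handle.inner.waker.borrow_mut() = Some(cx.waker().clone());
      Poll::Pending
    }
  }
}
```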
Fix verified by running serve tests in a loop:
```
for i in {0..100}; do
  cargo run --features=__http_tracing -- test -A --unstable \
    '/Users/matt/Documents/github/deno/deno/cli/tests/unit/serve_test.ts' \
    --filter httpServerExplicitResourceManagement
done
```
Use HttpRecord as response body so requests can be tracked all the way
to response body completion.
This allows Request properties to be accessed while the response body is
streaming.
Graceful shutdown now awaits a future instead of async spinning waiting
for requests to finish.
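A rough sketch of that shutdown pattern, assuming for illustration that the in-flight request count is published on a tokio `watch` channel (the real change wires this through HttpRecord instead):

```rust
use tokio::sync::watch;

// Before: async spinning, re-polling the counter on every pass of the loop.
async fn drain_spinning(rx: watch::Receiver<usize>) {
  while *rx.borrow() > 0 {
    tokio::task::yield_now().await;
  }
}

// After: await a future that only resolves when the count actually changes.
async fn drain(mut rx: watch::Receiver<usize>) {
  while *rx.borrow_and_update() > 0 {
    if rx.changed().await.is_err() {
      break; // all senders dropped; no more requests can finish
    }
  }
}
```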
On the minimal benchmark this refactor improves performance by an
additional 2% over pooling alone, for a net 3% increase over the
previous deno main branch.
Builds upon https://github.com/denoland/deno/pull/20809 and
https://github.com/denoland/deno/pull/20770.
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
Reuse existing allocations for HttpRecord and response HeaderMap where
possible.
At request end, used allocations are returned to the pool, and the pool
is sized to 1/8th the current number of in-flight requests.
For HTTP/1, hyper will reuse the response HeaderMap for the following
request on the connection.
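A toy sketch of the pooling scheme (illustrative types and a plain `Vec` free list, not the real HttpRecord/HeaderMap plumbing):

```rust
use std::cell::{Cell, RefCell};
use std::rc::Rc;

// Stand-in for the per-request state whose heap allocations are worth
// reusing (HttpRecord and its response HeaderMap in the real code).
#[derive(Default)]
struct Record {
  headers: Vec<(String, String)>,
}

#[derive(Default)]
struct Pool {
  free: RefCell<Vec<Rc<RefCell<Record>>>>,
  inflight: Cell<usize>,
}

impl Pool {
  /// Start of a request: reuse a pooled record if one is available.
  fn checkout(&self) -> Rc<RefCell<Record>> {
    self.inflight.set(self.inflight.get() + 1);
    self.free.borrow_mut().pop().unwrap_or_default()
  }

  /// End of a request: return the (assumed sole) handle and resize the pool.
  fn checkin(&self, record: Rc<RefCell<Record>>) {
    self.inflight.set(self.inflight.get() - 1);
    record.borrow_mut().headers.clear(); // keep capacity, drop stale contents
    let mut free = self.free.borrow_mut();
    free.push(record);
    // Keep the pool at 1/8th of the requests currently in flight.
    free.truncate(self.inflight.get() / 8);
  }
}
```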
Builds upon https://github.com/denoland/deno/pull/20770
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
Makes the JavaScript Request use a `v8::External` opaque pointer to
refer directly to the Rust HttpRecord.
The HttpRecord is now reference counted. To avoid leaks the strong count
is checked at request completion.
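Stripped of the v8 plumbing, the ownership hand-off looks roughly like this sketch (hypothetical helpers; in the real code the pointer is wrapped in a `v8::External`):

```rust
use std::rc::Rc;

/// Stand-in for the Rust-side request/response state.
struct HttpRecord;

/// Produce the opaque pointer that would be wrapped in a `v8::External`
/// and stored on the JS `Request`.
fn into_external_ptr(record: Rc<HttpRecord>) -> *const HttpRecord {
  Rc::into_raw(record)
}

/// JS touched the request: borrow the record without consuming the strong
/// reference owned by the external pointer.
unsafe fn with_record<R>(
  ptr: *const HttpRecord,
  f: impl FnOnce(&HttpRecord) -> R,
) -> R {
  let record = unsafe { Rc::from_raw(ptr) };
  let result = f(&record);
  std::mem::forget(record); // hand the reference back to the pointer
  result
}

/// Request completion: reclaim the external's reference and check the
/// strong count so an accidental extra handle shows up as a leak instead
/// of silently keeping the record alive.
unsafe fn complete(ptr: *const HttpRecord) {
  let record = unsafe { Rc::from_raw(ptr) };
  debug_assert_eq!(Rc::strong_count(&record), 1, "HttpRecord leaked");
  drop(record);
}
```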
Performance seems unchanged on the minimal benchmark: 118614 req/s on
this branch vs 118564 req/s on main, but variance between runs on my
laptop is pretty high.
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>