mirror of https://github.com/denoland/deno.git synced 2024-11-27 16:10:57 -05:00
Commit graph

13 commits

Author SHA1 Message Date
Divy Srivastava
b482a50299
feat(ext/http): abort event when request is cancelled (#26781)
```ts
Deno.serve(async (req) => {
  const { promise, resolve } = Promise.withResolvers<void>();

  // Fires when the request is cancelled, e.g. the client disconnects.
  req.signal.addEventListener("abort", () => {
    resolve();
  });

  await promise;

  return new Response("Ok");
});
```
2024-11-08 18:46:11 +05:30
Leo Kettmeir
2c3900370a
refactor(ext/http): use concrete error types (#26377) 2024-10-18 15:57:12 -07:00
Nathan Whitaker
930ccf928a
perf(ext/http): Reduce size of ResponseBytesInner (#24840)
I noticed `set_response_body` (ext/http/service.rs, L439-L443 at ce42f82b5a)
was unexpectedly hot in profiles, with most of the time being spent in
`memmove`.

It turns out that `ResponseBytesInner` was _massive_ (5624 bytes), so
every time we moved a `ResponseBytesInner` (for instance in
`set_response_body`) we were doing a >5kb memmove, which adds up pretty
quickly.

This PR boxes the two larger variants (the compression streams),
shrinking `ResponseBytesInner` to a reasonable 48 bytes.
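
For context, the effect can be illustrated with a tiny, hypothetical sketch (the types below are made-up stand-ins, not the real `ResponseBytesInner` or the actual compression streams): an enum is as large as its largest variant, so boxing the bulky variants turns a multi-kilobyte memmove into a pointer-sized one.

```rust
use std::mem::size_of;

// Stand-in for a compressor's large internal state.
#[allow(dead_code)]
struct CompressionStream {
    buf: [u8; 4096],
}

#[allow(dead_code)]
enum Unboxed {
    Empty,
    Bytes(Vec<u8>),
    Compressed(CompressionStream), // stored inline: every value is > 4 KB
}

#[allow(dead_code)]
enum Boxed {
    Empty,
    Bytes(Vec<u8>),
    Compressed(Box<CompressionStream>), // only a pointer is stored inline
}

fn main() {
    // Moving an `Unboxed` copies the full payload even for the `Empty`
    // variant; the boxed version moves a few dozen bytes at most.
    println!("unboxed: {} bytes", size_of::<Unboxed>());
    println!("boxed:   {} bytes", size_of::<Boxed>());
}
```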

---
Benchmarked with a simple hello world server:
```ts
// hello-server.ts
Deno.serve((_req) => {
  return new Response("Hello world");
});
// run with `deno run -A hello-server.ts`
// in separate terminal `wrk -d 10s http://127.0.0.1:8000`
```

Main:
```
Running 10s test @ http://127.0.0.1:8000/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    53.39us    9.53us   0.98ms   92.78%
    Req/Sec    86.57k     3.56k   91.58k    91.09%
  1739319 requests in 10.10s, 248.81MB read
Requests/sec: 172220.92
Transfer/sec:     24.64MB
```

This PR:
```
Running 10s test @ http://127.0.0.1:8000/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    45.44us    8.49us   0.91ms   90.04%
    Req/Sec   100.65k     2.26k  102.65k    96.53%
  2022296 requests in 10.10s, 289.29MB read
Requests/sec: 200226.20
Transfer/sec:     28.64MB
```

So a nice ~15% bump. (With response body compression, the gain is ~10%
for gzip and neutral for brotli.)
2024-08-02 00:30:26 +00:00
Matt Mastracci
eed2598e6c
feat(ext/http): Implement request.signal for Deno.serve (#23425)
When the response has been successfully sent, we abort the
`Request.signal` property to indicate that all resources associated with
this transaction may be torn down.
2024-04-24 14:03:37 -04:00
林炳权
9304126be5
chore: update to Rust 1.77.2 (#23262)
update to Rust 1.77.2


---------

Co-authored-by: Matt Mastracci <matthew@mastracci.com>
2024-04-10 22:08:23 +00:00
David Sherret
7e72f3af61
chore: update copyright to 2024 (#21753) 2024-01-01 19:58:21 +00:00
Bartek Iwańczuk
c2414db1f6
refactor: simplify hyper, http, h2 deps (#21715)
The main changes are:
- "hyper" has been renamed to "hyper_v014" to signal that it's legacy
- "hyper1" has been renamed to "hyper" and should be the default
2023-12-27 11:59:57 -05:00
Bartek Iwańczuk
f86456fc26
chore: update ext/http to hyper 1.0.1 and http 1.0 (#21588)
Closes https://github.com/denoland/deno/issues/21583.
2023-12-22 01:54:28 +01:00
Matt Mastracci
8ce4ddc407
chore(ext/http): fix E0446 on some compiler versions (#21362)
Rust 1.74 may have made this code temporarily valid in [#113126 Replace
old private-in-public diagnostic with type privacy
lints](https://github.com/rust-lang/rust/pull/113126), so we didn't
catch it at build time.

It fails in 1.73 and +nightly, however.
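
For readers who have not hit E0446 before, the snippet below is an illustrative sketch of the shape it targets (made-up names, not the actual ext/http code): an item whose signature is more visible than a type it mentions. Whether that is rejected as a hard error or only surfaced by the newer type-privacy lints depends on the toolchain, which is how it can slip past one compiler version and fail on another.

```rust
struct Secret(u32); // not `pub`: private to this crate

// A `pub` function whose signature exposes the private type above. Depending
// on the compiler version this is rejected outright as error E0446 ("private
// type in public interface") or only reported through the type-privacy lints
// introduced by rust-lang/rust#113126.
pub fn leak() -> Secret {
    Secret(0)
}

fn main() {
    let _ = leak();
}
```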
2023-11-27 23:12:55 +00:00
Matt Mastracci
68a0877f8d
fix(ext/http): avoid lockup in graceful shutdown (#21253)
Follow-up to #20822. cc @lrowe 

The `httpServerExplicitResourceManagement` tests were randomly failing
on CI because of a race.

The `drain` waker was missing wakeup events if the listeners shut down
after the last HTTP response finished. If we lost the race (rare), the
server Rc would be dropped and we wouldn't poll it again.

This replaces the drain waker system with a signalling Rc that always
resolves when the refcount is about to become 1.
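
The sketch below shows the general idea under simplified assumptions (made-up names, no generics, and a hand-rolled no-op waker so it runs without an executor crate; it is not Deno's actual implementation): a shared handle whose `Drop` wakes a stored waker when the strong count is about to reach 1, so the waiter is notified instead of having to poll at just the right moment.

```rust
use std::cell::RefCell;
use std::future::Future;
use std::pin::Pin;
use std::rc::Rc;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// A shared handle that can wake a waiter when it is about to become the
/// last handle alive.
struct SignallingHandle(Rc<RefCell<Option<Waker>>>);

impl SignallingHandle {
    fn new() -> Self {
        Self(Rc::new(RefCell::new(None)))
    }

    /// Future that resolves once every other handle has been dropped.
    fn until_last(&self) -> UntilLast<'_> {
        UntilLast(&self.0)
    }
}

impl Clone for SignallingHandle {
    fn clone(&self) -> Self {
        Self(Rc::clone(&self.0))
    }
}

impl Drop for SignallingHandle {
    fn drop(&mut self) {
        // The strong count is still `n` while this runs and becomes `n - 1`
        // right after, so `2` means "about to become 1": wake the waiter.
        if Rc::strong_count(&self.0) == 2 {
            if let Some(waker) = self.0.borrow_mut().take() {
                waker.wake();
            }
        }
    }
}

struct UntilLast<'a>(&'a Rc<RefCell<Option<Waker>>>);

impl Future for UntilLast<'_> {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if Rc::strong_count(self.0) == 1 {
            Poll::Ready(())
        } else {
            *self.0.borrow_mut() = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}

// Minimal no-op waker so the demo below needs no executor.
fn noop_waker() -> Waker {
    fn clone(p: *const ()) -> RawWaker { RawWaker::new(p, &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let server = SignallingHandle::new();
    let in_flight_request = server.clone();

    let mut done = std::pin::pin!(server.until_last());
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);

    assert!(done.as_mut().poll(&mut cx).is_pending()); // a request is still alive
    drop(in_flight_request);                           // count about to become 1
    assert!(done.as_mut().poll(&mut cx).is_ready());   // shutdown can proceed
}
```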

Fix verified by running serve tests in a loop:

```
for i in {0..100}; do
  cargo run --features=__http_tracing -- test -A --unstable \
    '/Users/matt/Documents/github/deno/deno/cli/tests/unit/serve_test.ts' \
    --filter httpServerExplicitResourceManagement
done
```
2023-11-23 16:39:17 +00:00
Laurence Rowe
e5819777c3
refactor(ext/http): Use HttpRecord as response body to track until body completion (#20822)
Use HttpRecord as response body so requests can be tracked all the way
to response body completion.

This allows Request properties to be accessed while the response body is
streaming.

Graceful shutdown now awaits a future instead of spinning in an async loop
waiting for requests to finish.

On the minimal benchmark this refactor improves performance by an additional
2% over pooling alone, for a net 3% increase over the previous deno main
branch.

Builds upon https://github.com/denoland/deno/pull/20809 and
https://github.com/denoland/deno/pull/20770.

---------

Co-authored-by: Matt Mastracci <matthew@mastracci.com>
2023-11-13 19:17:31 +00:00
Laurence Rowe
25950baed3
perf(ext/http): Object pooling for HttpRecord and HeaderMap (#20809)
Reuse existing allocations for HttpRecord and response HeaderMap where
possible.

At request end, used allocations are returned to the pool, and the pool is
sized to 1/8th the current number of in-flight requests.

For HTTP/1, hyper will reuse the response HeaderMap for the following
request on the connection.
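
The pooling scheme can be sketched roughly as follows (a simplified illustration with made-up names, not the actual Deno pool): recycle allocations instead of freeing them, and cap the pool relative to the in-flight count so it shrinks when load drops.

```rust
struct Pool<T> {
    free: Vec<T>,
}

impl<T: Default> Pool<T> {
    fn new() -> Self {
        Self { free: Vec::new() }
    }

    /// Take a recycled object if one is available, otherwise allocate fresh.
    fn take(&mut self) -> T {
        self.free.pop().unwrap_or_default()
    }

    /// Return an object at request end; `in_flight` is the number of requests
    /// currently being served.
    fn put(&mut self, obj: T, in_flight: usize) {
        self.free.push(obj);
        // Keep at most 1/8th of the in-flight count cached.
        let cap = in_flight / 8;
        if self.free.len() > cap {
            self.free.truncate(cap);
        }
    }
}

fn main() {
    let mut pool: Pool<Vec<u8>> = Pool::new();
    let mut buf = pool.take(); // reused or freshly allocated
    buf.extend_from_slice(b"hello");
    buf.clear();               // reset state before recycling
    pool.put(buf, 16);         // 16 requests in flight -> keep up to 2 cached
}
```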

Builds upon https://github.com/denoland/deno/pull/20770

---------

Co-authored-by: Matt Mastracci <matthew@mastracci.com>
2023-11-13 10:32:34 -07:00
Laurence Rowe
542314a0be
refactor(ext/http): refer to HttpRecord directly using v8::External (#20770)
Makes the JavaScript Request use a v8::External opaque pointer to directly
refer to the Rust HttpRecord.

The HttpRecord is now reference counted. To avoid leaks the strong count
is checked at request completion.
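
The ownership pattern can be sketched without any real v8 types (the type and function names below are hypothetical): hand JavaScript an opaque pointer produced from a reference-counted value, temporarily reconstruct the Rc to use it, and reclaim it exactly once at completion so the strong count can be checked for leaks.

```rust
use std::rc::Rc;

struct HttpRecordLike {
    path: String,
}

/// Turn an owning `Rc` into an opaque pointer, roughly what would be stored
/// in a `v8::External` handed to JavaScript.
fn into_external(record: Rc<HttpRecordLike>) -> *const HttpRecordLike {
    Rc::into_raw(record)
}

/// Borrow the record behind the opaque pointer without changing the refcount.
/// Safety: `ptr` must come from `into_external` and not yet be released.
unsafe fn with_external<R>(
    ptr: *const HttpRecordLike,
    f: impl FnOnce(&HttpRecordLike) -> R,
) -> R {
    let rc = unsafe { Rc::from_raw(ptr) };
    let result = f(&*rc);
    std::mem::forget(rc); // leave the strong count owned by the pointer intact
    result
}

/// Reclaim ownership at request completion so the record can actually drop.
/// Safety: `ptr` must come from `into_external` and be released exactly once.
unsafe fn release_external(ptr: *const HttpRecordLike) -> Rc<HttpRecordLike> {
    unsafe { Rc::from_raw(ptr) }
}

fn main() {
    let record = Rc::new(HttpRecordLike { path: "/hello".into() });
    let ptr = into_external(record);

    unsafe {
        with_external(ptr, |r| println!("path = {}", r.path));

        // "Request completion": take ownership back and verify nothing else
        // still holds a strong reference, i.e. nothing leaked.
        let record = release_external(ptr);
        assert_eq!(Rc::strong_count(&record), 1);
    } // `record` dropped here, freeing the allocation
}
```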

Performance seems unchanged on the minimal benchmark: 118614 req/s on this
branch vs 118564 req/s on main, but variance between runs on my laptop is
pretty high.

---------

Co-authored-by: Matt Mastracci <matthew@mastracci.com>
2023-11-13 07:04:49 -07:00