When streaming a resource in ext/http with compression enabled, we
didn't flush individual chunks. This became very problematic when we
recently enabled `req.body` from `fetch` for FastStream.
This commit now correctly flushes each resource chunk after compression.
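For illustration, a minimal sketch of the per-chunk flush, assuming a `flate2` gzip encoder and a hypothetical helper (the actual ext/http code differs):

```rust
use std::io::Write;

use flate2::write::GzEncoder;
use flate2::Compression;

// Hypothetical helper: compress one streamed chunk and flush the encoder so
// the compressed bytes for this chunk are handed to the client right away,
// instead of sitting in the encoder's internal buffer until the stream ends.
fn compress_chunk(encoder: &mut GzEncoder<Vec<u8>>, chunk: &[u8]) -> std::io::Result<Vec<u8>> {
    encoder.write_all(chunk)?;
    encoder.flush()?; // without the flush, small chunks may never reach the client
    // Drain whatever output the encoder has produced so far.
    Ok(std::mem::take(encoder.get_mut()))
}

fn main() -> std::io::Result<()> {
    let mut encoder = GzEncoder::new(Vec::new(), Compression::default());
    let first = compress_chunk(&mut encoder, b"hello ")?;
    let second = compress_chunk(&mut encoder, b"world")?;
    assert!(!first.is_empty() && !second.is_empty());
    Ok(())
}
```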
Co-authored-by: bartlomieju <bartlomieju@users.noreply.github.com>
Co-authored-by: denobot <33910674+denobot@users.noreply.github.com>
Co-authored-by: bartlomieju <bartlomieju@users.noreply.github.com>
This commit introduces two new buffer wrapper types to `deno_core`. The
main benefit of these new wrappers is that they can wrap a number of
different underlying buffer types. This allows for a more flexible read
and write API on resources that requires less copying of data between
different buffer representations.
- `BufView` is a read-only view onto a buffer. It can be backed by
`ZeroCopyBuf`, `Vec<u8>`, and `bytes::Bytes`.
- `BufViewMut` is a read-write view onto a buffer. It can be cheaply
converted into a `BufView`. It can be backed by `ZeroCopyBuf` or
`Vec<u8>`.
Both new buffer views have a cursor: the start point of the view can be
constrained so that reads and writes operate on just a slice of the
view. Only the start point of the slice can be adjusted; the end point
is fixed. To adjust the end point, the underlying buffer needs to be
truncated.
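A simplified sketch of the concept (illustrative names only, not the actual `deno_core` definitions):

```rust
use bytes::Bytes;

// A read-only view over one of several backing buffers, plus a cursor that
// can only move the start of the view forward. (deno_core's real BufView can
// also be backed by a ZeroCopyBuf.)
enum Backing {
    Bytes(Bytes),
    Vec(Vec<u8>),
}

struct BufViewSketch {
    backing: Backing,
    cursor: usize,
}

impl BufViewSketch {
    fn advance_cursor(&mut self, n: usize) {
        // The start point moves forward; the end point stays fixed.
        assert!(self.cursor + n <= self.full_len());
        self.cursor += n;
    }

    fn full_len(&self) -> usize {
        match &self.backing {
            Backing::Bytes(b) => b.len(),
            Backing::Vec(v) => v.len(),
        }
    }
}

impl std::ops::Deref for BufViewSketch {
    type Target = [u8];
    fn deref(&self) -> &[u8] {
        let all: &[u8] = match &self.backing {
            Backing::Bytes(b) => b,
            Backing::Vec(v) => v,
        };
        &all[self.cursor..]
    }
}

fn main() {
    let mut view = BufViewSketch {
        backing: Backing::Bytes(Bytes::from_static(b"hello world")),
        cursor: 0,
    };
    view.advance_cursor(6);
    assert_eq!(&*view, b"world");
}
```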
Readable resources have been changed to better cater to resources that
do not support BYOB reads. The basic `read` method now returns a
`BufView` instead of taking a `ZeroCopyBuf` to fill. This allows the
operation to return buffers that the resource has already allocated,
instead of forcing the caller to allocate the buffer. BYOB reads are
still very useful for resources that support them, so a new `read_byob`
method has been added that takes a `BufViewMut` to fill. `op_read` uses
`read_byob` when the resource supports it, and falls back to `read`
(performing an additional copy) when it does not. For Rust->JS reads
this change should have no impact, but for Rust->Rust reads it allows
the caller to avoid an additional copy in many scenarios. Combined with
the ability to back a `BufView` with `bytes::Bytes`, this lets us avoid
one data copy when piping from a `fetch` response into an `ext/http`
response.
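Roughly, the fallback works like the sketch below, using a hypothetical synchronous stand-in for the async `Resource` API:

```rust
use std::io;

// Hypothetical stand-in for the relevant parts of deno_core's Resource trait;
// the real methods are async and operate on BufView / BufViewMut.
trait ReadableSketch {
    fn supports_byob(&self) -> bool;
    /// Plain read: the resource returns a buffer it already owns/allocated.
    fn read(&mut self, limit: usize) -> io::Result<Vec<u8>>;
    /// BYOB read: the resource fills the caller-provided buffer directly.
    fn read_byob(&mut self, buf: &mut [u8]) -> io::Result<usize>;
}

// Roughly what op_read does: prefer the BYOB path, otherwise fall back to a
// plain read and pay one extra copy into the caller's buffer.
fn read_into(res: &mut dyn ReadableSketch, out: &mut [u8]) -> io::Result<usize> {
    if res.supports_byob() {
        return res.read_byob(out);
    }
    let view = res.read(out.len())?;
    let n = view.len().min(out.len());
    out[..n].copy_from_slice(&view[..n]); // the copy the BYOB path avoids
    Ok(n)
}

// Minimal demo resource that does not support BYOB reads.
struct ChunkSource {
    data: Vec<u8>,
}

impl ReadableSketch for ChunkSource {
    fn supports_byob(&self) -> bool {
        false
    }
    fn read(&mut self, limit: usize) -> io::Result<Vec<u8>> {
        let n = limit.min(self.data.len());
        Ok(self.data.drain(..n).collect())
    }
    fn read_byob(&mut self, _buf: &mut [u8]) -> io::Result<usize> {
        unreachable!("this resource does not support BYOB reads")
    }
}

fn main() -> io::Result<()> {
    let mut src = ChunkSource { data: b"hello".to_vec() };
    let mut out = [0u8; 3];
    let n = read_into(&mut src, &mut out)?;
    assert_eq!(&out[..n], b"hel");
    Ok(())
}
```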
Writable resources have been changed to take a `BufView` instead of a
`ZeroCopyBuf` as an argument. This allows for less copying of data in
certain scenarios, as described above. Additionally, a new
`Resource::write_all` method has been added that takes a `BufView` and
repeatedly writes to the resource until the entire buffer has been
written. Certain resources, such as files, can override this method to
provide a more efficient `write_all` implementation.
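A sketch of the default `write_all` behaviour, again with a hypothetical synchronous stand-in (the real method is async and advances a `BufView` cursor rather than re-slicing):

```rust
use std::io;

// Hypothetical stand-in for the writable side of deno_core's Resource trait.
trait WritableSketch {
    /// Returns how many bytes from the front of `buf` were accepted.
    fn write(&mut self, buf: &[u8]) -> io::Result<usize>;
}

// Default write_all: keep writing and advancing past the accepted bytes
// until the whole buffer has been written.
fn write_all(res: &mut dyn WritableSketch, mut buf: &[u8]) -> io::Result<()> {
    while !buf.is_empty() {
        let n = res.write(buf)?;
        if n == 0 {
            return Err(io::Error::new(io::ErrorKind::WriteZero, "failed to write whole buffer"));
        }
        buf = &buf[n..]; // corresponds to advancing the BufView cursor
    }
    Ok(())
}

// Demo sink that accepts at most 4 bytes per call.
struct Sink(Vec<u8>);

impl WritableSketch for Sink {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        let n = buf.len().min(4);
        self.0.extend_from_slice(&buf[..n]);
        Ok(n)
    }
}

fn main() -> io::Result<()> {
    let mut sink = Sink(Vec::new());
    write_all(&mut sink, b"hello world")?;
    assert_eq!(sink.0, b"hello world");
    Ok(())
}
```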
This is the release commit being forwarded back to main for 1.26.1
Please ensure:
- [x] Everything looks ok in the PR
- [x] The release has been published
To make edits to this PR:
```shell
git fetch upstream forward_v1.26.1 && git checkout -b forward_v1.26.1 upstream/forward_v1.26.1
```
Don't need this PR? Close it.
cc @cjihrig
Co-authored-by: cjihrig <cjihrig@users.noreply.github.com>
This commit adds a fast path to `Request` and `Response` that
makes consuming request bodies much faster when using `Body#text`,
`Body#arrayBuffer`, and `Body#blob`, if the body is a FastStream.
Because the response bodies for `fetch` are FastStream, this speeds up
consuming `fetch` response bodies significantly.
We can use `Resource::read_return` and `op_read` instead. This allows
HTTP request bodies to participate in FastStream.
To make this work, `readableStreamForRid` required a change to allow
non-auto-closing resources to be handled. This required some minor changes
in our FastStream paths in ext/http and ext/flash.
This commit splits `Deno.upgradeHttp` into two different APIs, because
the same API is currently overloaded with two different functions. Flash
requests upgrade immediately, with no need to return a `Response`
object. Instead you have to manually write the response to the socket.
Hyper requests only upgrade once a `Response` object has been sent.
These two behaviours are now split into `Deno.upgradeHttp` and
`Deno.upgradeHttpRaw`. The latter is flash only. The former only
supports hyper requests at the moment, but can be updated to support
flash in the future.
Additionally, this removes `void | Promise<void>` as valid return types
for the handler function. If one wants to use `Deno.upgradeHttpRaw`,
they will have to type cast the handler signature; the signature is
meant for the 99.99%, and should not be complicated for the 0.01% that
use `Deno.upgradeHttpRaw()`.
Welcome to better optimised op calls! Currently `opSync` is called with parameters of every type and count. This most definitely makes the call megamorphic. Additionally, it seems that spread params lead to V8 not being able to optimise the calls quite as well (apparently Fast Calls cannot be used with spread params).
Monomorphising op calls should lead to some improved performance. Now that unwrapping of sync op results is done on the Rust side, this is pretty simple:
```js
opSync("op_foo", param1, param2);
// -> turns to
ops.op_foo(param1, param2);
```
This means sync op calls are now just directly calling the native binding function. When V8 Fast API Calls are enabled, this will enable those to be called on the optimised path.
Monomorphising async ops likely requires using callbacks and is left as an exercise to the reader.
Stream shutdown wasn't happening correctly (it has been moved to call `op_http_shutdown`), and extra zeroed bytes were being sent when the body length was not a multiple of 64*1024.
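A minimal sketch of the second fix (helper names are hypothetical; the 64*1024 buffer size comes from the description above):

```rust
// `fill` stands in for whatever reads body bytes into the buffer and
// returns how many bytes were produced.
fn next_chunk(fill: impl FnOnce(&mut [u8]) -> usize) -> Vec<u8> {
    let mut buf = vec![0u8; 64 * 1024];
    let nread = fill(&mut buf);
    buf.truncate(nread); // without this, the zero padding after `nread` is sent too
    buf
}

fn main() {
    let chunk = next_chunk(|buf| {
        buf[..5].copy_from_slice(b"hello");
        5
    });
    assert_eq!(chunk, b"hello");
}
```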
This commit adds "Deno.upgradeHttp" API, which
allows to "hijack" connection and switch protocols, to eg.
implement WebSocket required for Node compat.
Co-authored-by: crowlkats <crowlkats@toaxl.com>
Co-authored-by: Ryan Dahl <ry@tinyclouds.org>
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
GET/HEAD requests can't have bodies according to the `fetch` spec. This
commit changes the HTTP server to hide request bodies for requests with
GET or HEAD methods.
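A minimal sketch of the rule, using the `http` crate's `Method` type (the actual ext/http code may differ):

```rust
use http::Method;

// Per the fetch spec, GET and HEAD requests never expose a body.
fn method_can_have_body(method: &Method) -> bool {
    *method != Method::GET && *method != Method::HEAD
}

fn main() {
    assert!(!method_can_have_body(&Method::GET));
    assert!(method_can_have_body(&Method::POST));
}
```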
Avoid "blob:" prefix check on requests built in the http module since those can never be blob objects
Reduces cost of `newInnerRequest()` from 20ms to 0.1ms in my profiled run on ~2.5M reqs
Our oneshot receiver in `HyperService::call` would unwrap and panic: the `.await` on the oneshot receiver yields an error when the sender is dropped.
The sender is dropped in `op_http_response` because:
1. We take `ResponseSenderResource`
2. Then get `ConnResource` and early exit on failure (conn already closed)
3. The taken sender then gets dropped in this early exit before any response is sent over the channel
Falling back to returning a dummy response to hyper seems like a fine quick fix.
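A minimal reproduction of the failure mode and the quick fix with tokio's oneshot channel (illustrative names, not the actual ext/http code):

```rust
use tokio::sync::oneshot;

// Awaiting a oneshot receiver yields Err once the sender is dropped without
// sending; `.unwrap()` on that Err is what used to panic. Matching on the
// result and returning a dummy/fallback response avoids the panic.
async fn respond(rx: oneshot::Receiver<String>) -> String {
    match rx.await {
        Ok(response) => response,
        // Sender dropped, e.g. the connection resource was already closed.
        Err(_) => String::from("dummy response"),
    }
}

#[tokio::main]
async fn main() {
    let (tx, rx) = oneshot::channel::<String>();
    drop(tx); // simulates op_http_response taking the sender and bailing early
    assert_eq!(respond(rx).await, "dummy response");
}
```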
Check for the expected headers more rigorously and check that it's an
HTTP/1.1 GET request. The logic mirrors what Deno Deploy and the
tungstenite crate do.
The presence of "Sec-Websocket-Version: 13" is now also enforced.
I don't expect that to break anything: conforming clients already
send it and tungstenite can't talk to older clients anyway.
The new code is more efficient because it heap-allocates less, and it
now aligns more closely with the checks in ext/http/01_http.js.
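A rough sketch of the kind of checks described above, written against the `http` crate (the exact logic in ext/http differs in details):

```rust
use http::{header, Method, Request, Version};

// HTTP/1.1 GET with `Connection: Upgrade`, `Upgrade: websocket`, and
// `Sec-WebSocket-Version: 13`.
fn is_websocket_upgrade<B>(req: &Request<B>) -> bool {
    let header_has = |name: header::HeaderName, needle: &str| {
        req.headers()
            .get(name)
            .and_then(|v| v.to_str().ok())
            .map(|v| v.to_ascii_lowercase().contains(needle))
            .unwrap_or(false)
    };

    *req.method() == Method::GET
        && req.version() == Version::HTTP_11
        && header_has(header::CONNECTION, "upgrade")
        && header_has(header::UPGRADE, "websocket")
        && req
            .headers()
            .get(header::SEC_WEBSOCKET_VERSION)
            .and_then(|v| v.to_str().ok())
            == Some("13")
}

fn main() {
    let req = Request::builder()
        .method(Method::GET)
        .uri("/ws")
        .header(header::CONNECTION, "Upgrade")
        .header(header::UPGRADE, "websocket")
        .header(header::SEC_WEBSOCKET_VERSION, "13")
        .body(())
        .unwrap();
    assert!(is_websocket_upgrade(&req));
}
```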
This commit adds a test case for the "Http: connection closed before
message completed" error, as well as fixing an edge case with a
resource leak when the error is raised.
* refactor(ops): return BadResource errors in ResourceTable calls
Instead of relying on callers to map Options to Results via `.ok_or_else(bad_resource_id)` at over 176 different call sites ...
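A self-contained miniature of the idea with hypothetical types (`deno_core`'s real `ResourceTable` is more involved): the table returns a `Result` carrying the bad-resource error, so call sites can use `?` directly.

```rust
use std::collections::HashMap;
use std::rc::Rc;

#[derive(Debug)]
struct BadResource;

struct MiniResourceTable<T> {
    entries: HashMap<u32, Rc<T>>,
}

impl<T> MiniResourceTable<T> {
    // Before: callers got an Option back and wrote
    //   table.get(rid).ok_or_else(bad_resource_id)?
    // After: the error is produced here, once.
    fn get(&self, rid: u32) -> Result<Rc<T>, BadResource> {
        self.entries.get(&rid).cloned().ok_or(BadResource)
    }
}

fn resource_len(table: &MiniResourceTable<String>, rid: u32) -> Result<usize, BadResource> {
    let res = table.get(rid)?; // `?` is now enough at every call site
    Ok(res.len())
}

fn main() {
    let mut entries = HashMap::new();
    entries.insert(3, Rc::new(String::from("hello")));
    let table = MiniResourceTable { entries };
    assert_eq!(resource_len(&table, 3).unwrap(), 5);
    assert!(resource_len(&table, 4).is_err());
}
```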
cleanup(ext/http): simplify cookie header handling
Use `Vec::join` instead of essentially reimplementing it. There should be no meaningful performance delta
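For illustration (hypothetical names), the simplification amounts to:

```rust
// Join the already-formatted cookie pairs with the standard library's
// slice join instead of hand-rolling the loop.
fn cookie_header(pairs: &[String]) -> String {
    pairs.join("; ")
}

fn main() {
    let pairs = vec!["a=1".to_string(), "b=2".to_string()];
    assert_eq!(cookie_header(&pairs), "a=1; b=2");
}
```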