This commit adds "Deno.Conn.ref()" and "Deno.Conn.unref()" methods.
These methods can be used to make a connection block or not block the
event loop from finishing. Refing/unrefing only influences "read"
operations - i.e. scheduled writes to a connection _do_ keep the event
loop alive.
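A minimal sketch of the intended usage (the port and accept logic are illustrative):

```ts
// Sketch of the intended usage; the port is illustrative.
const listener = Deno.listen({ port: 8080 });
const conn = await listener.accept();

// Pending reads on this connection no longer keep the event loop alive.
conn.unref();

// Restore the default behaviour: pending reads keep the event loop alive again.
conn.ref();
```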
Required for https://github.com/denoland/deno/issues/16710
Previously, an errored streaming response body did not cause the HTTP
stream to be aborted. Instead the stream was closed gracefully, which
meant that the client could not tell the difference between a
successful response and an errored one.
This commit fixes the issue by aborting the stream on error.
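A minimal sketch of the kind of response body this affects (the payload and error are illustrative):

```ts
// Sketch: a streaming response body that errors mid-stream. With this fix the
// underlying HTTP stream is aborted, so the client observes a failed request
// rather than a truncated but seemingly successful response.
const body = new ReadableStream<Uint8Array>({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("partial data"));
    controller.error(new Error("upstream failure"));
  },
});
const response = new Response(body);
```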
Right now an error in a request body stream causes an uncatchable
global promise rejection. This PR fixes that by propagating the
error into the promise returned from `fetch`.
It additionally fixes errored readable stream bodies being treated as
successfully completed bodies by Rust.
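A rough sketch of the behaviour this enables (the URL is a placeholder):

```ts
// Sketch: a request body stream that errors before completing. The error is
// now catchable on the promise returned from fetch() instead of becoming an
// uncatchable global rejection. The URL is a placeholder.
const body = new ReadableStream<Uint8Array>({
  pull(controller) {
    controller.error(new Error("upload source failed"));
  },
});

try {
  await fetch("https://example.com/upload", { method: "POST", body });
} catch (err) {
  console.error("request failed:", err);
}
```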
In our `require()` implementation we use special logic to resolve the
"base path" when looking for matching packages; however, this logic
contradicts what needs to happen when a local "node_modules" directory
is used. This commit makes the require implementation aware of whether
we're running off of the global node modules cache or a local one.
- [x] `dlfcn.rs` - `dlopen()`-related code.
- [x] `turbocall.rs` - Call trampoline JIT compiler.
- [x] `repr.rs` - Pointer representation. Home of the `UnsafePointerView`
ops.
- [x] `symbol.rs` - Function symbol related code.
- [x] `callback.rs` - Home of `Deno.UnsafeCallback` ops.
- [x] `ir.rs` - Intermediate representation for values. Home of the
`NativeValue` type.
- [x] `call.rs` - Generic call ops. Home to everything related to
calling FFI symbols.
- [x] `static.rs` - Static symbol support.
I find it easier to work with this setup. I eventually want to expand
TurboCall to unroll the type conversion loop in generic calls, generate
code for individual symbols (lazy function pointers), etc.
Previously the inner request object of the original and the new request
were the same, causing the requests to be entangled: mutations to one
were visible in the other. This fixes that.
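A minimal sketch of the now-expected behaviour (the URL and header name are illustrative):

```ts
// Sketch: constructing a new Request from an existing one now yields an
// independent inner request, so mutations to one are not visible in the other.
const original = new Request("https://example.com/", {
  method: "POST",
  headers: { "x-debug": "1" },
});
const copy = new Request(original);

copy.headers.delete("x-debug");
console.log(original.headers.has("x-debug")); // true; the original is untouched
```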
Fixes https://github.com/denoland/deno/issues/16934
Example compiler error:
```
error: mutable opstate is not supported in async ops
   --> core/ops_builtin.rs:122:1
    |
122 | #[op]
    | ^^^^^
    |
    = note: this error originates in the attribute macro `op` (in Nightly builds, run with -Z macro-backtrace for more info)
```
Uses SeqOneByteString optimization to do zero-copy `&str` arguments in
fast calls.
- [x] Depends on https://github.com/denoland/rusty_v8/pull/1129
- [x] Depends on
https://chromium-review.googlesource.com/c/v8/v8/+/4036884
- [x] Disable in async ops
- [x] Make it work with owned `String` with an extra alloc in fast path.
- [x] Support `Cow<'_, str>`: `Owned` for the slow case, `Borrowed` for the
fast case.
```rust
#[op]
fn op_string_len(s: &str) -> u32 {
  s.len() as u32
}
```
* Introduces `ReadableStreamDefaultReadResult` and modifies
`ReadableStreamDefaultReader.read` to return this type (closes #15269).
* Adds the missing `ReadableStreamBYOBReader` constructor.
* Removes the nonexistent `ReadableStreamReader` class.
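A small sketch of the updated reader typing in use:

```ts
// Sketch: ReadableStreamDefaultReader.read() now returns a
// ReadableStreamDefaultReadResult, which narrows on the `done` flag.
const stream = new ReadableStream<string>({
  start(controller) {
    controller.enqueue("hello");
    controller.close();
  },
});

const reader = stream.getReader();
const result: ReadableStreamDefaultReadResult<string> = await reader.read();
if (!result.done) {
  console.log(result.value); // "hello"
}
```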
Currently, the slow call path will always create a dangling pointer to
replace a null pointer when called with e.g. a `new Uint8Array()`
parameter, which V8 initialises as a null-pointer-backed buffer.
However, the fast call path never changes the pointer value and will
thus expose a null pointer. As a result, the pointer value that a
native call sees coming from Deno can change between two sequential
invocations of the same function with the exact same parameters.
Since null pointers can be quite important, and `Uint8Array` is the
chosen fast path for Deno FFI `"buffer"` parameters, I think it is
fairly important that the null pointer be properly exposed to the native
code. Thus this PR.
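A rough sketch of the situation, assuming a hypothetical library and symbol:

```ts
// Hypothetical sketch: "./lib.so" and the "method" symbol are stand-ins.
// An empty Uint8Array is backed by a null data pointer in V8, so the native
// side should consistently observe NULL for the "buffer" argument on both
// the slow and the fast call path.
const lib = Deno.dlopen("./lib.so", {
  method: { parameters: ["buffer"], result: "void" },
} as const);

lib.symbols.method(new Uint8Array());
```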
### `*mut c_void`
While here, I also changed the type of our pointer values to `*mut
c_void`. This is mainly because JS buffers are always `*mut`, and
because we offer a way to turn a pointer into a JS `ArrayBuffer`
(`op_ffi_get_buf`) which is read-write. I'm not exactly sure which way
we should really go here: we have pointers that are definitely mut, but
we also cannot assume all of our pointers are. So, do we go with the
maxima or the minima?
### `optimisedCall(new Uint8Array())`
V8 seems to have a bug where an optimised function called with a newly
created empty `Uint8Array` (no argument or 0) does not see a null data
pointer but instead some stable pointer, perhaps pointing to an
internal null backing store. The pointer value is also an odd (not
even) number, so it might specifically be a tagged pointer. This will
probably be an issue for some users, if they try to use e.g.
`method(cstr("something"), new Uint8Array())` as a way to do a fast call
to `method` with a null pointer as the second parameter.
If, instead of a `new Uint8Array()`, the user uses some `const
NULL = new Uint8Array()` where the `NULL` buffer has been passed to a
slow call previously, then the fast call will properly see a null
pointer.
I'll take this up with some V8 engineers to see if this couldn't be
fixed.
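A sketch of the workaround described above; the library, symbol, and `cstr` helper are hypothetical stand-ins:

```ts
// Hypothetical sketch of the workaround: reuse a buffer that has already been
// seen by a slow call instead of constructing `new Uint8Array()` inline.
const lib2 = Deno.dlopen("./lib.so", {
  method: { parameters: ["buffer", "buffer"], result: "void" },
} as const);

// Assumed helper producing a NUL-terminated UTF-8 buffer for C string params.
function cstr(text: string): Uint8Array {
  return new TextEncoder().encode(text + "\0");
}

const NULL = new Uint8Array();
// Once NULL has been through a slow call, subsequent fast calls with the
// same buffer properly see a null pointer for the second parameter.
lib2.symbols.method(cstr("something"), NULL);
```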
…ed promises in mind (#16616)"
This reverts commit fd023cf793.
There are reports saying that Vite is often hanging in 1.28.2, and this
is the only PR that changed something in the HTTP server. I think we
should hold off on trying to fix this and instead focus on #16787.
CC @magurotuna
This PR undoes the revert commit made in #16610, bringing back #16383,
which attempts to fix the issue that occurs when we use the flash server
with the `--watch` option enabled.
Also, some code changes are made to pass the regression test added in
#16610.
For CommonJS packages we were not trying different extensions for files
specified as a subpath of the package (`[package_name]/[subpath]`).
This commit fixes that.
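A minimal sketch of the kind of specifier this affects; the package name and subpath are made up, and the snippet is CommonJS code inside an npm package run by Deno:

```ts
// Hypothetical sketch: with this fix, a subpath specifier without an extension
// also probes the usual CommonJS extensions (.js, .json, .node), so this
// resolves to e.g. node_modules/some-package/lib/util.js.
const util = require("some-package/lib/util");
```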
**This patch**
```
benchmark      time (avg)              (min … max)       p75       p99      p995
------------------------------------------------- -----------------------------
echo deno     23.99 ms/iter  (22.51 ms … 33.61 ms)  23.97 ms  33.61 ms  33.61 ms
cat 16kb      24.27 ms/iter   (22.5 ms … 35.21 ms)   24.2 ms  35.21 ms  35.21 ms
cat 1mb       25.88 ms/iter  (25.04 ms … 30.28 ms)  26.12 ms  30.28 ms  30.28 ms
cat 15mb      38.41 ms/iter      (35.7 ms … 50 ms)  38.31 ms     50 ms     50 ms
```
**main**
```
benchmark      time (avg)              (min … max)       p75       p99      p995
------------------------------------------------- -----------------------------
echo deno     35.66 ms/iter  (34.53 ms … 41.84 ms)  35.79 ms  41.84 ms  41.84 ms
cat 16kb      35.99 ms/iter  (34.52 ms … 44.94 ms)  36.05 ms  44.94 ms  44.94 ms
cat 1mb       38.68 ms/iter  (36.67 ms … 50.44 ms)  37.95 ms  50.44 ms  50.44 ms
cat 15mb       48.4 ms/iter  (46.19 ms … 58.41 ms)  49.16 ms  58.41 ms  58.41 ms
```