V8's JIT can do a better job knowing the argument count, and this also
enables the fast call path (in the future).
This also lets us call async ops without `opAsync`:
```js
const { ops } = Deno.core;
await ops.op_void_async();
```
this patch: 4405286 ops/sec
main: 3508771 ops/sec
This commit adds a new `op_write_all` op to core that allows writing an
entire chunk in a single async op call. Internally this calls
`Resource::write_all`.
`writableStreamForRid` has been moved to `06_streams.js` and now uses
this new op. Various other code paths also use this new op.
Closes #16227
This commit introduces two new buffer wrapper types to `deno_core`. The
main benefit of these new wrappers is that they can wrap a number of
different underlying buffer types. This allows for a more flexible read
and write API on resources that will require less copying of data
between different buffer representations.
- `BufView` is a read-only view onto a buffer. It can be backed by
`ZeroCopyBuf`, `Vec<u8>`, and `bytes::Bytes`.
- `BufViewMut` is a read-write view onto a buffer. It can be cheaply
converted into a `BufView`. It can be backed by `ZeroCopyBuf` or
`Vec<u8>`.
Both new buffer views have a cursor: the start point of the view can be
advanced so that reads and writes operate on just a slice of the view.
Only the start point of the slice can be adjusted; the end point is
fixed. To adjust the end point, the underlying buffer needs to be
truncated.
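To make the cursor semantics concrete, here is a minimal, self-contained sketch of the idea; the type and method names are illustrative stand-ins, not the actual `deno_core` API.

```rust
// Illustrative only: a simplified stand-in for the cursor behaviour
// described above, not the real BufView from deno_core.
struct View {
  buf: Vec<u8>, // could just as well be bytes::Bytes or a ZeroCopyBuf
  cursor: usize,
}

impl View {
  fn new(buf: Vec<u8>) -> Self {
    Self { buf, cursor: 0 }
  }

  // The visible slice starts at the cursor; the end point is fixed.
  fn as_slice(&self) -> &[u8] {
    &self.buf[self.cursor..]
  }

  // The start point can only be moved forward.
  fn advance_cursor(&mut self, n: usize) {
    self.cursor = (self.cursor + n).min(self.buf.len());
  }

  // Shrinking the view from the end means truncating the underlying buffer.
  fn truncate(&mut self, len: usize) {
    self.buf.truncate(self.cursor + len);
  }
}

fn main() {
  let mut view = View::new(b"hello world".to_vec());
  view.advance_cursor(6);
  assert_eq!(view.as_slice(), b"world");
  view.truncate(3); // adjust the end point via the underlying buffer
  assert_eq!(view.as_slice(), b"wor");
}
```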
Readable resources have been changed to better cater to resources that
do not support BYOB reads. The basic `read` method now returns a
`BufView` instead of taking a `ZeroCopyBuf` to fill. This allows the
operation to return buffers that the resource has already allocated,
instead of forcing the caller to allocate the buffer. BYOB reads are
still very useful for resources that support them, so a new `read_byob`
method has been added that takes a `BufViewMut` to fill. `op_read`
uses `read_byob` if the resource supports it, and otherwise falls back
to `read`, performing an additional copy. For
Rust->JS reads this change should have no impact, but for Rust->Rust
reads, this allows the caller to avoid an additional copy in many
scenarios. This combined with the support for `BufView` to be backed by
`bytes::Bytes` allows us to avoid one data copy when piping from a
`fetch` response into an `ext/http` response.
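As a rough sketch of that fallback logic (hypothetical, synchronous stand-ins rather than the real async resource methods):

```rust
// Illustrative only: hypothetical synchronous stand-ins for the async
// `read` / `read_byob` methods on readable resources.
trait ReadableResource {
  // Basic read: the resource returns a buffer it has already allocated.
  fn read(&mut self, limit: usize) -> std::io::Result<Vec<u8>>;

  // Optional BYOB read: fill the caller-provided buffer directly.
  // `None` signals that the resource does not support BYOB reads.
  fn read_byob(&mut self, _buf: &mut [u8]) -> Option<std::io::Result<usize>> {
    None
  }
}

// Roughly the shape of op_read's dispatch: prefer BYOB, otherwise fall
// back to `read` plus one extra copy into the caller's buffer.
fn op_read_like(
  res: &mut dyn ReadableResource,
  out: &mut [u8],
) -> std::io::Result<usize> {
  if let Some(result) = res.read_byob(out) {
    return result; // no extra copy
  }
  let chunk = res.read(out.len())?; // resource-allocated buffer
  let n = chunk.len().min(out.len());
  out[..n].copy_from_slice(&chunk[..n]); // the additional copy
  Ok(n)
}

struct Echo(Vec<u8>);

impl ReadableResource for Echo {
  fn read(&mut self, limit: usize) -> std::io::Result<Vec<u8>> {
    let n = limit.min(self.0.len());
    Ok(self.0.drain(..n).collect())
  }
}

fn main() -> std::io::Result<()> {
  let mut res = Echo(b"hi".to_vec());
  let mut out = [0u8; 8];
  let n = op_read_like(&mut res, &mut out)?;
  assert_eq!(&out[..n], b"hi");
  Ok(())
}
```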
Writable resources have been changed to take a `BufView` instead of a
`ZeroCopyBuf` as an argument. This allows for less copying of data in
certain scenarios, as described above. Additionally a new
`Resource::write_all` method has been added that takes a `BufView` and
repeatedly writes to the resource until the entire buffer has been
written. Certain resources, such as files, can override this method to
provide a more efficient `write_all` implementation.
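The default behaviour can be pictured like this; again a simplified, synchronous sketch with hypothetical types rather than the real async `Resource::write_all`:

```rust
// Illustrative only: hypothetical synchronous stand-ins for the async
// `write` / `write_all` methods described above.
trait WritableResource {
  // Partial write: returns how many bytes were accepted.
  fn write(&mut self, buf: &[u8]) -> std::io::Result<usize>;

  // Default write_all: keep writing and advancing past the written prefix
  // (the real version advances the BufView cursor) until nothing remains.
  // A file-backed resource could override this with something smarter.
  fn write_all(&mut self, buf: &[u8]) -> std::io::Result<()> {
    let mut remaining = buf;
    while !remaining.is_empty() {
      let n = self.write(remaining)?;
      if n == 0 {
        return Err(std::io::ErrorKind::WriteZero.into());
      }
      remaining = &remaining[n..];
    }
    Ok(())
  }
}

// A resource that only accepts one byte per write call.
struct TrickleSink(Vec<u8>);

impl WritableResource for TrickleSink {
  fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
    match buf.first() {
      Some(&b) => {
        self.0.push(b);
        Ok(1)
      }
      None => Ok(0),
    }
  }
}

fn main() -> std::io::Result<()> {
  let mut sink = TrickleSink(Vec::new());
  sink.write_all(b"chunk")?; // loops under the hood, one byte at a time
  assert_eq!(sink.0.as_slice(), b"chunk");
  Ok(())
}
```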
Welcome to better optimised op calls! Currently `opSync` is called with parameters of every type and count, which most definitely makes the call site megamorphic. Additionally, it seems that spread params lead to V8 not being able to optimise the calls quite as well (apparently Fast Calls cannot be used with spread params).
Monomorphising op calls should lead to some improved performance. Now that unwrapping of sync op results is done on the Rust side, this is pretty simple:
```js
opSync("op_foo", param1, param2);
// -> turns to
ops.op_foo(param1, param2);
```
This means sync op calls now directly invoke the native binding function. When V8 Fast API Calls are enabled, this will allow them to be called on the optimised path.
Monomorphising async ops likely requires using callbacks and is left as an exercise to the reader.
Streamlines a common middleware pattern and provides foundations for avoiding variably sized `v8::ExternalReferences` and enabling fully monomorphic op call paths.
This allows resources to act as "streams" by implementing read/write/shutdown. These streams are implicit, since their nature (read/write/duplex) isn't known until the methods are called, but we could easily add another method to explicitly tag resources as streams.
`op_read/op_write/op_shutdown` are now builtin ops provided by `deno_core`
Note: the current implementation is simple and straightforward, but it results in an additional alloc per read/write call.
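A loose sketch of the idea, using hypothetical synchronous signatures rather than the real async trait methods on `deno_core::Resource`: a resource becomes a stream simply by overriding `read`/`write`/`shutdown`, which otherwise report "not supported".

```rust
// Illustrative only: hypothetical synchronous stand-ins for the builtin
// stream methods that back op_read / op_write / op_shutdown.
use std::io;

trait StreamishResource {
  // Default implementations report "not supported"; overriding them is
  // what implicitly turns a resource into a readable/writable stream.
  fn read(&mut self, _buf: &mut [u8]) -> io::Result<usize> {
    Err(io::ErrorKind::Unsupported.into())
  }
  fn write(&mut self, _buf: &[u8]) -> io::Result<usize> {
    Err(io::ErrorKind::Unsupported.into())
  }
  fn shutdown(&mut self) -> io::Result<()> {
    Err(io::ErrorKind::Unsupported.into())
  }
}

// A duplex in-memory resource: overriding read and write is all it takes.
struct Pipe {
  data: Vec<u8>,
}

impl StreamishResource for Pipe {
  fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
    let n = buf.len().min(self.data.len());
    buf[..n].copy_from_slice(&self.data[..n]);
    self.data.drain(..n);
    Ok(n)
  }
  fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
    self.data.extend_from_slice(buf);
    Ok(buf.len())
  }
}

fn main() -> io::Result<()> {
  let mut pipe = Pipe { data: Vec::new() };
  pipe.write(b"ping")?;
  let mut out = [0u8; 8];
  let n = pipe.read(&mut out)?;
  assert_eq!(&out[..n], b"ping");
  // shutdown() still reports "not supported" because Pipe didn't override it.
  assert!(pipe.shutdown().is_err());
  Ok(())
}
```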
Closes #12556
refactor(ops): return BadResource errors in ResourceTable calls
Instead of relying on callers to map Options to Results via `.ok_or_else(bad_resource_id)` at over 176 different call sites, the `ResourceTable` methods now return `Result`s that carry `BadResource` errors themselves.
This commit renames "JsRuntime::execute" to "JsRuntime::execute_script". The
same rename was applied to the corresponding methods on "deno_runtime::Worker"
and "deno_runtime::WebWorker".
A new macro called "located_script_name" was added to "deno_core"; it
returns the name of the Rust file along with the line and column number of
the call site. This macro is useful in combination with
"JsRuntime::execute_script" and makes it possible to accurately pinpoint
where "one-off" JavaScript scripts are executed by internal runtime
functions.
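A sketch of how the macro pairs with "JsRuntime::execute_script"; the exact signatures have varied across deno_core versions, so treat this as illustrative rather than copy-pasteable:

```rust
// Sketch only: signatures are approximate for the deno_core of this era.
use deno_core::{located_script_name, JsRuntime, RuntimeOptions};

fn main() -> Result<(), deno_core::error::AnyError> {
  let mut runtime = JsRuntime::new(RuntimeOptions::default());
  // located_script_name!() expands to a script name derived from the Rust
  // file, line and column of this call site, so errors thrown by this
  // one-off script point back at the Rust code that executed it.
  runtime.execute_script(
    &located_script_name!(),
    "Deno.core.print('bootstrapped\\n');",
  )?;
  Ok(())
}
```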
Co-authored-by: Nayeem Rahman <nayeemrmn99@gmail.com>
This commit moves implementation of "JsRuntimeInspector" to "deno_core" crate.
To achieve that, the following changes were made:
* "Worker" and "WebWorker" no longer own an instance of "JsRuntimeInspector";
it is now owned by "deno_core::JsRuntime".
* Consequently, polling of the inspector is no longer done in "Worker"/"WebWorker";
instead it's done in "deno_core::JsRuntime::poll_event_loop".
* "deno_core::JsRuntime::poll_event_loop" and "deno_core::JsRuntime::run_event_loop"
now accept a "wait_for_inspector" boolean that tells whether the event loop should
remain "pending" while there are active inspector sessions - this change fixes the
problem where the inspector disconnects from the frontend and the process exits once
the code has stopped executing.
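For illustration, a minimal sketch of the new flag (signatures are approximate for the deno_core of this era, and the tokio setup is just one way to drive the future):

```rust
// Sketch only: an empty runtime whose event loop resolves immediately.
use deno_core::{JsRuntime, RuntimeOptions};

fn main() -> Result<(), deno_core::error::AnyError> {
  let tokio_rt = tokio::runtime::Builder::new_current_thread()
    .enable_all()
    .build()?;
  let mut js_runtime = JsRuntime::new(RuntimeOptions::default());
  // wait_for_inspector = false: the event loop resolves as soon as nothing
  // is pending. Passing true keeps it "pending" while inspector sessions
  // are attached, so a debugger isn't cut off when the code stops executing.
  tokio_rt.block_on(js_runtime.run_event_loop(false))?;
  Ok(())
}
```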
Even though bootstrapping the JS runtime is low level, it's an abstraction leak of
core to require users to call `Deno.core.ops()` in JS space.
So instead we're introducing a `JsRuntime::sync_ops_cache()` method. Once we have
runtime extensions, a new runtime will ensure the ops cache is set up (for the
provided extensions), and then loading/unloading plugins should be the only
operations that require op cache syncs.
- register builtin v8 errors in core.js so consumers don't have to
- remove complexity of error args handling (consumers must provide a
constructor with custom args, core simply provides msg arg)
- Improves op performance.
- Handle op-metadata (errors, promise IDs) explicitly in the op-layer vs
per op-encoding (aka: out-of-payload).
- Remove shared queue & custom "asyncHandlers", all async values are
returned in batches via js_recv_cb.
- The op-layer should be thought of as simple function calls with little
indirection or translation besides the conceptually straightforward
serde_v8 bijections.
- Preserve concepts of json/bin/min as semantic groups of their
inputs/outputs instead of their op-encoding strategy, preserving these
groups will also facilitate partial transitions over to v8 Fast API for the
"min" and "bin" groups
This commit moves the implementation of bin ops to the "deno_core" crate
and unifies logic between bin ops and json ops to reuse as much code as
possible (both in Rust and JavaScript).
This PR makes json_op_sync/async generic over all Deserialize/Serialize types
instead of the loosely-typed serde_json::Value. Since serde_json::Value
implements Deserialize/Serialize, very little existing code needs to be updated;
however, as json_op_sync/async are now generic, type inference is broken in some
cases (see cli/build.rs:146). I've found this reduces a good bit of boilerplate,
as seen in the updated deno_core examples.
This change may also reduce serialization and deserialization overhead as serde
has a better idea of what types it is working with. I am currently working on
benchmarks to confirm this and I will update this PR with my findings.
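For illustration, a sketch of a typed op under the generic API. The signatures and setup are approximate for the deno_core of this era (for example, `execute_script` was still called `execute` at the time), and `op_sum` with its argument/return types is made up for the example:

```rust
// Sketch only: approximate signatures; op_sum and its types are invented.
use deno_core::error::AnyError;
use deno_core::{json_op_sync, JsRuntime, OpState, RuntimeOptions, ZeroCopyBuf};
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct SumArgs {
  nums: Vec<f64>,
}

#[derive(Serialize)]
struct SumResult {
  total: f64,
}

// The op deserializes straight into SumArgs and serializes SumResult,
// instead of juggling a loosely typed serde_json::Value.
fn op_sum(
  _state: &mut OpState,
  args: SumArgs,
  _zero_copy: &mut [ZeroCopyBuf],
) -> Result<SumResult, AnyError> {
  Ok(SumResult {
    total: args.nums.iter().sum(),
  })
}

fn main() -> Result<(), AnyError> {
  let mut runtime = JsRuntime::new(RuntimeOptions::default());
  runtime.register_op("op_sum", json_op_sync(op_sum));
  runtime.execute_script(
    "<usage>",
    // Deno.core.ops() populates the op id cache before the first call.
    r#"Deno.core.ops(); Deno.core.jsonOpSync("op_sum", { nums: [1, 2, 3] });"#,
  )?;
  Ok(())
}
```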
Fixes the following runtime error for me when benchmarking:
thread 'main' panicked at 'called `Result::unwrap()` on an `Err`
value: Error: Unregistered error class: "Error"
Connection reset by peer (os error 104)
Classes of errors returned from ops should be registered via
Deno.core.registerErrorClass().
at processResponse (deno:core/core.js:219:13)
at Object.jsonOpAsync (deno:core/core.js:240:12)
at async read (http_bench_json_ops.js:29:21)
at async serve (http_bench_json_ops.js:45:19)',
core/examples/http_bench_json_ops.rs:260:28