This is a follow-on to the earlier work in reducing string copies,
mainly focused on ensuring that ASCII strings are easy to provide to the
JS runtime.
While we are replacing a 16-byte reference in a number of places with a
24-byte structure (measured via `std::mem::size_of`), the reduction in
copies wins out over the additional size of the arguments passed into
functions.
Benchmarking shows approximately the same, if not slightly lower, wall-clock
time and instructions retired, but I believe this continues to open up
further refactoring opportunities.
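For reference, the quoted sizes are what `std::mem::size_of` reports on a 64-bit target. The 24-byte type itself isn't shown in this excerpt, but `String` is a familiar example of a 24-byte string container versus the 16-byte `&str` reference:
```
fn main() {
    // On 64-bit targets a &str reference is a (pointer, length) pair,
    // while an owned String also carries a capacity field.
    assert_eq!(std::mem::size_of::<&str>(), 16);
    assert_eq!(std::mem::size_of::<String>(), 24);
}
```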
Welcome to better optimised op calls! Currently opSync is called with parameters of every type and count, which most definitely makes the call megamorphic. Additionally, it seems that spread params prevent V8 from optimising the calls quite as well (apparently Fast Calls cannot be used with spread params).
Monomorphising op calls should lead to some improved performance. Now that unwrapping of sync op results is done on the Rust side, this is pretty simple:
```
opSync("op_foo", param1, param2);
// -> turns to
ops.op_foo(param1, param2);
```
This means sync op calls now invoke the native binding function directly. When V8 Fast API Calls are enabled, this will allow them to be called on the optimised path.
Monomorphising async ops likely requires using callbacks and is left as an exercise for the reader.
This commit renames "JsRuntime::execute" to "JsRuntime::execute_script". The
same rename was applied to the corresponding methods on "deno_runtime::Worker"
and "deno_runtime::WebWorker".
A new macro called "located_script_name" was added to "deno_core"; it returns
the name of the Rust file along with the line and column number of the call site.
This macro is useful in combination with "JsRuntime::execute_script", as it
provides an accurate location for the "one-off" JavaScript scripts executed
by internal runtime functions.
Co-authored-by: Nayeem Rahman <nayeemrmn99@gmail.com>
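A rough sketch of how the macro and "JsRuntime::execute_script" fit together (the script body is illustrative and the exact signatures may differ slightly):
```
use deno_core::{located_script_name, JsRuntime, RuntimeOptions};

fn main() -> Result<(), deno_core::error::AnyError> {
    let mut runtime = JsRuntime::new(RuntimeOptions::default());
    // located_script_name!() expands to a name derived from file!(), line!()
    // and column!() at the call site, so this internal one-off script gets
    // an accurate origin in errors and stack traces.
    runtime.execute_script(
        &located_script_name!(),
        "Deno.core.print('hello from an internal script\\n');",
    )?;
    Ok(())
}
```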
Even though bootstrapping the JS runtime is low level, it's an abstraction leak of
core to require users to call `Deno.core.ops()` in JS space.
So instead we're introducing a `JsRuntime::sync_ops_cache()` method.
Once we have runtime extensions, a new runtime will ensure the ops cache is
set up for the provided extensions, and loading/unloading plugins should then
be the only operations that require op-cache syncs.
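A minimal sketch of the intended flow, assuming the op registration API of the time (the op itself is illustrative):
```
use deno_core::error::AnyError;
use deno_core::{op_sync, JsRuntime, OpState, RuntimeOptions};

fn op_sum(_state: &mut OpState, a: u64, b: u64) -> Result<u64, AnyError> {
    Ok(a + b)
}

fn main() {
    let mut runtime = JsRuntime::new(RuntimeOptions::default());
    // Register ops on the Rust side...
    runtime.register_op("op_sum", op_sync(op_sum));
    // ...then refresh the JS-visible ops cache from Rust, instead of
    // requiring embedders to call Deno.core.ops() in JS space.
    runtime.sync_ops_cache();
}
```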
- register builtin v8 errors in core.js so consumers don't have to
- remove complexity of error args handling (consumers provide a constructor
  that takes custom args; core simply provides the msg arg)
- Improve op performance.
- Handle op metadata (errors, promise IDs) explicitly in the op-layer rather
  than per op-encoding (i.e. out-of-payload).
- Remove the shared queue and custom "asyncHandlers"; all async values are
  returned in batches via js_recv_cb.
- The op-layer should be thought of as simple function calls with little
  indirection or translation beyond the conceptually straightforward
  serde_v8 bijections (see the sketch after this list).
- Preserve json/bin/min as semantic groups of their inputs/outputs rather
  than as op-encoding strategies; preserving these groups will also
  facilitate partial transitions over to the V8 Fast API for the "min" and
  "bin" groups.
This PR makes json_op_sync/async generic over all Deserialize/Serialize types
instead of the loosely typed serde_json::Value. Since serde_json::Value
implements Deserialize/Serialize, very little existing code needs to be updated;
however, as json_op_sync/async are now generic, type inference breaks in some
cases (see cli/build.rs:146). I've found this reduces a good bit of boilerplate,
as seen in the updated deno_core examples.
This change may also reduce serialization and deserialization overhead as serde
has a better idea of what types it is working with. I am currently working on
benchmarks to confirm this and I will update this PR with my findings.
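As a sketch of what the generic form looks like (names like op_sum and SumArgs are illustrative, and the registration and zero-copy details follow the deno_core examples of the time, so they may differ):
```
use deno_core::error::AnyError;
use deno_core::{json_op_sync, JsRuntime, OpState, RuntimeOptions, ZeroCopyBuf};
use serde::Deserialize;

#[derive(Deserialize)]
struct SumArgs {
    a: u64,
    b: u64,
}

// The op now receives a typed struct instead of a serde_json::Value and
// can return any Serialize type (here, a plain u64).
fn op_sum(
    _state: &mut OpState,
    args: SumArgs,
    _bufs: &mut [ZeroCopyBuf],
) -> Result<u64, AnyError> {
    Ok(args.a + args.b)
}

fn main() {
    let mut runtime = JsRuntime::new(RuntimeOptions::default());
    runtime.register_op("op_sum", json_op_sync(op_sum));
}
```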