This allows resources to be "streams" by implementing read/write/shutdown. These streams are implicit, since their nature (read/write/duplex) isn't known until the corresponding methods are called, but we could easily add another method to explicitly tag resources as streams.
`op_read/op_write/op_shutdown` are now builtin ops provided by `deno_core`
Note: this current implementation is simple and straightforward, but it results in an additional allocation per read/write call.
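A minimal, self-contained sketch of the idea (this is not deno_core's actual `Resource` trait; the names and synchronous signatures here are simplified for illustration):

```rust
// A resource opts into "stream" behavior by overriding read/write/shutdown.
// Resources that don't override them report "not supported", which the
// builtin op_read/op_write/op_shutdown ops would surface as errors.
use std::cell::RefCell;
use std::io::{Error, ErrorKind, Result};

trait Resource {
  fn name(&self) -> &str;
  // Defaults: a plain resource is not a stream.
  fn read(&self, _buf: &mut [u8]) -> Result<usize> {
    Err(Error::new(ErrorKind::Unsupported, "not supported"))
  }
  fn write(&self, _buf: &[u8]) -> Result<usize> {
    Err(Error::new(ErrorKind::Unsupported, "not supported"))
  }
  fn shutdown(&self) -> Result<()> {
    Err(Error::new(ErrorKind::Unsupported, "not supported"))
  }
}

// A read-only "stream": only `read` is overridden, so its nature is only
// discovered when write/shutdown are actually called (hence "implicit").
struct EchoResource {
  data: RefCell<Vec<u8>>,
}

impl Resource for EchoResource {
  fn name(&self) -> &str {
    "echo"
  }
  fn read(&self, buf: &mut [u8]) -> Result<usize> {
    let mut data = self.data.borrow_mut();
    let n = buf.len().min(data.len());
    buf[..n].copy_from_slice(&data[..n]);
    data.drain(..n);
    Ok(n)
  }
}

fn main() -> Result<()> {
  let res = EchoResource { data: RefCell::new(b"hello".to_vec()) };
  let mut buf = [0u8; 3];
  let n = res.read(&mut buf)?;
  assert_eq!(&buf[..n], b"hel");
  // Writing is not supported on this resource, so a write op would error.
  assert!(res.write(b"x").is_err());
  Ok(())
}
```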
Closes #12556
This adds `.code` attributes to errors returned by the op layer, making it easier to classify OS errors and helping node-compat.
Similar to Node, these `.code` attributes are the stringified names of Unix errnos. The mapping tables are generated by [tools/codegen_error_codes.js](https://gist.github.com/AaronO/dfa1106cc6c7e2a6ebe4dba9d5248858) and derived from libuv and Rust's std internals.
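A hypothetical sketch of what such a mapping looks like, using only a handful of hardcoded Linux errno values (the real tables are generated per platform):

```rust
// Turn an OS error number into a Node-style code string such as "ENOENT".
use std::io;

fn error_code(err: &io::Error) -> Option<&'static str> {
  let errno = err.raw_os_error()?;
  Some(match errno {
    1 => "EPERM",
    2 => "ENOENT",
    13 => "EACCES",
    98 => "EADDRINUSE", // Linux value; differs per platform
    _ => return None,
  })
}

fn main() {
  let err = std::fs::read("no-such-file").unwrap_err();
  // On Linux this prints Some("ENOENT").
  println!("{:?}", error_code(&err));
}
```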
Currently all async ops are polled lazily, which means that op
initialization code is postponed until control is yielded to the event
loop. This has some weird consequences, e.g.
```js
let listener = Deno.listen(...);
let connPromise = listener.accept();
listener.close();
// `BadResource` is thrown. A reasonable error would be `Interrupted`.
let conn = await connPromise;
```
JavaScript promises are expected to be eagerly evaluated. This patch
makes ops actually do that.
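A conceptual sketch of eager polling, not deno_core's actual op machinery: the new op future is polled once at creation time (here using the `futures` crate's noop waker), so initialization code such as the `accept()` above runs before control returns to the event loop:

```rust
use futures::task::noop_waker;
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

enum EagerOp<T> {
  // The op completed during the initial poll.
  Ready(T),
  // Still pending; the boxed future is queued on the event loop as before.
  Pending(Pin<Box<dyn Future<Output = T>>>),
}

fn create_op<T>(fut: impl Future<Output = T> + 'static) -> EagerOp<T> {
  let mut fut: Pin<Box<dyn Future<Output = T>>> = Box::pin(fut);
  let waker = noop_waker();
  let mut cx = Context::from_waker(&waker);
  // Poll once immediately: this is what makes `listener.accept()` observe the
  // listener while it is still open, instead of failing later with BadResource.
  match fut.as_mut().poll(&mut cx) {
    Poll::Ready(value) => EagerOp::Ready(value),
    Poll::Pending => EagerOp::Pending(fut),
  }
}

fn main() {
  match create_op(async { 1 + 1 }) {
    EagerOp::Ready(v) => println!("resolved eagerly: {}", v),
    EagerOp::Pending(_) => println!("queued for the event loop"),
  }
}
```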
WebAssembly modules compiled through `WebAssembly.compile()` and similar
non-streaming APIs don't have a URL associated with them, because they
have been compiled from a buffer source. In stack traces, V8 will use
a URL such as `wasm://wasm/d1c677ea`, with a hash of the module.
However, wasm modules compiled through streaming APIs, like
`WebAssembly.compileStreaming()`, do have a known URL, which can be
obtained from the `Response` object passed into the streaming APIs. And
as per the developer-facing display conventions in the WebAssembly
Web API spec, this URL should be used in stack traces. This change
implements that.
Decouple JsRuntime::sync_ops_cache() from the availability of the Deno.* namespace in the global scope.
This avoids crashes when calling sync_ops_cache() on a bootstrapped WebWorker that has dropped its Deno.* namespace.
It is also simply cleaner and more robust.
This commit fixes a problem where loading and executing multiple
modules leads to all of them having "import.meta.main" set to true.
The following Rust APIs were deprecated:
- deno_core::JsRuntime::load_module
- deno_runtime::Worker::execute_module
- deno_runtime::WebWorker::execute_module
The following Rust APIs were added:
- deno_core::JsRuntime::load_main_module
- deno_core::JsRuntime::load_side_module
- deno_runtime::Worker::execute_main_module
- deno_runtime::Worker::execute_side_module
- deno_runtime::WebWorker::execute_main_module
Trying to load multiple "main" modules into the runtime now results in an
error. If a user needs to load additional "non-main" modules, they should use
the "side" module APIs.
Async WebAssembly compilation was implemented by adding two
bindings: `set_wasm_streaming_callback`, which registered a callback to
be called whenever a streaming wasm compilation was started, and
`wasm_streaming_feed`, which let the JS callback modify the state of the
v8 wasm compiler.
`set_wasm_streaming_callback` cannot currently be implemented as
anything other than a binding, but `wasm_streaming_feed` does not really
need to use anything specific to bindings, and could indeed be
implemented as one or more ops. This PR does that, resulting in a
simplification of the relevant code.
There are three operations on the state of the v8 wasm compiler that
`wasm_streaming_feed` allowed: feeding new bytes into the compiler,
letting it know that there are no more bytes coming from the network,
and aborting the compilation. This PR provides `op_wasm_streaming_feed`
to feed new bytes into the compiler, and `op_wasm_streaming_abort` to
abort the compilation. It doesn't provide an op to let v8 know that the
response is finished, but closing the resource with `Deno.core.close()`
will achieve that.
Because it was possible to disable these intrinsics with a runtime flag, they were
not available through primordials. The flag has since been removed
upstream.
Refs: d59db06bf5
refactor(ops): return BadResource errors in ResourceTable calls
Instead of relying on callers to map Options to Results via `.ok_or_else(bad_resource_id)` at over 176 different call sites, the ResourceTable methods now return a Result carrying a BadResource error directly.
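A self-contained illustration of the pattern (not deno_core's actual types): the table itself produces the `BadResource` error, so call sites just use `?`:

```rust
use std::any::Any;
use std::collections::HashMap;
use std::rc::Rc;

type ResourceId = u32;

#[derive(Debug)]
struct BadResource(ResourceId);

#[derive(Default)]
struct ResourceTable {
  index: HashMap<ResourceId, Rc<dyn Any>>,
}

impl ResourceTable {
  // Lookups return a Result with the offending resource id baked in.
  fn get<T: Any>(&self, rid: ResourceId) -> Result<Rc<T>, BadResource> {
    self
      .index
      .get(&rid)
      .cloned()
      .and_then(|rc| rc.downcast::<T>().ok())
      .ok_or(BadResource(rid))
  }
}

#[derive(Debug)]
struct TcpStreamResource;

fn main() {
  let table = ResourceTable::default();
  // Previously this lookup returned an Option that every caller had to map;
  // now the table itself reports which resource id was bad.
  let err = table.get::<TcpStreamResource>(7).unwrap_err();
  println!("{:?}", err); // BadResource(7)
}
```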
A oneshot channel is more appropriate because mod_evaluate() only sends a single
value.
It also makes the API easier to use correctly: as an embedder, I wasn't
sure whether I was expected to drain the channel or not.
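A minimal sketch of the resulting embedder-facing shape, using the `futures` oneshot channel directly (names are illustrative):

```rust
use futures::channel::oneshot;

async fn demo() {
  let (tx, rx) = oneshot::channel::<Result<(), String>>();
  // Somewhere inside the runtime, module evaluation completes exactly once:
  tx.send(Ok(())).unwrap();
  // The embedder simply awaits the single result; no draining required.
  let result = rx.await.expect("sender dropped");
  assert!(result.is_ok());
}

fn main() {
  futures::executor::block_on(demo());
}
```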
This commit changes return type of JsRuntime::execute_script to include
v8::Value returned from evaluation.
When embedding deno_core it is sometimes useful to be able to inspect the
script evaluation value without jumping through the hoops of adding ops to
store the value in the OpState.
v8::Global<v8::Value> is used so consumers don't have to pass
a scope themselves.
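A usage sketch, assuming the two-argument `execute_script(name, source)` signature from this era:

```rust
use deno_core::JsRuntime;

fn main() -> Result<(), deno_core::error::AnyError> {
  let mut js_runtime = JsRuntime::new(Default::default());
  // The returned value is a v8::Global, so no scope needs to be passed in.
  let global = js_runtime.execute_script("eval_value.js", "1 + 2")?;
  // Open a scope only when we actually want to look at the value.
  let scope = &mut js_runtime.handle_scope();
  let local = deno_core::v8::Local::new(scope, global);
  assert_eq!(local.to_rust_string_lossy(scope), "3");
  Ok(())
}
```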
The WebAssembly streaming APIs were previously enabled, but took
buffer sources as their first argument (see #6154 and #7259). This
change re-enables them, requiring a Promise<Response> instead, and also
enables asynchronous compilation of WebAssembly modules.
This commit introduces primordials to deno_core. Primordials are a
frozen set of all intrinsic objects in the runtime. They are not
vulnerable to prototype pollution.
This commit changes implementation of module loading in "deno_core"
to track all currently fetched modules across all existing module loads.
In effect, this fixes a bug that caused concurrent dynamic imports referencing the
same module to fail.
This commit moves the implementation of net-related APIs available on the "Deno"
namespace to the "deno_net" extension.
The following APIs were moved:
- Deno.listen()
- Deno.connect()
- Deno.listenTls()
- Deno.serveHttp()
- Deno.shutdown()
- Deno.resolveDns()
- Deno.listenDatagram()
- Deno.startTls()
- Deno.Conn
- Deno.Listener
- Deno.DatagramConn
This commit adds support for piping console messages to the inspector.
This is done by "wrapping" Deno's console implementation with the default
console provided by V8, by means of the "Deno.core.callConsole" binding.
Effectively, each call to a "console.*" method invokes both Deno's
console and V8's console.
Starting with V8 9.1, top-level await is always enabled.
See https://v8.dev/blog/v8-release-91 for the release notes.
- Remove the now redundant v8 flag.
- Clarify doc comment and add link to the feature explainer.
This commit renames "JsRuntime::execute" to "JsRuntime::execute_script". The
same renames were applied to the corresponding methods on "deno_runtime::Worker" and
"deno_runtime::WebWorker".
A new macro called "located_script_name" was added to "deno_core"; it
expands to the name of the Rust file along with the line and column of the call site.
This macro is useful in combination with "JsRuntime::execute_script"
and makes it possible to accurately pinpoint where "one-off" JavaScript scripts
are executed by internal runtime functions.
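A usage sketch; the exact format of the expanded name is an implementation detail, but it embeds the Rust call site:

```rust
fn main() -> Result<(), deno_core::error::AnyError> {
  let mut js_runtime = deno_core::JsRuntime::new(Default::default());
  // The script name in stack traces now points at this Rust file/line/column
  // instead of an anonymous script name.
  js_runtime.execute_script(
    &deno_core::located_script_name!(),
    "globalThis.__example = 1 + 1",
  )?;
  Ok(())
}
```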
Co-authored-by: Nayeem Rahman <nayeemrmn99@gmail.com>
This commit changes module loading implementation in "deno_core"
to call "ModuleLoader::prepare" hook only once per entry point.
This is done to avoid type checking the same code multiple times
in case of duplicated dynamic imports.
Relevant code in "cli/module_graph.rs" was updated as well.
This commit removes all JS based text encoding / text decoding. Instead
encoding now happens in Rust via encoding_rs (already in tree). This
implementation retains stream support while adding the last missing
encodings. We are incredibly close to 100% WPT on text encoding now.
This should reduce our baseline heap by quite a bit.
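A minimal encoding_rs example showing the kind of label lookup and replacement decoding now done on the Rust side:

```rust
use encoding_rs::Encoding;

fn main() {
  // Resolve a WHATWG encoding label, then decode with replacement.
  let encoding = Encoding::for_label(b"windows-1252").expect("unknown label");
  let bytes = b"caf\xe9"; // "café" in windows-1252
  let (text, _actual_encoding, had_errors) = encoding.decode(bytes);
  assert!(!had_errors);
  assert_eq!(text, "café");
}
```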
This speeds up incremental rebuilds by 13-15% when only JS files are touched.
Rebuild time after `touch 01_broadcast_channel.js`:
main: run 1 49.18s, run 2 50.34s
this: run 1 43.12s, run 2 43.19s
This commit moves implementation of "JsRuntimeInspector" to "deno_core" crate.
To achieve that, the following changes were made:
* "Worker" and "WebWorker" no longer own an instance of "JsRuntimeInspector";
it is now owned by "deno_core::JsRuntime".
* Consequently, polling of the inspector is no longer done in "Worker"/"WebWorker";
instead it's done in "deno_core::JsRuntime::poll_event_loop".
* "deno_core::JsRuntime::poll_event_loop" and "deno_core::JsRuntime::run_event_loop"
now accept a "wait_for_inspector" boolean that tells whether the event loop should remain
"pending" if there are active inspector sessions. This fixes the problem where the
inspector would disconnect from the frontend and the process would exit once the code
had stopped executing.
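Embedder-side sketch of the new parameter (signature per this change):

```rust
// Keep the event loop pending while inspector sessions are active, so a
// debugger frontend is not disconnected when the script finishes.
async fn run(
  js_runtime: &mut deno_core::JsRuntime,
) -> Result<(), deno_core::error::AnyError> {
  // `true` = wait_for_inspector.
  js_runtime.run_event_loop(true).await
}
```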
This commit moves bulk of the logic related to module loading
from "JsRuntime" to "ModuleMap".
The next step is to rewrite the actual loading logic (represented by
"RecursiveModuleLoad") to be part of "ModuleMap" as well.
That way multiple module loads can be tracked from within the map,
which should help solve the problem of concurrent loads: since all info
about currently loading/loaded modules will be contained in the ModuleMap,
it will be possible to know whether all required modules have actually
been loaded.
This ensures that provided extensions are all correctly set up and ready to use once the JsRuntime constructor returns.
Note: this will also initialize ops for to-be-snapshotted runtimes
Extensions provide a declarative way to extend "JsRuntime" (ops, state, JS, or middleware).
This allows for:
- `op_crates` to be plug-and-play & self-contained, reducing complexity leaked to consumers
- op middleware (like metrics_op) to be opt-in and for new middleware (unstable, tracing,...)
- `MainWorker` and `WebWorker` to be composable, allowing users to extend workers with their ops whilst benefiting from the other infrastructure (inspector, etc...)
In short, extensions improve Deno's modularity, reducing complexity and leaky abstractions for embedders and the internal codebase.
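A rough sketch of declaring and registering an extension with a single sync op; the exact builder API differs between deno_core versions:

```rust
use deno_core::error::AnyError;
use deno_core::{op_sync, Extension, JsRuntime, OpState, RuntimeOptions};

// A plain sync op: takes OpState plus two serde-deserialized arguments.
fn op_sum(_state: &mut OpState, nums: Vec<f64>, _: ()) -> Result<f64, AnyError> {
  Ok(nums.iter().sum())
}

fn main() {
  let ext = Extension::builder()
    .ops(vec![("op_sum", op_sync(op_sum))])
    .build();

  let mut runtime = JsRuntime::new(RuntimeOptions {
    extensions: vec![ext],
    ..Default::default()
  });

  // The op is registered and usable as soon as the constructor returns.
  runtime
    .execute_script("sum.js", "Deno.core.opSync('op_sum', [1, 2, 3])")
    .unwrap();
}
```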
General cleanup of module loading code; indentation was reduced in various methods
on "JsRuntime" to improve readability.
Added a "JsRuntime::handle_scope" helper function, which returns a "v8::HandleScope".
This was done to reduce a code pattern that appears all over "deno_core".
Additionally, if the event loop hangs while loading dynamic modules, a list of
currently pending dynamic imports is printed.
`InvalidDNSNameError` is thrown when a string is not a valid hostname,
e.g. it contains invalid characters or starts with a digit. It
does not involve a (failed) DNS lookup.
Even though bootstrapping the JS runtime is low level, requiring users to call
`Deno.core.ops()` in JS space is an abstraction leak of core.
So instead we're introducing a `JsRuntime::sync_ops_cache()` method.
Once we have runtime extensions, a new runtime will ensure the ops
cache is set up (for the provided extensions), and loading/unloading
plugins should then be the only operations that require op-cache syncs.
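Sketch of the Rust-side call (assuming the no-argument signature):

```rust
// After loading a plugin that registers new ops, refresh the JS-side
// op-name -> op-id cache from Rust instead of calling Deno.core.ops() in JS.
fn refresh(js_runtime: &mut deno_core::JsRuntime) {
  js_runtime.sync_ops_cache();
}
```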
- register built-in V8 errors in core.js so consumers don't have to
- remove complexity from error args handling (consumers must provide a
constructor for custom args; core simply provides the msg arg)
`init()` was previously needed to init the shared queue, but now that it's
gone, `init()` only registers the async message handler, which is snapshot
safe and constant since the op-layer refactor.
Also clean up `bindings::deserialize()/decode()` so they
use the `ZeroCopyBuf` abstraction rather than reimplementing it.
This cleanup will facilitate moving `ZeroCopyBuf` to `serde_v8`,
since it's now self-contained and there are no other
`get_backing_store_slice()` callers.
This commit replaces GothamState's internal HashMap
with a BTreeMap to improve performance.
OpState/GothamState is keyed by TypeId, which is essentially
an opaque u64. For small sets of integer keys, BTreeMap
outperforms HashMap, mainly because it removes the hashing
overhead and Eq/Ord comparisons on integer-like types are very cheap;
it should also have a smaller memory footprint.
We only use ~30 unique types, and thus ~30 unique keys, to
access OpState, so the keyset is small and immutable
throughout the life of a JsRuntime; there's no meaningful
churn in keys added/removed.
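A self-contained illustration of the data structure (not GothamState's actual code): a type map keyed by `TypeId`, backed by a `BTreeMap`:

```rust
use std::any::{Any, TypeId};
use std::collections::BTreeMap;

#[derive(Default)]
struct TypeMap {
  // With ~30 small, fixed keys the tree lookup avoids hashing entirely,
  // and ordering comparisons on TypeId are cheap.
  map: BTreeMap<TypeId, Box<dyn Any>>,
}

impl TypeMap {
  fn put<T: 'static>(&mut self, value: T) {
    self.map.insert(TypeId::of::<T>(), Box::new(value));
  }

  fn borrow<T: 'static>(&self) -> Option<&T> {
    self
      .map
      .get(&TypeId::of::<T>())
      .and_then(|boxed| boxed.downcast_ref::<T>())
  }
}

struct DbConnections(u32);

fn main() {
  let mut state = TypeMap::default();
  state.put(DbConnections(4));
  assert_eq!(state.borrow::<DbConnections>().unwrap().0, 4);
}
```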
This is another optimization to help improve the baseline overhead
of async ops. It shaves off ~55ns/op or ~7% of the current total
async op overhead.
It achieves these gains by taking advantage of the sequential
nature of promise IDs, optimistically storing promises sequentially
in a pre-allocated circular buffer and falling back to the promise Map
for slow-to-resolve promises.
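The real code lives in core.js; here is a language-agnostic sketch of the idea in Rust: sequential IDs index a fixed-size ring, and anything evicted by a wrap-around falls back to a map:

```rust
use std::collections::HashMap;

const RING_SIZE: usize = 4 * 1024;

struct PromiseTable<T> {
  ring: Vec<Option<(u64, T)>>, // slot = id % RING_SIZE
  fallback: HashMap<u64, T>,   // slow promises evicted from the ring
  next_id: u64,
}

impl<T> PromiseTable<T> {
  fn new() -> Self {
    Self {
      ring: (0..RING_SIZE).map(|_| None).collect(),
      fallback: HashMap::new(),
      next_id: 0,
    }
  }

  // Register a new promise, returning its sequential id.
  fn insert(&mut self, value: T) -> u64 {
    let id = self.next_id;
    self.next_id += 1;
    let slot = (id as usize) % RING_SIZE;
    // Evict a still-pending older promise into the fallback map.
    if let Some((old_id, old_value)) = self.ring[slot].take() {
      self.fallback.insert(old_id, old_value);
    }
    self.ring[slot] = Some((id, value));
    id
  }

  // Resolve a promise: fast path hits the ring, slow path hits the map.
  fn remove(&mut self, id: u64) -> Option<T> {
    let slot = (id as usize) % RING_SIZE;
    let hit = matches!(self.ring[slot], Some((stored_id, _)) if stored_id == id);
    if hit {
      self.ring[slot].take().map(|(_, value)| value)
    } else {
      self.fallback.remove(&id)
    }
  }
}

fn main() {
  let mut table = PromiseTable::new();
  let id = table.insert("pending read");
  assert_eq!(table.remove(id), Some("pending read"));
}
```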