This speeds up incremental rebuilds by 30% compared to #10786 when only
JS files are touched.
Rebuild time after touching 01_broadcast_channel.js:
main: run 1 49.18s, run 2 50.34s
#10786: run 1 43.12s, run 2 43.19s
this + #10786: run 1 30.30s, run 2 30.95s
This commit moves the implementation of "JsRuntimeInspector" to the "deno_core" crate.
To achieve that, the following changes were made:
* "Worker" and "WebWorker" no longer own an instance of "JsRuntimeInspector";
instead it is now owned by "deno_core::JsRuntime".
* Consequently, polling of the inspector is no longer done in "Worker"/"WebWorker";
instead it's done in "deno_core::JsRuntime::poll_event_loop".
* "deno_core::JsRuntime::poll_event_loop" and "deno_core::JsRuntime::run_event_loop"
now accept a "wait_for_inspector" boolean that tells whether the event loop should still be
"pending" if there are active inspector sessions (a sketch follows below) - this change fixes the
problem where the inspector disconnects from the frontend and the process exits once the code has
stopped executing.
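The snippet below is a minimal sketch of how an embedder might use the new flag, assuming a "JsRuntime" that already has active inspector sessions; exact signatures may differ from the actual API.

```rust
use deno_core::error::AnyError;
use deno_core::JsRuntime;

// Drive the event loop, but keep it "pending" while inspector sessions are
// connected so the devtools frontend stays attached after the last piece of
// user code has finished executing.
async fn run_to_completion(js_runtime: &mut JsRuntime) -> Result<(), AnyError> {
  js_runtime.run_event_loop(true).await
}
```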
This commit refactors the implementation of the inspector.
The intention is to be able to move the inspector implementation to "deno_core".
The following things were done to make that possible:
* "runtime/inspector.rs" was split into "runtime/inspector/mod.rs"
and "runtime/inspector/server.rs", separating the inspector implementation
from the Websocket server implementation.
* "DenoInspector" was renamed to "JsRuntimeInspector" and the reference to "server"
was removed from the structure, making it independent of the Websocket server
used to connect to Chrome DevTools.
* "WebsocketSession" was renamed to "InspectorSession" and rewritten in such
a way that it's no longer tied to Websockets; instead it accepts a pair
of "proxy" channel ends that allow the session to be integrated with different
"transports".
* "InspectorSession" was renamed to "LocalInspectorSession" to better indicate
that it's an "in-memory" session and doesn't require a Websocket server. It was
also rewritten in such a way that it uses the "InspectorSession" from the previous point
instead of reimplementing the "v8::inspector::ChannelImpl" trait; this is done by using
the "proxy" channels to communicate with the V8 session.
Consequently, "LocalInspectorSession" is now a frontend to "InspectorSession". This
introduces a small inconvenience: awaiting responses on a "LocalInspectorSession" requires
concurrently polling the worker's event loop. This arises from the fact that "InspectorSession"
is now owned by "JsRuntimeInspector", which in turn is owned by "Worker" or "WebWorker".
To ease this situation, a "Worker::with_event_loop" helper method was added that takes
a future and concurrently polls it along with the event loop (using the "tokio::select!" macro
inside a loop).
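A minimal sketch of the "with_event_loop" pattern follows. For simplicity it is written against "deno_core::JsRuntime" rather than the actual "Worker" type, and the signatures are approximate, not the real ones from the codebase.

```rust
use std::future::Future;

use deno_core::JsRuntime;

// Poll `fut` concurrently with the runtime's event loop so that inspector
// messages keep flowing while we wait for a `LocalInspectorSession` response.
async fn with_event_loop<T, F>(js_runtime: &mut JsRuntime, fut: F) -> T
where
  F: Future<Output = T> + Unpin,
{
  let mut fut = fut;
  loop {
    tokio::select! {
      result = &mut fut => return result,
      // When the event loop goes idle this branch resolves; loop and select
      // again so both futures keep being driven.
      _ = js_runtime.run_event_loop(false) => {}
    }
  }
}
```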
Replaces the file-backed provider with an in-memory one because proper
file locking is a hard problem that detracts from the proof of concept.
Teach the WPT runner how to extract tests from .html files because all
the relevant tests in test_util/wpt/webmessaging/broadcastchannel are
inside basics.html and interface.html.
In #9118, TLS streams were split into a "read half" and a "write half"
using tokio::io::split() to allow concurrent Conn#read() and
Conn#write() calls without one blocking the other. However, this
introduced a bug: outgoing data gets discarded when the TLS stream is
gracefully closed, because the read half is closed too early, before all
TLS control data has been received.
Fixes: #9692
Fixes: #10049
Fixes: #10296
Fixes: denoland/deno_std#750
Currently the file passed to --config is parsed using the TsConfig structure,
which does multiple things when loading the file. Instead of relying on that
structure, I've introduced a ConfigFile structure that can be updated to
sniff out more fields from the config file in the future.
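A rough sketch of what such a structure might look like is shown below; the field set and names are illustrative, not the actual definition.

```rust
use serde::Deserialize;

// Only the fields we currently care about are deserialized; more fields can
// be "sniffed out" later by extending this struct.
#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct ConfigFileJson {
  compiler_options: Option<serde_json::Value>,
}

struct ConfigFile {
  path: std::path::PathBuf,
  json: ConfigFileJson,
}

impl ConfigFile {
  fn read(path: &std::path::Path) -> Result<Self, anyhow::Error> {
    let text = std::fs::read_to_string(path)?;
    let json: ConfigFileJson = serde_json::from_str(&text)?;
    Ok(Self {
      path: path.to_path_buf(),
      json,
    })
  }
}
```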
This commit implements file watching for deno test.
When a file is changed, only the test modules which
use it as a dependency are rerun.
This is accomplished by reworking the file watching infrastructure
to pass the paths which have changed to the resolver, and then
constructing a module graph for each test module to check if it
contains any changed files.
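The sketch below illustrates the rerun decision with a hypothetical helper; the real implementation walks a full module graph per test module.

```rust
use std::collections::HashSet;
use std::path::PathBuf;

// Return the test modules whose dependency set contains any changed file.
fn modules_to_rerun(
  test_modules: &[PathBuf],
  changed: &HashSet<PathBuf>,
  dependencies_of: impl Fn(&PathBuf) -> HashSet<PathBuf>,
) -> Vec<PathBuf> {
  test_modules
    .iter()
    .filter(|module| {
      let deps = dependencies_of(*module);
      // A test module is rerun if it, or anything it imports, changed.
      changed.contains(*module) || deps.iter().any(|dep| changed.contains(dep))
    })
    .cloned()
    .collect()
}
```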
`Deno.core.*` is unstable and not fit for public consumption. Although this is a somewhat internal bench, some people may use it as reference code and start using `Deno.core.encode()` in their own code.
Makes the codebase more searchable and helps distinguish op functions from helper functions.
Besides tests/examples/benches, this pattern appears to be used everywhere else in the codebase.
Fixes a pesky bug in the fetch implementation where, if the init part is
specified in `fetch` instead of the `Request` constructor, the
fillHeaders function receives two references to the same object, causing
it to append to the very list it is iterating over.
This commit adds support for running tests in parallel.
The entire test runner functionality has been rewritten
from JavaScript to Rust, and a set of ops was added to support reporting in Rust.
A new "--jobs" flag was added to "deno test" that allows configuring
how many threads will be used. When given no value, it defaults to 2.
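A minimal sketch of bounding test concurrency with a "--jobs"-style value is shown below, using "buffer_unordered" from the futures crate; it only illustrates the concurrency shape, not the actual runner or its reporting ops.

```rust
use futures::stream::{self, StreamExt};

async fn run_tests_concurrently(tests: Vec<String>, jobs: usize) {
  stream::iter(tests)
    .map(|name| async move {
      // Placeholder for running a single test and reporting its result.
      println!("running {}", name);
    })
    // At most `jobs` test futures are in flight at any moment.
    .buffer_unordered(jobs.max(1))
    .collect::<Vec<_>>()
    .await;
}
```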
Extensions provide a declarative way to extend "JsRuntime" (ops, state, JS or middleware).
This allows for:
- `op_crates` to be plug-and-play & self-contained, reducing complexity leaked to consumers
- op middleware (like metrics_op) to be opt-in and for new middleware (unstable, tracing,...)
- `MainWorker` and `WebWorker` to be composable, allowing users to extend workers with their ops whilst benefiting from the other infrastructure (inspector, etc...)
In short, extensions improve Deno's modularity, reducing complexity and leaky abstractions for embedders and the internal codebase.
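A rough sketch of what declaring and registering an extension might look like (API details are approximate and may differ from the final version):

```rust
use deno_core::error::AnyError;
use deno_core::{op_sync, Extension, JsRuntime, OpState, RuntimeOptions};

// An op and the extension that declares it.
fn op_hello(_state: &mut OpState, name: String, _: ()) -> Result<String, AnyError> {
  Ok(format!("hello {}", name))
}

fn hello_extension() -> Extension {
  Extension::builder()
    .ops(vec![("op_hello", op_sync(op_hello))])
    .build()
}

fn main() {
  // The extension is passed to the runtime at construction time instead of
  // being wired up manually by each consumer.
  let _runtime = JsRuntime::new(RuntimeOptions {
    extensions: vec![hello_extension()],
    ..Default::default()
  });
}
```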
General cleanup of the module loading code; indentation was reduced in various methods
on "JsRuntime" to improve readability.
Added a "JsRuntime::handle_scope" helper function, which returns a "v8::HandleScope".
This was done to reduce a code pattern that happens all over "deno_core".
Additionally, if the event loop hangs during loading of dynamic modules, a list of
currently pending dynamic imports is printed.
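The helper in use, as a small sketch (assuming the v8 API re-exported by "deno_core"):

```rust
use deno_core::JsRuntime;

fn print_greeting(runtime: &mut JsRuntime) {
  // `handle_scope` hands out a scope tied to the runtime's global context,
  // replacing the boilerplate of creating one from the isolate each time.
  let scope = &mut runtime.handle_scope();
  let value = deno_core::v8::String::new(scope, "hello").unwrap();
  println!("{}", value.to_rust_string_lossy(scope));
}
```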
`InvalidDNSNameError` is thrown when a string is not a valid hostname,
e.g. it contains invalid characters, or starts with a numeric digit. It
does not involve a (failed) DNS lookup.
denort is an optimization to "deno compile" to produce slightly smaller
output. It's a decent idea, but causes a lot of negative side-effects:
- Deno's link time is a source of constant agony both locally and in CI;
denort doubles link time.
- The release process is a long and arduous undertaking with many manual
steps. denort necessitates an additional manual zip + upload from M1
Apple computers.
- The "deno compile" interface is complicated by the "--lite" option.
This is confusing for users ("why wouldn't you want lite?").
The benefits of this feature do not outweigh the negatives. We must find
a different approach to optimizing "deno compile" output.
This commit adds a "permissions" option to the test definitions
which allows tests to run with different permission sets than
the process's permissions.
The change is only in effect within the test function; once the
test has completed, the original process permission set is restored.
Test permissions cannot exceed the process's permissions.
You can only narrow or drop permissions; failure to acquire a
permission results in an error being thrown and the test case failing.
Even though bootstrapping the JS runtime is low level, it's an abstraction leak of
core to require users to call `Deno.core.ops()` in JS space.
So instead we're introducing a `JsRuntime::sync_ops_cache()` method.
Once we have runtime extensions, a new runtime will ensure the ops
cache is set up (for the provided extensions), and then loading/unloading
plugins should be the only operations that require op cache syncs.
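A sketch of the intended flow (method names follow the description above; exact signatures are approximate):

```rust
use deno_core::error::AnyError;
use deno_core::{JsRuntime, OpState};

fn register_plugin_ops(runtime: &mut JsRuntime) {
  // Register an op on the Rust side...
  runtime.register_op(
    "op_plugin_hello",
    deno_core::op_sync(|_state: &mut OpState, _: (), _: ()| Ok::<(), AnyError>(())),
  );
  // ...then refresh the JS-visible ops cache, instead of scripts having to
  // call `Deno.core.ops()` themselves.
  runtime.sync_ops_cache();
}
```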
- register builtin v8 errors in core.js so consumers don't have to
- remove complexity of error args handling (consumers must provide a
constructor with custom args, core simply provides msg arg)
This commit aligns the `fetch` API and the `Request` / `Response`
classes belonging to it to the spec. This commit enables all the
relevant `fetch` WPT tests. Spec compliance is now at around 90%.
Performance is essentially identical now (within 1% of 1.9.0).
This commit fixes the URL returned from `request.url` in the HTTP server
to be fully qualified. This previously existed, but was removed and
accidentally not re-added during optimizations of the HTTP ops.
Returning a URL that is not fully qualified from `Request#url` is not spec
compliant.
The panic was caused by the lack of an error class mapping for
futures::channel::TrySendError, but it shouldn't have been throwing an error in
the first place - when a worker has terminated, postMessage should just return.
The issue was that the termination message hadn't yet been received, so it was
carrying on with trying to send the message. This adds another check on the Rust
side for whether the channel is closed; if it is, the worker is treated as
terminated.
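An illustrative sketch of the added check, with hypothetical names rather than the actual ones from the source:

```rust
use futures::channel::mpsc;

fn post_message_to_worker(sender: &mpsc::UnboundedSender<Vec<u8>>, msg: Vec<u8>) {
  if sender.is_closed() {
    // The worker has terminated; postMessage simply returns.
    return;
  }
  // A send can still race with termination, so a failed send is ignored too
  // instead of being surfaced as an error.
  let _ = sender.unbounded_send(msg);
}
```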
This commit aligns `Headers` to spec. It also removes the now unused
03_dom_iterable.js file. We now pass all relevant `Headers` WPT. We do
not implement any sort of header filtering, as we are a server side
runtime.
This is likely not the most efficient implementation of `Headers` yet.
It is however spec compliant. Once all the APIs in the `HTTP` hot loop
are correct we can start optimizing them. It is likely that this commit
reduces bench throughput temporarily.
Raise the soft limit to the hard limit when possible. This is similar
to what Node.js does to avoid running into "out of file descriptors"
errors too quickly.
On most Linux systems, raises the limit from 1,024 to 1,048,576.
On most macOS systems, raises the limit from 256 to 10,240.
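A sketch of the approach, assuming a Unix target and the `libc` crate:

```rust
#[cfg(unix)]
fn raise_fd_limit() {
  // SAFETY: the struct is fully initialized by getrlimit before it is read.
  unsafe {
    let mut limits = libc::rlimit {
      rlim_cur: 0,
      rlim_max: 0,
    };
    if libc::getrlimit(libc::RLIMIT_NOFILE, &mut limits) != 0 {
      return;
    }
    if limits.rlim_cur < limits.rlim_max {
      // Raise the soft limit to the hard limit, best effort: failures are
      // ignored (e.g. macOS caps the effective limit below RLIM_INFINITY).
      limits.rlim_cur = limits.rlim_max;
      let _ = libc::setrlimit(libc::RLIMIT_NOFILE, &limits);
    }
  }
}
```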
Fixes #10148.
This stabilizes Deno.ftruncate and Deno.ftruncateSync.
This is a well-known system call and the interface is
not going to change. It implicitly requires write permission,
as the file has to be opened with write access to be truncated.
This commit adds allowlist support to the `--allow-run` flag.
Additionally, `Deno.permissions.query()` allows querying for specific
programs within the allowlist.
This commit adds blob URL support. Blob URLs are stored in process-global
storage that can be accessed from all workers and the module
loader. Blob URLs can be created using `URL.createObjectURL` and revoked
using `URL.revokeObjectURL`.
This commit does not add support for `fetch`ing blob URLs. This will be
added in a follow up commit.