Fixes "op_set_exit_code" by sharing a single "Arc" between
all workers (via "op state") instead of having a "global" value stored in
"deno_runtime" crate. As a consequence setting an exit code is always
scoped to a tree of workers, instead of being overridable if there are
multiple worker tree (like in "deno test --jobs" subcommand).
Refactored "cli/main.rs" functions to return "Result<i32, AnyError>" instead
of "Result<(), AnyError>" so they can return exit code.
Set the exit code to use if none is provided to Deno.exit(), or when
Deno exits naturally.
Needed for process.exitCode Node compat. Paves the way for #12888.
This commit adds the ability to "ref" or "unref" pending ops.
Up to this point Deno had a notion of "async ops" and "unref async ops";
the former keep the event loop alive, while the latter do not block the
event loop from finishing. It was not possible to change between op types
after dispatching; one had to decide which type to use before dispatch.
Instead of storing ops in two separate "FuturesUnordered" collections,
ops are now stored in a single collection, with a supplemental "HashSet"
storing the ids of promises that were "unrefed".
Two APIs were added to "Deno.core":
- "Deno.core.refOp(promiseId)", which marks a promise id as "refed" so it
keeps the event loop alive (the default behavior)
- "Deno.core.unrefOp(promiseId)", which marks a promise id as "unrefed" so
it won't block the event loop from exiting
This allows resources to be "streams" by implementing read/write/shutdown. These streams are implicit since their nature (read/write/duplex) isn't known until called, but we could easily add another method to explicitly tag resources as streams.
`op_read/op_write/op_shutdown` are now builtin ops provided by `deno_core`
Note: the current implementation is simple and straightforward, but it results in an additional allocation per read/write call
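For context, a minimal sketch of the read/write path from JavaScript; the file
name is illustrative, and it assumes the public `Deno.read()`/`Deno.write()`
calls end up in the builtin `op_read`/`op_write` ops against the resource
backing `rid`:

```ts
const file = await Deno.open("data.txt", { read: true, write: true, create: true });
await Deno.write(file.rid, new TextEncoder().encode("hello")); // -> builtin op_write
await Deno.seek(file.rid, 0, Deno.SeekMode.Start);
const buf = new Uint8Array(16);
const n = await Deno.read(file.rid, buf);                      // -> builtin op_read
console.log(new TextDecoder().decode(buf.subarray(0, n ?? 0)));
file.close();
```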
Closes #12556
The initial implementation of `importScripts()` in #11338 used
`reqwest`'s default client to fetch HTTP scripts, which meant it would
not use certificates or other fetching configuration passed by command
line flags. This change fixes it.
A bug was fixed that could cause a hang when a method was
called on a TlsConn object that had thrown an exception earlier.
Additionally, a bug was fixed that caused TlsConn.write() to not
completely flush large buffers (>64kB) to the socket.
The public `TlsConn.handshake()` API is scheduled for inclusion in the
next minor release. See https://github.com/denoland/deno/pull/12467.
This commit annotates errors returned from Deno's FS APIs to include
the paths that were passed to the API calls.
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This panic could happen in the following cases:
- A non-fatal error being thrown from a worker, which doesn't terminate
  the worker's execution but propagates to the main thread without
  being handled, making the main thread terminate.
- A nested worker being alive while its parent worker gets terminated.
- A race condition if the main event loop terminates the worker as part
of its last task, but the worker doesn't fully terminate before the
main event loop stops running.
This panic happens because a worker's event loop should have pending ops
as long as the worker isn't closed or terminated – but if an event loop
finishes running while it has living workers, its associated
`WorkerThread` structs will be dropped, closing the channels that keep
those ops pending.
This change adds a `Drop` implementation to `WorkerThread`, which
terminates the worker without waiting for a response. This fixes the
panic, and makes it so nested workers are automatically terminated once
any of their ancestors is closed or terminated.
This change also refactors a worker's termination code into a
`WorkerThread::terminate()` method.
Closes #11342.
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
When `worker.terminate()` is called, the spec requires that the
corresponding port message queue is emptied, so no messages can be
received after the call, even if they were sent from the worker before
it was terminated.
The spec doesn't require this of `self.close()`, and since Deno uses
different channels to send messages and to notify that the worker was
closed, messages might still arrive after the worker is known to be
closed, which are currently being dropped. This change fixes that.
The fix involves two parts: one on the JS side and one on the Rust side.
The JS side was using the `#terminated` flag to keep track of whether
the worker is known to be closed, without distinguishing whether further
messages should be dropped or not. This PR changes that flag to an
enum `#state`, which can be one of `"RUNNING"`, `"CLOSED"` or
`"TERMINATED"`.
The Rust side was removing the `WorkerThread` struct from the workers
table when a close control was received, regardless of whether there
were any messages left to read, which made any subsequent calls to
`op_host_recv_message` return `Ok(None)`, as if there were no more
messages. This change instead waits for both a close control and for
the message channel's sender to be closed before the worker thread is
removed from the table.
Classic worker scripts are now executed in the context of a Tokio
runtime. This does mean we cannot spawn more Tokio runtimes in
"op_worker_sync_fetch". We instead spawn a new thread there, which can
create a new Tokio runtime that we use to block the worker thread.
This commit implements classic workers, but only when the `--enable-testing-features-do-not-use` flag is provided. This change is not user facing. Classic workers are used extensively in WPT tests. The classic workers do not support loading from disk, and do not support TypeScript.
Co-authored-by: Luca Casonato <hello@lcas.dev>
* refactor(ops): return BadResource errors in ResourceTable calls
Instead of relying on callers to map Options to Results via `.ok_or_else(bad_resource_id)` at over 176 different call sites ...
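From the JavaScript side, the observable effect is that ops hitting a rid that
is no longer in the resource table surface a consistent
`Deno.errors.BadResource`; a rough sketch (file name illustrative):

```ts
const file = await Deno.open("data.txt", { read: true, write: true, create: true });
const rid = file.rid;
file.close(); // removes the resource from the resource table
try {
  await Deno.read(rid, new Uint8Array(16)); // rid no longer exists
} catch (err) {
  console.log(err instanceof Deno.errors.BadResource); // true
}
```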
This commit removes the implementation of "native plugins" and replaces
it with an FFI API.
Effectively, the "Deno.openPlugin" API was replaced with the "Deno.dlopen" API.
This commit moves the implementation of net-related APIs available on the
"Deno" namespace to the "deno_net" extension.
The following APIs were moved:
- Deno.listen()
- Deno.connect()
- Deno.listenTls()
- Deno.serveHttp()
- Deno.shutdown()
- Deno.resolveDns()
- Deno.listenDatagram()
- Deno.startTls()
- Deno.Conn
- Deno.Listener
- Deno.DatagramConn
This commit changes "op_http_response_write" to first send response chunk
and then poll the underlying HTTP connection.
Previously after writing a chunk of response HTTP connection wasn't polled
and thus data wasn't written to the socket until after next op interacting
with the connection.
Waiting on the next request in the Deno.serveHttp() API hung
when responses were using a ReadableStream. This was caused
by the op_http_request_next op never being woken after the
response was fully written. This commit adds a waker field to
DenoService which is called after a response is finished.
This commit adds "CancelHandle" to "ConnResource" and changes
"op_http_next_request" to await for the cancel signal. In turn
when async iterating over "Deno.HttpConn" the iterator breaks
upon closing of the resource.
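A minimal sketch of the affected pattern (port and response body are
illustrative); once the connection resource is closed, the inner `for await`
loop below now ends instead of hanging on the next request:

```ts
const listener = Deno.listen({ port: 8080 });
for await (const conn of listener) {
  handleHttp(conn);
}

async function handleHttp(conn: Deno.Conn) {
  const httpConn = Deno.serveHttp(conn);
  for await (const requestEvent of httpConn) {
    await requestEvent.respondWith(new Response("hello"));
  }
  // Reaching this point means the connection resource was closed.
}
```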
In #9118, TLS streams were split into a "read half" and a "write half"
using tokio::io::split() to allow concurrent Conn#read() and
Conn#write() calls without one blocking the other. However, this
introduced a bug: outgoing data gets discarded when the TLS stream is
gracefully closed, because the read half is closed too early, before all
TLS control data has been received.
Fixes: #9692
Fixes: #10049
Fixes: #10296
Fixes: denoland/deno_std#750
Extensions provide a declarative way to extend "JsRuntime" (ops, state, JS or middleware).
This allows for:
- `op_crates` to be plug-and-play & self-contained, reducing complexity leaked to consumers
- op middleware (like metrics_op) to be opt-in and for new middleware (unstable, tracing,...)
- `MainWorker` and `WebWorker` to be composable, allowing users to extend workers with their ops whilst benefiting from the other infrastructure (inspector, etc...)
In short, extensions improve Deno's modularity, reducing complexity and leaky abstractions for embedders and the internal codebase.
`InvalidDNSNameError` is thrown when a string is not a valid hostname,
e.g. it contains invalid characters, or starts with a numeric digit. It
does not involve a (failed) DNS lookup.
This commit adds a "permissions" option to the test definitions
which allows tests to run with different permission sets than
the process's permissions.
The change is only in effect within the test function; once the
test has completed, the original process permission set is restored.
Test permissions cannot exceed the process's permissions:
you can only narrow or drop permissions. Failure to acquire a
permission results in an error being thrown and the test case failing.
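A minimal sketch of the new option (the fixture path is illustrative); the
test body runs with only read permission even if the process was started with
broader permissions such as `--allow-all`:

```ts
Deno.test({
  name: "reads fixture with narrowed permissions",
  permissions: { read: true, write: false, net: false },
  async fn() {
    await Deno.readTextFile("./fixture.txt"); // allowed: read was granted
    // Any write or net operation here would throw PermissionDenied,
    // and requesting a permission the process doesn't have fails the test.
  },
});
```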
This commit aligns the `fetch` API and the `Request` / `Response`
classes belonging to it to the spec. This commit enables all the
relevant `fetch` WPT tests. Spec compliance is now at around 90%.
Performance is essentially identical now (within 1% of 1.9.0).
This commit fixes the URL returned from `request.url` in the HTTP server
to be fully qualified. This previously existed, but was removed and
accidentally not re-added during optimizations of the HTTP ops.
Returning a non fully qualified URL from `Request#url` is not spec
compliant.
This stabilizes Deno.ftruncate and Deno.ftruncateSync.
This is a well-known system call and the interface is
not going to change. It implicitly requires write permission,
as the file has to be opened with write access to be truncated.
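A minimal usage sketch (file name illustrative); the file must be opened with
write access:

```ts
const file = await Deno.open("data.bin", { read: true, write: true, create: true });
await Deno.write(file.rid, new Uint8Array(16));
await Deno.ftruncate(file.rid, 7); // shrink (or extend) the file to 7 bytes
Deno.ftruncateSync(file.rid);      // omit the length to truncate to 0 bytes
file.close();
```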
This commit adds allowlist support to the `--allow-run` flag.
Additionally, `Deno.permissions.query()` allows querying for specific
programs within the allowlist.
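A minimal sketch, assuming the process was started with e.g.
`deno run --allow-run=git main.ts`:

```ts
// Querying a program on the --allow-run allowlist:
const git = await Deno.permissions.query({ name: "run", command: "git" });
console.log(git.state); // "granted"

// Querying a program that is not on the allowlist:
const curl = await Deno.permissions.query({ name: "run", command: "curl" });
console.log(curl.state); // "prompt" (or "denied")
```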
This commit adds blob URL support. Blob URLs are stored in process-global
storage that can be accessed from all workers and the module loader.
Blob URLs can be created using `URL.createObjectURL` and revoked
using `URL.revokeObjectURL`.
This commit does not add support for `fetch`ing blob URLs. This will be
added in a follow up commit.
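A minimal sketch of the API (the blob contents are illustrative); because the
URL is registered in process-global storage, it can also be resolved by the
module loader and from workers:

```ts
const blob = new Blob([`export const greeting = "hello";`], {
  type: "application/javascript",
});
const url = URL.createObjectURL(blob); // e.g. "blob:null/<uuid>"
const mod = await import(url);         // resolved via the global blob store
console.log(mod.greeting);             // "hello"
URL.revokeObjectURL(url);              // removes the entry from the store
```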
- Improves op performance.
- Handle op-metadata (errors, promise IDs) explicitly in the op-layer vs
per op-encoding (aka: out-of-payload).
- Remove shared queue & custom "asyncHandlers", all async values are
returned in batches via js_recv_cb.
- The op-layer should be thought of as simple function calls with little
indirection or translation besides the conceptually straightforward
serde_v8 bijections.
- Preserve concepts of json/bin/min as semantic groups of their
  inputs/outputs instead of their op-encoding strategy; preserving these
  groups will also facilitate partial transitions over to the V8 Fast API for
  the "min" and "bin" groups
This commit moves the implementation of bin ops to the "deno_core" crate
and unifies logic between bin ops and json ops to reuse
as much code as possible (both in Rust and JavaScript).
This commit rewrites "dispatch_minimal" into "dispatch_buffer".
It's part of an effort to unify JS interface for ops for both json
and minimal (buffer) ops.
Before this commit "minimal ops" could be either sync or async
depending on the return type from the op, but this commit changes
it to have separate signatures for sync and async ops (just like
in case of json ops).
This commit starts splitting out the deno_web op crate into multiple
smaller crates. This commit splits out WebIDL and URL API, but in the
future I want to split out each spec into its own crate. That means we
will have (in rough order of loading): `webidl`, `dom`, `streams`,
`console`, `encoding`, `url`, `file`, `fetch`, `websocket`, and
`webgpu` crates.
This commit makes "Deno.link" and "Deno.linkSync" stable.
The permission required has been changed to read-write to
ensure one cannot escape the sandbox.
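A minimal usage sketch (file names illustrative); both `--allow-read` and
`--allow-write` are now required:

```ts
await Deno.writeTextFile("original.txt", "hello");
await Deno.link("original.txt", "hardlink.txt"); // create a hard link
console.log(await Deno.readTextFile("hardlink.txt")); // "hello"
```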
Previously, calling `Process#kill()` after the process had exited would
sometimes throw a `TypeError` on Windows. After this patch, it will
throw `NotFound` instead, just like other platforms.
This patch also fixes flakiness of the `runKillAfterStatus` test on
Windows.
This commit adds a new option to the "Worker" Web API that allows
configuring permissions.
The new "Worker.deno.permissions" option can be used to grant a limited
permission set to the worker thread by either:
- inheriting the parent thread's permission set
- using a limited subset of the parent thread's permissions
- revoking all permissions (full sandbox)
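A minimal sketch of the option (the module path and permission set are
illustrative, and the exact option shape may differ between versions):

```ts
const worker = new Worker(new URL("./worker.ts", import.meta.url).href, {
  type: "module",
  deno: {
    permissions: {
      net: false,       // revoke network access in the worker
      read: ["./data"], // narrow read access to a single directory
    },
  },
});
```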
In order to achieve this functionality, "CliModuleLoader"
was modified to accept "initial permissions", which are used
for top-level module loading (i.e. the parent thread's permission set
is used to load the worker's top-level module).
Allowlist checking already uses hosts but for some reason
requests, revokes and the runtime permissions API use URLs.
- BREAKING(lib.deno.unstable.d.ts): Change
NetPermissionDescriptor::url to NetPermissionDescriptor::host
- fix(runtime/permissions): Don't add whole URLs to the
allowlist on request
- fix(runtime/permissions): Harden strength semantics:
({ name: "net", host: "127.0.0.1" } is stronger than
{ name: "net", host: "127.0.0.1:8000" }) for blocklisting
- refactor(runtime/permissions): Use tuples for hosts, make
the host optional in Permissions::{query_net, request_net, revoke_net}()
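A minimal sketch of the changed descriptor (hosts are illustrative):

```ts
// Net descriptors now take a `host` (optionally with a port) instead of a URL:
const status = await Deno.permissions.query({ name: "net", host: "127.0.0.1:8000" });
console.log(status.state);

// Revoking the bare host is "stronger": it also covers port-qualified forms.
await Deno.permissions.revoke({ name: "net", host: "127.0.0.1" });
```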
This commit migrates all ops to use the new resource table
and "AsyncRefCell".
The old implementation of the resource table was completely
removed and all code referencing it was updated to use the
new system.
This commit adds a new function that is an asynchronous version of
`resolve_addr` using `tokio::net::lookup_host`, and accordingly, renames
the synchronous version to `resolve_addr_sync`.
This allows async ops to resolve hosts without blocking.
This commit moves the Deno JS runtime, ops, permissions and
inspector implementations to the new "deno_runtime" crate located
in the "runtime/" directory.
Details in "runtime/README.md".
Co-authored-by: Ryan Dahl <ry@tinyclouds.org>