This commit removes three unstable Deno APIs:
- "Deno.spawn()"
- "Deno.spawnSync()"
- "Deno.spawnChild()"
These APIs were replaced by a unified "Deno.Command" API.
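A minimal sketch of the replacement API (the command and arguments are illustrative):
```ts
// Spawn a subprocess and collect its output; covers the old spawn()/spawnSync() use cases.
const command = new Deno.Command("echo", { args: ["hello"] });
const { code, stdout } = await command.output();
console.log(code, new TextDecoder().decode(stdout));
```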
Uses the SeqOneByteString optimization to pass zero-copy `&str` arguments in
fast calls.
- [x] Depends on https://github.com/denoland/rusty_v8/pull/1129
- [x] Depends on
https://chromium-review.googlesource.com/c/v8/v8/+/4036884
- [x] Disable in async ops
- [x] Make it work with owned `String` with an extra alloc in fast path.
- [x] Support `Cow<'_, str>`. Owned for slow case, Borrowed for fast
case
```rust
#[op]
fn op_string_len(s: &str) -> u32 {
  // `s` is borrowed zero-copy from the V8 string in the fast path.
  s.len() as u32
}
```
This PR adds copies of several unstable APIs that are available
in the "Deno[Deno.internal].nodeUnstable" namespace.
These copies do not perform the unstable check (i.e. they don't require
the "--unstable" flag to be present). Otherwise they work exactly
the same, including permission checks.
These APIs are not meant to be used by users directly and
can change at any time.
Copies of the following APIs are available in that namespace:
- Deno.spawnChild
- Deno.spawn
- Deno.spawnSync
- Deno.serve
- Deno.upgradeHttpRaw
- Deno.listenDatagram
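A hedged sketch of reaching the internal namespace (the `as any` casts only bypass the public types; the output shape is assumed):
```ts
// Internal copy of Deno.spawnSync: skips the --unstable check but still
// performs the usual permission checks. Not intended for direct use.
const nodeUnstable = (Deno as any)[(Deno as any).internal].nodeUnstable;
const output = nodeUnstable.spawnSync("echo", { args: ["hi"] });
console.log(new TextDecoder().decode(output.stdout));
```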
This commit stabilizes "Deno.consoleSize()" API.
There is one change compared to the previous unstable API:
the API no longer accepts any arguments. The console size
is established by querying syscalls for the stdio streams at
file descriptors 0, 1 and 2.
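For illustration, the stabilized call now looks like this:
```ts
// No arguments: the size is derived from the stdio streams (fd 0, 1 and 2).
const { columns, rows } = Deno.consoleSize();
console.log(`terminal is ${columns}x${rows}`);
```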
This commit removes the calls to `expect()` on `std::rc::Rc`, which caused
Deno to panic in certain situations. We now return an error if the `Rc`
still has other references.
Fixes #9360
Fixes #13345
Fixes #13926
Fixes #16241
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This change adds `windowsRawArguments` to `SpawnOptions`. The option enables
skipping the default quoting and escaping when creating the command on
Windows.
The option works in a similar way to `windowsVerbatimArguments` in the
child_process.spawn options in Node.js, and is necessary for simulating
it in `std/node`.
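A hedged sketch of the option (Windows-only behavior; the command and arguments are assumptions):
```ts
// Pass the arguments verbatim, skipping the default quoting/escaping,
// similar to windowsVerbatimArguments in Node.js.
const output = await Deno.spawn("cmd", {
  args: ["/d", "/s", "/c", 'echo "already quoted"'],
  windowsRawArguments: true,
});
console.log(output.code);
```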
closes #8852
This commit introduces two new buffer wrapper types to `deno_core`. The
main benefit of these new wrappers is that they can wrap a number of
different underlying buffer types. This allows for a more flexible read
and write API on resources that will require less copying of data
between different buffer representations.
- `BufView` is a read-only view onto a buffer. It can be backed by
`ZeroCopyBuf`, `Vec<u8>`, and `bytes::Bytes`.
- `BufViewMut` is a read-write view onto a buffer. It can be cheaply
converted into a `BufView`. It can be backed by `ZeroCopyBuf` or
`Vec<u8>`.
Both new buffer views have a cursor. This means that the start point of
the view can be constrained to write / read from just a slice of the
view. Only the start point of the slice can be adjusted. The end point
is fixed. To adjust the end point, the underlying buffer needs to be
truncated.
Readable resources have been changed to better cater to resources that
do not support BYOB reads. The basic `read` method now returns a
`BufView` instead of taking a `ZeroCopyBuf` to fill. This allows the
operation to return buffers that the resource has already allocated,
instead of forcing the caller to allocate the buffer. BYOB reads are
still very useful for resources that support them, so a new `read_byob`
method has been added that takes a `BufViewMut` to fill. `op_read`
uses `read_byob` if the resource supports it, falling back to `read`
and performing an additional copy if it does not. For
Rust->JS reads this change should have no impact, but for Rust->Rust
reads, this allows the caller to avoid an additional copy in many
scenarios. This combined with the support for `BufView` to be backed by
`bytes::Bytes` allows us to avoid one data copy when piping from a
`fetch` response into an `ext/http` response.
Writable resources have been changed to take a `BufView` instead of a
`ZeroCopyBuf` as an argument. This allows for less copying of data in
certain scenarios, as described above. Additionally a new
`Resource::write_all` method has been added that takes a `BufView` and
continually attempts to write the resource until the entire buffer has
been written. Certain resources like files can override this method to
provide a more efficient `write_all` implementation.
Stop allowing clippy::derive-partial-eq-without-eq and fix warnings
about deriving PartialEq without also deriving Eq.
In one case I removed the PartialEq because it a) wasn't necessary,
and b) was sketchy because it was comparing floating point numbers.
IMO, that's a good argument for enforcing the lint rule, because it
would most likely have been caught during review if it had been enabled.
This commit allows the Node compatibility layer to skip
environment variable permission checks when --unstable
is passed and the variable name is one that Node uses.
Fixes: https://github.com/denoland/deno/issues/15890
This commit removes "WorkerOptions.deno" option as a boolean,
as well as "WorkerOptions.deno.namespace" settings. Starting
with this commit all workers have access to "Deno" namespace
by default.
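For illustration (the worker path is hypothetical), spawning a module worker no longer needs any `deno`-specific options for the script to use the `Deno` namespace:
```ts
// The worker script can call Deno APIs directly, subject to permission
// checks; no `deno: { namespace: true }` option is required anymore.
new Worker(new URL("./worker.ts", import.meta.url).href, { type: "module" });
```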
Calling `worker.terminate()` used to kill the worker's isolate and
then block until the worker's thread finished. This blocks the calling
thread if the worker's event loop was blocked in a sync op (as with
`Deno.sleepSync`), which wasn't realized at the time, but since the
worker's isolate was killed at that moment, it would not block the
calling thread if the worker was in a JS endless loop.
However, in #12831, in order to work around a V8 bug, worker
termination was changed to first set a signal to let the worker event
loop know that termination has been requested, and only kill the
isolate if the event loop has not finished after 2 seconds. However,
this change kept the blocking, which meant that JS endless loops in
the worker now blocked the parent for 2 seconds.
As it turns out, after #12831 it is fine to signal termination and
even kill the worker's isolate without waiting for the thread to
finish, so this change does that. However, that might leave the async
ops that receive messages and control data from the worker pending
after `worker.terminate()`, which leads to odd results from the op
sanitizer. Therefore, we set up a `CancelHandler` to cancel those ops
when the worker is terminated.
This commit:
- removes "fmt_errors::PrettyJsError" in favor of "format_js_error" fn
- removes "deno_core::JsError::create" and
"deno_core::RuntimeOptions::js_error_create_fn"
- adds new option to "deno_runtime::ops::worker_host::init"
This commit adds "Deno.upgradeHttp" API, which
allows to "hijack" connection and switch protocols, to eg.
implement WebSocket required for Node compat.
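A minimal sketch, assuming the unstable signature `Deno.upgradeHttp(req): Promise<[Deno.Conn, Uint8Array]>`:
```ts
async function handle(req: Request) {
  const [conn, firstPacket] = await Deno.upgradeHttp(req);
  // `conn` is the raw connection; `firstPacket` holds any bytes already read
  // past the HTTP request head. Speak whatever protocol is needed from here.
  await conn.write(firstPacket); // echo back, for illustration only
  conn.close();
}
```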
Co-authored-by: crowlkats <crowlkats@toaxl.com>
Co-authored-by: Ryan Dahl <ry@tinyclouds.org>
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Adds another callback to WebWorkerOptions that allows executing
some modules before the actual worker code executes. This allows setting up
the Node global using std/node.
Add an op to list the network interfaces on the system.
Prep work for #8137 and `os.networkInterfaces()` Node compat in std.
Refs denoland/deno_std#1436.
Fixes "op_set_exit_code" by sharing a single "Arc" between
all workers (via "op state") instead of having a "global" value stored in
"deno_runtime" crate. As a consequence setting an exit code is always
scoped to a tree of workers, instead of being overridable if there are
multiple worker tree (like in "deno test --jobs" subcommand).
Refactored "cli/main.rs" functions to return "Result<i32, AnyError>" instead
of "Result<(), AnyError>" so they can return exit code.
Set the exit code to use if none is provided to Deno.exit(), or when
Deno exits naturally.
Needed for process.exitCode Node compat. Paves the way for #12888.
This commit adds an ability to "ref" or "unref" pending ops.
Up to this point Deno had a notion of "async ops" and "unref async ops";
the former keep the event loop alive, while the latter do not block the
event loop from finishing. It was not possible to change between op types
after dispatching; one had to decide which type to use before dispatch.
Instead of storing ops in two separate "FuturesUnordered" collections,
now ops are stored in a single collection, with supplemental "HashSet"
storing ids of promises that were "unrefed".
Two APIs were added to "Deno.core":
"Deno.core.refOp(promiseId)" which allows to mark promise id
to be "refed" and keep event loop alive (the default behavior)
"Deno.core.unrefOp(promiseId)" which allows to mark promise
id as "unrefed" which won't block event loop from exiting
This allows resources to be "streams" by implementing read/write/shutdown. These streams are implicit since their nature (read/write/duplex) isn't known until called, but we could easily add another method to explicitly tag resources as streams.
`op_read/op_write/op_shutdown` are now builtin ops provided by `deno_core`
Note: this current implementation is simple & straightforward but it results in an additional alloc per read/write call
Closes #12556
The initial implementation of `importScripts()` in #11338 used
`reqwest`'s default client to fetch HTTP scripts, which meant it would
not use certificates or other fetching configuration passed by command
line flags. This change fixes it.
A bug was fixed that could cause a hang when a method was
called on a TlsConn object that had thrown an exception earlier.
Additionally, a bug was fixed that caused TlsConn.write() to not
completely flush large buffers (>64kB) to the socket.
The public `TlsConn.handshake()` API is scheduled for inclusion in the
next minor release. See https://github.com/denoland/deno/pull/12467.
This commit annotates errors returned from FS Deno APIs to include
paths that were passed to the API calls.
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This panic could happen in the following cases:
- A non-fatal error being thrown from a worker, that doesn't terminate
the worker's execution, but propagates to the main thread without
being handled, and makes the main thread terminate.
- A nested worker being alive while its parent worker gets terminated.
- A race condition if the main event loop terminates the worker as part
of its last task, but the worker doesn't fully terminate before the
main event loop stops running.
This panic happens because a worker's event loop should have pending ops
as long as the worker isn't closed or terminated – but if an event loop
finishes running while it has living workers, its associated
`WorkerThread` structs will be dropped, closing the channels that keep
those ops pending.
This change adds a `Drop` implementation to `WorkerThread`, which
terminates the worker without waiting for a response. This fixes the
panic, and makes it so nested workers are automatically terminated once
any of their ancestors is closed or terminated.
This change also refactors a worker's termination code into a
`WorkerThread::terminate()` method.
Closes #11342.
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
When `worker.terminate()` is called, the spec requires that the
corresponding port message queue is emptied, so no messages can be
received after the call, even if they were sent from the worker before
it was terminated.
The spec doesn't require this of `self.close()`, and since Deno uses
different channels to send messages and to notify that the worker was
closed, messages might still arrive after the worker is known to be
closed; such messages are currently being dropped. This change fixes that.
The fix involves two parts: one on the JS side and one on the Rust side.
The JS side was using the `#terminated` flag to keep track of whether
the worker is known to be closed, without distinguishing whether further
messages should be dropped or not. This PR changes that flag to an
enum `#state`, which can be one of `"RUNNING"`, `"CLOSED"` or
`"TERMINATED"`.
The Rust side was removing the `WorkerThread` struct from the workers
table when a close control was received, regardless of whether there
were any messages left to read, which made any subsequent calls to
`op_host_recv_message` return `Ok(None)`, as if there were no more
messages. This change instead waits for both a close control and for
the message channel's sender to be closed before the worker thread is
removed from the table.
Classic worker scripts are now executed in the context of a Tokio
runtime. This means we cannot spawn more Tokio runtimes in
"op_worker_sync_fetch". We instead spawn a new thread there, which can
create a new Tokio runtime that we can use to block the worker thread.
This commit implements classic workers, but only when the `--enable-testing-features-do-not-use` flag is provided. This change is not user facing. Classic workers are used extensively in WPT tests. The classic workers do not support loading from disk, and do not support TypeScript.
Co-authored-by: Luca Casonato <hello@lcas.dev>
* refactor(ops): return BadResource errors in ResourceTable calls
Instead of relying on callers to map Options to Results via `.ok_or_else(bad_resource_id)` at over 176 different call sites ...
This commit removes the implementation of "native plugins" and replaces
it with an FFI API.
Effectively, the "Deno.openPlugin" API was replaced with the "Deno.dlopen" API.
This commit moves the implementation of net-related APIs available on the
"Deno" namespace to the "deno_net" extension.
The following APIs were moved:
- Deno.listen()
- Deno.connect()
- Deno.listenTls()
- Deno.serveHttp()
- Deno.shutdown()
- Deno.resolveDns()
- Deno.listenDatagram()
- Deno.startTls()
- Deno.Conn
- Deno.Listener
- Deno.DatagramConn
This commit changes "op_http_response_write" to first send response chunk
and then poll the underlying HTTP connection.
Previously after writing a chunk of response HTTP connection wasn't polled
and thus data wasn't written to the socket until after next op interacting
with the connection.
Waiting on the next request in the Deno.serveHttp() API hung
when responses were using a ReadableStream. This was caused
by the op_http_request_next op never being woken after the
response was fully written. This commit adds a waker field to
DenoService which is called after the response is finished.
This commit adds "CancelHandle" to "ConnResource" and changes
"op_http_next_request" to await for the cancel signal. In turn
when async iterating over "Deno.HttpConn" the iterator breaks
upon closing of the resource.
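A minimal sketch of the affected pattern; the inner loop now exits once the connection resource is closed:
```ts
const listener = Deno.listen({ port: 8080 });
for await (const conn of listener) {
  (async () => {
    const httpConn = Deno.serveHttp(conn);
    for await (const { request, respondWith } of httpConn) {
      await respondWith(new Response(`Hello from ${request.url}`));
    }
    // Reaching this point means the connection resource was closed.
  })();
}
```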
In #9118, TLS streams were split into a "read half" and a "write half"
using tokio::io::split() to allow concurrent Conn#read() and
Conn#write() calls without one blocking the other. However, this
introduced a bug: outgoing data gets discarded when the TLS stream is
gracefully closed, because the read half is closed too early, before all
TLS control data has been received.
Fixes: #9692
Fixes: #10049
Fixes: #10296
Fixes: denoland/deno_std#750
Extensions allow declarative extensions to "JsRuntime" (ops, state, JS or middleware).
This allows for:
- `op_crates` to be plug-and-play & self-contained, reducing complexity leaked to consumers
- op middleware (like metrics_op) to be opt-in and for new middleware (unstable, tracing,...)
- `MainWorker` and `WebWorker` to be composable, allowing users to extend workers with their ops whilst benefiting from the other infrastructure (inspector, etc...)
In short, extensions improve Deno's modularity, reducing complexity and leaky abstractions for embedders and the internal codebase.
`InvalidDNSNameError` is thrown when a string is not a valid hostname,
e.g. it contains invalid characters, or starts with a numeric digit. It
does not involve a (failed) DNS lookup.
This commit adds a "permissions" option to the test definitions,
which allows tests to run with a different permission set than
the process's permissions.
The change is only in effect within the test function; once the
test has completed, the original process permission set is restored.
Test permissions cannot exceed the process's permissions.
You can only narrow or drop permissions; failure to acquire a
permission results in an error being thrown and the test case failing.
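A hedged sketch of the option (file paths are assumptions):
```ts
Deno.test({
  name: "reads fixtures with narrowed permissions",
  // Narrowed relative to the process's permissions; exceeding them throws.
  permissions: { read: ["./fixtures"], write: false, net: false },
  async fn() {
    const text = await Deno.readTextFile("./fixtures/hello.txt");
    console.log(text.length);
  },
});
```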
This commit aligns the `fetch` API and the `Request` / `Response`
classes belonging to it to the spec. This commit enables all the
relevant `fetch` WPT tests. Spec compliance is now at around 90%.
Performance is essentially identical now (within 1% of 1.9.0).
This commit fixes the URL returned from `request.url` in the HTTP server
to be fully qualified. This previously existed, but was removed and
accidentally not re-added during optimizations of the HTTP ops.
Returning a non fully qualified URL from `Request#url` is not spec
compliant.
This stabilizes Deno.ftruncate and Deno.ftruncateSync.
This is a well-known system call and the interface is
not going to change. It implicitly requires write permission,
as the file has to be opened with write access to be truncated.
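For illustration, using the rid-based signature from this point in the API's history (the file name is an assumption):
```ts
// The file must be opened with write access; truncate (or extend) to 7 bytes.
const file = await Deno.open("data.bin", {
  read: true,
  write: true,
  create: true,
});
await Deno.ftruncate(file.rid, 7);
file.close();
```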
This commit adds allowlist support to the `--allow-run` flag.
Additionally, `Deno.permissions.query()` allows querying for specific
programs within the allowlist.
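A hedged sketch: assuming the process was started with e.g. `--allow-run=git,deno`, a specific program on the allowlist can be queried:
```ts
const status = await Deno.permissions.query({ name: "run", command: "git" });
console.log(status.state); // "granted" when "git" is on the allowlist
```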
This commit adds blob URL support. Blob URLs are stored in a process
global storage, that can be accessed from all workers, and the module
loader. Blob URLs can be created using `URL.createObjectURL` and revoked
using `URL.revokeObjectURL`.
This commit does not add support for `fetch`ing blob URLs. This will be
added in a follow up commit.
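A minimal sketch of creating, importing and revoking a blob URL (the dynamic import relies on the module loader integration described above; `fetch`ing is explicitly not covered by this commit):
```ts
const blob = new Blob(['export const hello = "world";'], {
  type: "application/javascript",
});
const url = URL.createObjectURL(blob);
const mod = await import(url);
console.log(mod.hello); // "world"
URL.revokeObjectURL(url);
```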
- Improves op performance.
- Handle op-metadata (errors, promise IDs) explicitly in the op-layer vs
per op-encoding (aka: out-of-payload).
- Remove shared queue & custom "asyncHandlers", all async values are
returned in batches via js_recv_cb.
- The op-layer should be thought of as simple function calls with little
indirection or translation besides the conceptually straightforward
serde_v8 bijections.
- Preserve concepts of json/bin/min as semantic groups of their
inputs/outputs instead of their op-encoding strategy; preserving these
groups will also facilitate partial transitions over to the V8 Fast API
for the "min" and "bin" groups
This commit moves the implementation of bin ops to the "deno_core" crate,
as well as unifying logic between bin ops and json ops to reuse
as much code as possible (both in Rust and JavaScript).
This commit rewrites "dispatch_minimal" into "dispatch_buffer".
It's part of an effort to unify the JS interface for ops for both json
and minimal (buffer) ops.
Before this commit, "minimal ops" could be either sync or async
depending on the return type of the op, but this commit changes
it to have separate signatures for sync and async ops (just like
in the case of json ops).