The initial implementation of `importScripts()` in #11338 used
`reqwest`'s default client to fetch HTTP scripts, which meant it would
not use certificates or other fetching configuration passed by command
line flags. This change fixes it.
A bug was fixed that could cause a hang when a method was
called on a TlsConn object that had thrown an exception earlier.
Additionally, a bug was fixed that caused TlsConn.write() to not
completely flush large buffers (>64kB) to the socket.
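A minimal sketch of the kind of call that is affected; the host, port, and write-loop pattern are illustrative and not part of this change:

```ts
// Connect over TLS and write a buffer larger than 64 kB.
const conn = await Deno.connectTls({ hostname: "example.com", port: 443 });

const payload = new Uint8Array(128 * 1024); // > 64 kB
let written = 0;
while (written < payload.length) {
  // Deno.Conn.write() may report a partial write, so callers loop;
  // with this fix, large writes are actually flushed to the socket.
  written += await conn.write(payload.subarray(written));
}
conn.close();
```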
The public `TlsConn.handshake()` API is scheduled for inclusion in the
next minor release. See https://github.com/denoland/deno/pull/12467.
This commit annotates errors returned from FS Deno APIs to include
paths that were passed to the API calls.
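A hedged sketch of the effect; the exact message format shown is illustrative:

```ts
try {
  await Deno.readTextFile("./does-not-exist.txt");
} catch (err) {
  // The error message now mentions the path that was passed in, e.g.
  // "No such file or directory (os error 2), open './does-not-exist.txt'"
  console.error((err as Error).message);
}
```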
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This panic could happen in the following cases:
- A non-fatal error being thrown from a worker, that doesn't terminate
the worker's execution, but propagates to the main thread without
being handled, and makes the main thread terminate.
- A nested worker being alive while its parent worker gets terminated.
- A race condition if the main event loop terminates the worker as part
of its last task, but the worker doesn't fully terminate before the
main event loop stops running.
This panic happens because a worker's event loop should have pending ops
as long as the worker isn't closed or terminated – but if an event loop
finishes running while it has living workers, its associated
`WorkerThread` structs will be dropped, closing the channels that keep
those ops pending.
This change adds a `Drop` implementation to `WorkerThread`, which
terminates the worker without waiting for a response. This fixes the
panic, and makes it so nested workers are automatically terminated once
any of their ancestors is closed or terminated.
This change also refactors a worker's termination code into a
`WorkerThread::terminate()` method.
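A hedged sketch of the observable behavior; the file names are hypothetical:

```ts
// main.ts: parent.ts is assumed to spawn its own nested worker, e.g.
//   new Worker(new URL("./child.ts", import.meta.url).href, { type: "module" });
const parent = new Worker(new URL("./parent.ts", import.meta.url).href, {
  type: "module",
});

// Terminating the parent now also terminates its nested worker, instead of
// leaving it alive and panicking when the main event loop finishes.
parent.terminate();
```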
Closes #11342.
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
When `worker.terminate()` is called, the spec requires that the
corresponding port message queue is emptied, so no messages can be
received after the call, even if they were sent from the worker before
it was terminated.
The spec doesn't require this of `self.close()`, and since Deno uses
different channels to send messages and to notify that the worker was
closed, messages might still arrive after the worker is known to be
closed; such messages are currently dropped. This change fixes that.
The fix involves two parts: one on the JS side and one on the Rust side.
The JS side was using the `#terminated` flag to keep track of whether
the worker is known to be closed, without distinguishing whether further
messages should be dropped or not. This PR changes that flag to an
enum `#state`, which can be one of `"RUNNING"`, `"CLOSED"` or
`"TERMINATED"`.
The Rust side was removing the `WorkerThread` struct from the workers
table when a close control was received, regardless of whether there
were any messages left to read, which made any subsequent calls to
`op_host_recv_message` return `Ok(None)`, as if there were no more
messages. This change instead waits for both a close control and for
the message channel's sender to be closed before the worker thread is
removed from the table.
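A hedged sketch of the behavior this enables; the worker file name is hypothetical:

```ts
// worker.ts is assumed to contain:
//   self.postMessage("done");
//   self.close();
const worker = new Worker(new URL("./worker.ts", import.meta.url).href, {
  type: "module",
});
worker.onmessage = (e) => {
  // With this change the message is still delivered, even though the worker
  // closed itself right after posting it.
  console.log(e.data); // "done"
};
```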
Classic worker scripts are now executed in the context of a Tokio
runtime. This means we cannot spawn additional Tokio runtimes in
"op_worker_sync_fetch". Instead, we spawn a new thread there that
creates its own Tokio runtime, which we can use to block the worker
thread.
This commit implements classic workers, but only when the `--enable-testing-features-do-not-use` flag is provided. This change is not user facing. Classic workers are used extensively in WPT tests. The classic workers do not support loading from disk, and do not support TypeScript.
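A hedged sketch of how such a worker might be created; the script URL is illustrative, and the snippet assumes Deno was started with `--enable-testing-features-do-not-use`:

```ts
// Classic workers can only be loaded over HTTP(S), not from disk, and the
// script must be plain JavaScript (no TypeScript).
const worker = new Worker("http://localhost:8000/classic_worker.js", {
  type: "classic",
});
worker.onmessage = (e) => console.log(e.data);
```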
Co-authored-by: Luca Casonato <hello@lcas.dev>
* refactor(ops): return BadResource errors in ResourceTable calls
Instead of relying on callers to map Options to Results via `.ok_or_else(bad_resource_id)` at over 176 different call sites ...
This commit removes the implementation of "native plugins" and replaces
it with an FFI API.
Effectively, the "Deno.openPlugin" API was replaced with the "Deno.dlopen" API.
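A hedged sketch of the replacement API; the library path and symbol definition are illustrative, and the call requires FFI permission:

```ts
// Open a dynamic library and call a foreign function through FFI.
const dylib = Deno.dlopen("./libexample.so", {
  add: { parameters: ["i32", "i32"], result: "i32" },
});
console.log(dylib.symbols.add(2, 3)); // 5
dylib.close();
```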
This commit moves the implementation of net-related APIs available on the "Deno"
namespace to the "deno_net" extension.
The following APIs were moved:
- Deno.listen()
- Deno.connect()
- Deno.listenTls()
- Deno.serveHttp()
- Deno.shutdown()
- Deno.resolveDns()
- Deno.listenDatagram()
- Deno.startTls()
- Deno.Conn
- Deno.Listener
- Deno.DatagramConn
This commit changes "op_http_response_write" to first send the response chunk
and then poll the underlying HTTP connection.
Previously, after writing a response chunk, the HTTP connection wasn't polled,
and thus data wasn't written to the socket until the next op interacting
with the connection.
Waiting on the next request in the Deno.serveHttp() API hung
when responses were using a ReadableStream. This was caused
by the op_http_request_next op never being woken after the
response was fully written. This commit adds a waker field to
DenoService which is called after the response is finished.
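A hedged sketch of the scenario that used to hang; the port and response body are illustrative:

```ts
const listener = Deno.listen({ port: 8080 });
const conn = await listener.accept();
const httpConn = Deno.serveHttp(conn);

while (true) {
  // Before this fix, this await could hang once a streaming response
  // had been fully written.
  const requestEvent = await httpConn.nextRequest();
  if (requestEvent === null) break;

  const body = new ReadableStream<Uint8Array>({
    start(controller) {
      controller.enqueue(new TextEncoder().encode("hello"));
      controller.close();
    },
  });
  await requestEvent.respondWith(new Response(body));
}
```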
This commit adds a "CancelHandle" to "ConnResource" and changes
"op_http_next_request" to await the cancel signal. As a result,
when async iterating over "Deno.HttpConn", the iterator breaks
upon closing of the resource.
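A hedged sketch of the new behavior; the port and timing are illustrative:

```ts
const listener = Deno.listen({ port: 8080 });
const conn = await listener.accept();
const httpConn = Deno.serveHttp(conn);

// Closing the resource now makes the async iteration below finish instead of
// the loop waiting forever for another request.
setTimeout(() => httpConn.close(), 1000);

for await (const requestEvent of httpConn) {
  await requestEvent.respondWith(new Response("ok"));
}
console.log("connection closed, iteration ended");
```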
In #9118, TLS streams were split into a "read half" and a "write half"
using tokio::io::split() to allow concurrent Conn#read() and
Conn#write() calls without one blocking the other. However, this
introduced a bug: outgoing data gets discarded when the TLS stream is
gracefully closed, because the read half is closed too early, before all
TLS control data has been received.
Fixes: #9692
Fixes: #10049
Fixes: #10296
Fixes: denoland/deno_std#750
Extensions allow declaratively extending "JsRuntime" (with ops, state, JS, or middleware).
This allows for:
- `op_crates` to be plug-and-play & self-contained, reducing complexity leaked to consumers
- op middleware (like metrics_op) to be opt-in and for new middleware (unstable, tracing,...)
- `MainWorker` and `WebWorker` to be composable, allowing users to extend workers with their ops whilst benefiting from the other infrastructure (inspector, etc...)
In short, extensions improve Deno's modularity, reducing complexity and leaky abstractions for both embedders and the internal codebase.
`InvalidDNSNameError` is thrown when a string is not a valid hostname,
e.g. it contains invalid characters, or starts with a numeric digit. It
does not involve a (failed) DNS lookup.
This commit adds a "permissions" option to the test definitions,
which allows tests to run with different permission sets than
the process's permissions.
The change is only in effect within the test function; once the
test has completed, the original process permission set is restored.
Test permissions cannot exceed the process's permissions:
you can only narrow or drop permissions. Failure to acquire a
permission results in an error being thrown and the test case failing.
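A hedged sketch of the new option; the test body and fixture file are hypothetical:

```ts
Deno.test({
  name: "reads a fixture without network access",
  // Narrow the process's permissions for the duration of this test.
  permissions: { read: true, net: false },
  async fn() {
    const data = await Deno.readTextFile("./fixture.txt");
    if (data.length === 0) throw new Error("fixture is empty");
  },
});
```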
This commit aligns the `fetch` API and the `Request` / `Response`
classes belonging to it to the spec. This commit enables all the
relevant `fetch` WPT tests. Spec compliance is now at around 90%.
Performance is essentially identical now (within 1% of 1.9.0).
This commit fixes the URL returned from `request.url` in the HTTP server
to be fully qualified. This previously existed, but was removed and
accidentally not re-added during optimizations of the HTTP ops.
Returning a non fully qualified URL from `Request#url` is not spec
compliant.
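A hedged sketch; the port and printed URL are illustrative:

```ts
const listener = Deno.listen({ port: 8080 });
const conn = await listener.accept();
const httpConn = Deno.serveHttp(conn);
const requestEvent = await httpConn.nextRequest();
if (requestEvent) {
  // Fully qualified again, e.g. "http://localhost:8080/some/path"
  // rather than just "/some/path".
  console.log(requestEvent.request.url);
  await requestEvent.respondWith(new Response("ok"));
}
```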
This stabilizes Deno.ftruncate and Deno.ftruncateSync.
This is a well-known system call and the interface is
not going to change. It implicitly requires write permission,
as the file has to be opened with write access to be truncated.
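A hedged sketch of the now-stable API; the file name and length are illustrative:

```ts
// Requires --allow-read and --allow-write for the open call below.
const file = await Deno.open("data.txt", { read: true, write: true });
await Deno.ftruncate(file.rid, 7); // truncate the open file to 7 bytes
file.close();
```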
This commit adds allowlist support to the `--allow-run` flag.
Additionally, `Deno.permissions.query()` allows querying for specific
programs within the allowlist.
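A hedged sketch, assuming the process was started with `--allow-run=git`; the descriptor shape and states shown are illustrative:

```ts
const git = await Deno.permissions.query({ name: "run", command: "git" });
console.log(git.state); // "granted"

const curl = await Deno.permissions.query({ name: "run", command: "curl" });
console.log(curl.state); // not in the allowlist, so not "granted"
```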
This commit adds blob URL support. Blob URLs are stored in process-global
storage that can be accessed from all workers and the module
loader. Blob URLs can be created using `URL.createObjectURL` and revoked
using `URL.revokeObjectURL`.
This commit does not add support for `fetch`ing blob URLs. This will be
added in a follow up commit.
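A hedged sketch of the new API surface; the blob contents and printed URL format are illustrative:

```ts
const blob = new Blob(["export const hello = 'world';"], {
  type: "application/javascript",
});
const url = URL.createObjectURL(blob);
console.log(url); // e.g. "blob:null/9a4f...", visible to all workers

// Note: fetch()ing this URL is not supported by this commit yet.
URL.revokeObjectURL(url);
```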
- Improves op performance.
- Handle op-metadata (errors, promise IDs) explicitly in the op-layer vs
per op-encoding (aka: out-of-payload).
- Remove shared queue & custom "asyncHandlers", all async values are
returned in batches via js_recv_cb.
- The op-layer should be thought of as simple function calls with little
indirection or translation besides the conceptually straightforward
serde_v8 bijections.
- Preserve concepts of json/bin/min as semantic groups of their
inputs/outputs instead of their op-encoding strategy; preserving these
groups will also facilitate partial transitions over to the v8 Fast API
for the "min" and "bin" groups.
This commit moves the implementation of bin ops to the "deno_core" crate,
as well as unifying logic between bin ops and json ops to reuse
as much code as possible (both in Rust and JavaScript).
This commit rewrites "dispatch_minimal" into "dispatch_buffer".
It's part of an effort to unify the JS interface for ops for both json
and minimal (buffer) ops.
Before this commit, "minimal ops" could be either sync or async
depending on the return type from the op, but this commit changes
it to have separate signatures for sync and async ops (just like
in the case of json ops).
This commit starts splitting out the deno_web op crate into multiple
smaller crates. This commit splits out WebIDL and URL API, but in the
future I want to split out each spec into its own crate. That means we
will have (in rough order of loading): `webidl`, `dom`, `streams`,
`console`, `encoding`, `url`, `file`, `fetch`, `websocket`, and
`webgpu` crates.
This commit makes "Deno.link" and "Deno.linkSync" stable.
The permission required has been changed to read-write to
ensure one cannot escape the sandbox.
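A hedged sketch of the now-stable API; the file names are illustrative:

```ts
// Requires both --allow-read and --allow-write.
await Deno.writeTextFile("original.txt", "hello");
await Deno.link("original.txt", "hardlink.txt"); // create a hard link
console.log(await Deno.readTextFile("hardlink.txt")); // "hello"
```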