Closes #11882
BREAKING CHANGE: Previously, when `--location` was set, the unique storage key was derived from the full URL of the location instead of just its origin. This change correctly uses just the origin. This may change the key under which previously persisted storage is kept, so existing data may no longer be available for the same location as before.
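A minimal sketch of the difference (only the origin-vs-URL distinction is shown; the actual key derivation Deno applies is not reproduced here):

```ts
// Hypothetical value passed via --location:
const loc = new URL("https://example.com/app/index.html?tab=1");

// Before this change, the full URL fed into the storage key, so e.g. a page
// using /app/other.html on the same origin got different persisted storage.
const oldKeySource = loc.href; // "https://example.com/app/index.html?tab=1"

// After this change, only the origin is used, matching how web storage is
// normally partitioned.
const newKeySource = loc.origin; // "https://example.com"
```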
Returns empty values in case of errors, since source lines are non-essential anyway. These errors can happen e.g. when source files change at runtime. A warning is also printed so we can track when this happens in unexpected cases besides that one.
`Window`'s `self` property and `DedicatedWorkerGlobalScope`'s `name`
property are defined as Web IDL read-only attributes with the
`[Replaceable]` extended attribute, meaning that their setter will
redefine the property as a data property with the set value, rather than
changing some internal state. Deno currently defines them as read-only
data properties instead.
Web IDL requires all attributes to be accessor properties rather than data
properties, but Deno exposes almost all of those properties as either
read-only or writable data properties. Given that, it makes sense to expose
`[Replaceable]` properties as writable as well, as is already the case with
`WindowOrWorkerGlobalScope`'s `performance` property.
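A rough sketch of the observable behavior with a writable `self`, as this change makes it (the descriptor check is only illustrative):

```ts
console.log(self === globalThis); // true

// With a read-only data property this assignment would throw in strict mode;
// with a writable ([Replaceable]-like) property it simply replaces the value.
(globalThis as any).self = 42;
console.log(globalThis.self); // 42

// `self` is now a plain data property holding the assigned value.
console.log(Object.getOwnPropertyDescriptor(globalThis, "self")?.writable); // true
```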
WebAssembly modules compiled through `WebAssembly.compile()` and similar
non-streaming APIs don't have a URL associated with them, because they
have been compiled from a buffer source. In stack traces, V8 will use
a URL such as `wasm://wasm/d1c677ea`, derived from a hash of the module.
However, wasm modules compiled through streaming APIs, like
`WebAssembly.compileStreaming()`, do have a known URL, which can be
obtained from the `Response` object passed into the streaming APIs. And
as per the developer-facing display conventions in the WebAssembly
Web API spec, this URL should be used in stack traces. This change
implements that.
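A sketch of the two code paths (the module URL is hypothetical and assumed to export a function that throws):

```ts
const url = "https://example.com/crash.wasm";

// Non-streaming: compiled from a buffer source, so stack frames use a
// synthesized URL such as `wasm://wasm/d1c677ea`.
const bytes = new Uint8Array(await (await fetch(url)).arrayBuffer());
const fromBuffer = await WebAssembly.compile(bytes);

// Streaming: the Response carries the module's URL, so with this change
// stack frames from this module point at https://example.com/crash.wasm.
const fromStream = await WebAssembly.compileStreaming(fetch(url));
```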
This panic could happen in the following cases:
- A non-fatal error being thrown from a worker, which doesn't terminate
  the worker's execution but propagates to the main thread unhandled,
  causing the main thread to terminate.
- A nested worker being alive while its parent worker gets terminated.
- A race condition if the main event loop terminates the worker as part
of its last task, but the worker doesn't fully terminate before the
main event loop stops running.
This panic happens because a worker's event loop should have pending ops
as long as the worker isn't closed or terminated – but if an event loop
finishes running while it has living workers, its associated
`WorkerThread` structs will be dropped, closing the channels that keep
those ops pending.
This change adds a `Drop` implementation to `WorkerThread`, which
terminates the worker without waiting for a response. This fixes the
panic, and makes it so nested workers are automatically terminated once
any of their ancestors is closed or terminated.
This change also refactors a worker's termination code into a
`WorkerThread::terminate()` method.
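A sketch of the nested-worker case, with hypothetical file names, showing the behavior this change guarantees:

```ts
// parent_worker.js (hypothetical): spawns a nested worker and stays alive.
//
//   new Worker(new URL("./child_worker.js", import.meta.url), { type: "module" });
//   setInterval(() => {}, 1_000);

const parent = new Worker(new URL("./parent_worker.js", import.meta.url), {
  type: "module",
});

// Terminating the parent drops its `WorkerThread` entry for the child, and
// the new Drop implementation terminates the nested child as well instead
// of leaving it alive or panicking.
parent.terminate();
```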
Closes #11342.
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
When `worker.terminate()` is called, the spec requires that the
corresponding port message queue is emptied, so no messages can be
received after the call, even if they were sent from the worker before
it was terminated.
The spec doesn't require this of `self.close()`, and since Deno uses
different channels to send messages and to notify that the worker was
closed, messages might still arrive after the worker is known to be
closed; such messages are currently being dropped. This change fixes that.
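A sketch of the behavior after this fix, with a hypothetical worker module:

```ts
// closing_worker.js (hypothetical):
//
//   self.postMessage("sent before close");
//   self.close();

const worker = new Worker(new URL("./closing_worker.js", import.meta.url), {
  type: "module",
});

// The message was posted before self.close(), so it must still be delivered,
// even if the close notification reaches the host first.
worker.onmessage = (e) => console.log(e.data); // "sent before close"
```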
The fix involves two parts: one on the JS side and one on the Rust side.
The JS side was using the `#terminated` flag to keep track of whether
the worker is known to be closed, without distinguishing whether further
messages should be dropped or not. This PR changes that flag to an
enum `#state`, which can be one of `"RUNNING"`, `"CLOSED"` or
`"TERMINATED"`.
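A rough sketch of that state tracking (the class and method names here are illustrative, not Deno's actual implementation):

```ts
type WorkerState = "RUNNING" | "CLOSED" | "TERMINATED";

class HostWorkerHandle {
  #state: WorkerState = "RUNNING";

  // Invoked when the worker is known to have called self.close().
  markClosed() {
    if (this.#state === "RUNNING") this.#state = "CLOSED";
  }

  terminate() {
    this.#state = "TERMINATED";
  }

  // Invoked for each message received from the worker.
  handleMessage(_data: unknown) {
    // terminate() empties the port message queue, so drop the message; after
    // a mere close, messages sent before closing are still delivered.
    if (this.#state === "TERMINATED") return;
    // ...dispatch a MessageEvent to listeners here...
  }
}
```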
The Rust side was removing the `WorkerThread` struct from the workers
table when a close control message was received, regardless of whether
there were any messages left to read, which made any subsequent calls to
`op_host_recv_message` return `Ok(None)`, as if there were no more
messages. This change instead waits for both a close control message to
be received and for the message channel's sender to be closed before the
worker thread is removed from the table.
Classic worker scripts are now executed in the context of a Tokio
runtime. This means we cannot spawn additional Tokio runtimes from
within `op_worker_sync_fetch`. Instead, we spawn a new thread there,
which can create a new Tokio runtime that we use to block the worker
thread.
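For context, `op_worker_sync_fetch` backs the synchronous script fetching done by `importScripts()` in classic workers. A minimal sketch of what such a worker script does (the URL is hypothetical, and classic worker support may be restricted depending on how Deno is configured):

```ts
// Inside a hypothetical classic worker script: importScripts() fetches and
// evaluates scripts synchronously, blocking the worker thread until the
// fetch completes; that blocking fetch is what op_worker_sync_fetch performs
// on a separate thread with its own Tokio runtime.
importScripts("https://example.com/lib.js");
postMessage("lib loaded");
```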