WebAssembly modules compiled through `WebAssembly.compile()` and similar
non-streaming APIs don't have a URL associated with them, because they
have been compiled from a buffer source. In stack traces, V8 will use
a URL such as `wasm://wasm/d1c677ea`, where the suffix is a hash of the
module.
However, wasm modules compiled through streaming APIs, like
`WebAssembly.compileStreaming()`, do have a known URL, which can be
obtained from the `Response` object passed into those APIs. As per the
developer-facing display conventions in the WebAssembly Web API spec,
this URL should be used in stack traces. This change implements that.
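
As a rough illustration of that naming rule (not the actual V8 or Deno
implementation; `wasm_display_url` and its parameters are hypothetical),
the choice boils down to:

```rust
/// Hypothetical sketch: pick the URL shown in Wasm stack frames.
/// `source_url` would come from the `Response` used with the streaming
/// APIs; the fallback mirrors V8's `wasm://wasm/<hash>` form.
fn wasm_display_url(source_url: Option<&str>, module_hash: u32) -> String {
    match source_url {
        // Module compiled with `WebAssembly.compileStreaming()` /
        // `instantiateStreaming()`: use the response URL.
        Some(url) => url.to_string(),
        // Module compiled from a buffer source: fall back to a URL
        // derived from the module hash.
        None => format!("wasm://wasm/{:08x}", module_hash),
    }
}

fn main() {
    assert_eq!(wasm_display_url(None, 0xd1c677ea), "wasm://wasm/d1c677ea");
    assert_eq!(
        wasm_display_url(Some("https://example.com/add.wasm"), 0),
        "https://example.com/add.wasm"
    );
}
```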
This panic could happen in the following cases:
- A non-fatal error thrown from a worker, which doesn't terminate the
worker's execution but propagates to the main thread without being
handled and makes the main thread terminate.
- A nested worker being alive while its parent worker gets terminated.
- A race condition where the main event loop terminates the worker as
part of its last task, but the worker doesn't fully terminate before
the main event loop stops running.
This panic happens because a worker's event loop should have pending ops
as long as the worker isn't closed or terminated – but if an event loop
finishes running while it has living workers, its associated
`WorkerThread` structs will be dropped, closing the channels that keep
those ops pending.
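
A minimal sketch of that mechanism with a plain Tokio channel (not
Deno's actual op plumbing):

```rust
// A pending receive stays pending only while a sender half is alive.
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<String>(1);

    // As long as `tx` (held by the `WorkerThread` struct in Deno) is
    // alive, `rx.recv().await` stays pending, which is what keeps the
    // worker's event loop reporting a pending op.
    drop(tx); // Simulates the `WorkerThread` struct being dropped.

    // With every sender gone, the channel is closed and the receive
    // resolves to `None` instead of staying pending, violating the
    // "pending ops while not closed or terminated" invariant.
    assert_eq!(rx.recv().await, None);
}
```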
This change adds a `Drop` implementation to `WorkerThread`, which
terminates the worker without waiting for a response. This fixes the
panic, and makes it so nested workers are automatically terminated once
any of their ancestors is closed or terminated.
This change also refactors a worker's termination code into a
`WorkerThread::terminate()` method.
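
A sketch of the shape of that change; the fields and the control
channel below are assumptions, not Deno's actual definitions:

```rust
// Sketch only: Deno's real `WorkerThread` has different fields; this
// just shows the terminate-on-drop pattern the change introduces.
use std::sync::mpsc::Sender;

struct TerminateControl; // Placeholder for a "terminate" control message.

struct WorkerThread {
    // Hypothetical channel used to tell the worker to shut down.
    terminate_tx: Sender<TerminateControl>,
    terminated: bool,
}

impl WorkerThread {
    /// Ask the worker to terminate without waiting for a response.
    fn terminate(&mut self) {
        if !self.terminated {
            // Ignore send errors: the worker may already be gone.
            let _ = self.terminate_tx.send(TerminateControl);
            self.terminated = true;
        }
    }
}

impl Drop for WorkerThread {
    // Dropping the handle (e.g. when an ancestor's event loop finishes
    // with the worker still alive) now terminates the worker instead of
    // silently closing its channels.
    fn drop(&mut self) {
        self.terminate();
    }
}

fn main() {
    let (tx, rx) = std::sync::mpsc::channel();
    {
        let _worker = WorkerThread { terminate_tx: tx, terminated: false };
        // `_worker` goes out of scope here; the terminate control is
        // sent automatically by the `Drop` implementation.
    }
    assert!(rx.recv().is_ok());
}
```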
Closes #11342.
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
When `worker.terminate()` is called, the spec requires that the
corresponding port message queue is emptied, so no messages can be
received after the call, even if they were sent from the worker before
it was terminated.
The spec doesn't require this of `self.close()`, and since Deno uses
different channels to send messages and to notify that the worker was
closed, messages might still arrive after the worker is known to be
closed; those messages are currently being dropped. This change fixes
that.
The fix involves two parts: one on the JS side and one on the Rust side.
The JS side was using the `#terminated` flag to keep track of whether
the worker is known to be closed, without distinguishing whether further
messages should be dropped or not. This PR changes that flag to an
enum `#state`, which can be one of `"RUNNING"`, `"CLOSED"` or
`"TERMINATED"`.
The Rust side was removing the `WorkerThread` struct from the workers
table when a close control was received, regardless of whether there
were any messages left to read, which made any subsequent calls to
`op_host_recv_message` return `Ok(None)`, as if there were no more
messages. This change instead waits for both a close control to be
received and the message channel's sender to be closed before the
worker thread is removed from the table.
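
A minimal sketch of that receive logic, with hypothetical names in
place of Deno's real workers table and channels:

```rust
// Sketch of "drain before removing": only report end-of-messages once
// the close control has been seen AND the message channel is closed.
use std::collections::HashMap;
use tokio::sync::mpsc;

struct WorkerThread {
    message_rx: mpsc::Receiver<Vec<u8>>,
    // Set when the close control arrives (by a separate op in reality).
    close_received: bool,
    // Set once `message_rx.recv()` has returned `None`, i.e. the sender
    // was dropped and no buffered messages remain.
    channel_closed: bool,
}

impl WorkerThread {
    fn can_be_removed(&self) -> bool {
        // Old behavior: removed as soon as the close control arrived,
        // losing buffered messages. New behavior: wait for both.
        self.close_received && self.channel_closed
    }
}

/// Hypothetical stand-in for `op_host_recv_message`.
async fn host_recv_message(
    workers: &mut HashMap<u32, WorkerThread>,
    id: u32,
) -> Option<Vec<u8>> {
    let worker = workers.get_mut(&id)?;
    let msg = worker.message_rx.recv().await;
    if msg.is_none() {
        worker.channel_closed = true;
    }
    // Remove the handle only once both conditions hold, so messages
    // sent before the close control are never lost.
    if worker.can_be_removed() {
        workers.remove(&id);
    }
    msg
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel(8);
    let mut workers = HashMap::new();
    workers.insert(1, WorkerThread {
        message_rx: rx,
        close_received: true, // pretend the close control already arrived
        channel_closed: false,
    });

    tx.send(b"sent before close".to_vec()).await.unwrap();
    drop(tx);

    // The buffered message is still delivered after the close control...
    assert!(host_recv_message(&mut workers, 1).await.is_some());
    // ...and the worker is removed only once the channel is drained.
    assert!(host_recv_message(&mut workers, 1).await.is_none());
    assert!(!workers.contains_key(&1));
}
```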
Classic worker scripts are now executed in the context of a Tokio
runtime. This means we cannot spawn additional Tokio runtimes in
`op_worker_sync_fetch`. We instead spawn a new thread there, which
creates its own Tokio runtime that we can use to block the worker
thread.
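
In stripped-down form (a sketch only; the async block stands in for the
actual fetch and none of the names match the real op implementation),
the pattern is:

```rust
// Blocking on async work from code that already runs inside a Tokio
// runtime: calling `block_on` directly on that thread would panic, so
// the work is moved to a new thread with its own runtime.
fn sync_fetch_like_call() -> String {
    std::thread::spawn(|| {
        // A fresh runtime on a fresh thread is fine.
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .expect("failed to build runtime");
        rt.block_on(async {
            // Placeholder for the synchronous-fetch work.
            "response body".to_string()
        })
    })
    .join()
    .expect("sync fetch thread panicked")
}

#[tokio::main]
async fn main() {
    // Called from inside a Tokio runtime, as the worker's script-loading
    // path would be.
    assert_eq!(sync_fetch_like_call(), "response body");
}
```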