Relanding #12994
This commit adds support for the "unhandledrejection" event.
This event will trigger event listeners registered using:
"globalThis.addEventListener("unhandledrejection")"
"globalThis.onunhandledrejection"
This is done by registering a default handler using
"Deno.core.setPromiseRejectCallback", which allows rejected
promises to be handled in JavaScript instead of Rust.
This commit will make it possible to polyfill
"process.on("unhandledRejection")" in the Node compat
layer.
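A minimal usage sketch of the new event (the event name and handler body are illustrative):
```
// With this change, unhandled promise rejections dispatch an
// "unhandledrejection" event on the global scope.
globalThis.addEventListener("unhandledrejection", (event) => {
  console.error("unhandled rejection:", event.reason);
  // Calling preventDefault() marks the rejection as handled so the
  // default behavior does not kick in.
  event.preventDefault();
});

Promise.reject(new Error("boom"));
```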
Co-authored-by: Colin Ihrig <cjihrig@gmail.com>
This commit adds brand checking to EventTarget. It also fixes a
bug where Deno would crash when an AbortSignal that was passed to the
global addEventListener() was aborted.
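A minimal sketch of the previously crashing scenario (the event name is illustrative):
```
const controller = new AbortController();
globalThis.addEventListener("unload", () => {}, {
  signal: controller.signal,
});
// Aborting the signal removes the listener registered on the global
// scope; this used to crash Deno before the fix.
controller.abort();
```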
- Introduced an optional callback for the Deno.core.serialize API that returns
the cloning error, if there is one.
- Removed the try/catch in the serialize structured clone function; the error
from the callback is thrown instead.
- Removed "Object with a getter that throws" assertion from WPT.
This commit fixes a failing WPT test by making EventTarget's
addEventListener() method throw if both the listener and the
signal option are null.
Fixes: https://github.com/denoland/deno/issues/14593
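A sketch of the behavior that the WPT test covers:
```
const target = new EventTarget();
// With this fix, the signal option is validated even when the listener
// is null, so this throws a TypeError instead of silently returning.
target.addEventListener("test", null, { signal: null });
```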
This commit adds proper support for import assertions and JSON modules.
Implementation of "core/modules.rs" was changed to account for multiple possible
module types, instead of always assuming that the code is an "ES module". In
effect "ModuleMap" now has knowledge about each modules' type (stored via
"ModuleType" enum). Module loading pipeline now stores information about
expected module type for each request and validates that expected type matches
discovered module type based on file's "MediaType".
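The user-facing side of this, sketched with a hypothetical file name:
```
// Static import with an import assertion; the expected module type is
// recorded for the request and checked against the discovered type.
import data from "./data.json" assert { type: "json" };

// Dynamic import form of the same thing.
const mod = await import("./data.json", { assert: { type: "json" } });
console.log(data, mod.default);
```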
Relevant tests were added to "core/modules.rs" and the integration tests;
additionally, multiple WPT tests were enabled.
There are still some rough edges in the implementation and not all WPT tests
were enabled, due to:
a) unclear BOM handling in source code by "FileFetcher"
b) design limitation of Deno's "FileFetcher" that doesn't download the same
module multiple times in a single run
Co-authored-by: Kitson Kelly <me@kitsonkelly.com>
This commit adds support for exporting RSA JWKs in the Web Crypto API.
It also does some minor fixes for RSA JWK imports.
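A sketch of the now-supported export path (the algorithm parameters are just an example):
```
const keyPair = await crypto.subtle.generateKey(
  {
    name: "RSASSA-PKCS1-v1_5",
    modulusLength: 2048,
    publicExponent: new Uint8Array([1, 0, 1]),
    hash: "SHA-256",
  },
  true, // extractable, so the key can be exported
  ["sign", "verify"],
);
// Exporting as JWK is what this commit adds for RSA keys.
const jwk = await crypto.subtle.exportKey("jwk", keyPair.publicKey);
console.log(jwk.kty); // "RSA"
```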
Co-authored-by: Sean Michael Wykes <sean.wykes@nascent.com.br>
The Web IDL conversion to `BufferSource` and similar types shouldn't
check whether the buffer is detached.
In the case of `TextDecoder`, our implementation would still throw after
the Web IDL conversions because we're creating a new `Uint8Array` from
the buffer source's buffer, which throws if it's detached. This change
also fixes this bug.
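A sketch of the fixed behavior, assuming structuredClone with transfer is available to detach the buffer:
```
const bytes = new TextEncoder().encode("hi");
// Detach the underlying buffer by transferring it.
structuredClone(bytes.buffer, { transfer: [bytes.buffer] });
// Previously this threw because of the detached-buffer handling described
// above; with this change the detached input decodes without throwing.
console.log(new TextDecoder().decode(bytes));
```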
The initial implementation of `importScripts()` in #11338 used
`reqwest`'s default client to fetch HTTP scripts, which meant it would
not use certificates or other fetching configuration passed by command
line flags. This change fixes it.
* 'request-upload.h2' and 'redirect-upload.h2' only work with a
functional HTTP2 test harness server, otherwise they're flaky.
* Fetch request streaming tests require a server that doesn't choke
on requests that use 'Transfer-Encoding: chunked'.
`Window`'s `self` property and `DedicatedWorkerGlobalScope`'s `name`
property are defined as Web IDL read-only attributes with the
`[Replaceable]` extended attribute, meaning that their setter will
redefine the property as a data property with the set value, rather than
changing some internal state. Deno currently defines them as read-only
data properties instead.
Web IDL requires all attributes to be accessor properties rather than data
properties, but Deno exposes almost all of those properties as either
read-only or writable data properties. Given that, it makes sense to expose
`[Replaceable]` properties as writable as well, as is already the case with
`WindowOrWorkerGlobalScope`'s `performance` property.
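A sketch of the observable `[Replaceable]` semantics described above:
```
// Assigning to a [Replaceable] property replaces it with an ordinary
// writable data property holding the assigned value.
globalThis.self = 123;
console.log(self); // 123
const desc = Object.getOwnPropertyDescriptor(globalThis, "self");
console.log(desc.writable); // true
```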
Classic workers were implemented in denoland#11338, which also enabled the WPT
tests in the `workers` directory. However, the rest of WPT worker tests
were not enabled because a number of them were hanging due to
web-platform-tests/wpt#29777. Now that that WPT issue is fixed, the bulk
of worker tests can be enabled.
There are still a few tests that hang, and so haven't been enabled. In
particular:
- The following tests seem to hang because a promise fails to resolve.
We can detect such cases in non-worker tests because the process will
exit without calling the WPT completion callback, but in worker tests
the worker message ops will keep the event loop running. This will be
fixed when we add timeouts to WPT tests (denoland#9460).
- `/fetch/api/basic/error-after-response.any.worker.html`
- `/html/webappapis/microtask-queuing/queue-microtask-exceptions.any.worker.html`
- `/webmessaging/message-channels/worker-post-after-close.any.worker.html`
- `/webmessaging/message-channels/worker.any.worker.html`
- `/websockets/Create-on-worker-shutdown.any.worker.html`
- The following tests apparently hang because a promise rejection is
never handled, which will kill the process in the main thread but not
in workers (denoland#12221).
- `/streams/readable-streams/async-iterator.any.worker.html`
- `/workers/interfaces/WorkerUtils/importScripts/report-error-setTimeout-cross-origin.sub.any.worker.html`
- `/workers/interfaces/WorkerUtils/importScripts/report-error-setTimeout-redirect-to-cross-origin.sub.any.worker.html`
- `/workers/interfaces/WorkerUtils/importScripts/report-error-setTimeout-same-origin.sub.any.worker.html`
Enable Deno developers to bench/profile/test JS code changes without doing a full --release rebuild.
Incremental release builds take ~4 min on M1s, and often more on other machines...
In the spec, a URL record has an associated "blob URL entry", which for
`blob:` URLs is populated during parsing to contain a reference to the
`Blob` object that backs that object URL. It is this blob URL entry that
the `fetch` API uses to resolve an object URL.
Therefore, since the `Request` constructor parses URL inputs, it will
have an associated blob URL entry which will be used when fetching, even
if the object URL has been revoked since the construction of the
`Request` object. (The `Request` constructor takes the URL as a string
and parses it, so the object URL must be live at the time it is called.)
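A sketch of the spec behavior this enables:
```
const blob = new Blob(["hello"]);
const url = URL.createObjectURL(blob);
// The Request constructor parses the URL, capturing its blob URL entry.
const req = new Request(url);
URL.revokeObjectURL(url);
// Fetching still resolves to the original blob's contents, because the
// captured blob URL entry is used rather than re-resolving the URL.
const res = await fetch(req);
console.log(await res.text()); // "hello"
```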
This PR adds a new `blobFromObjectUrl` JS function (backed by a new
`op_blob_from_object_url` op) that, if the URL is a valid object URL,
returns a new `Blob` object whose parts are references to the same Rust
`BlobPart`s used by the original `Blob` object. It uses this function to
add a new `blobUrlEntry` field to inner requests, which will be `null`
or such a `Blob`, and then uses `Blob.prototype.stream()` as the
response's body. As a result of this, the `blob:` URL resolution from
`op_fetch` is now useless, and has been removed.
This adds support for the URLPattern API.
The API is added in --unstable only, as it has not yet shipped in any
browser. It is targeted for shipping in Chrome 95.
Spec: https://wicg.github.io/urlpattern/
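A small usage sketch (the example URL and pattern are illustrative):
```
const pattern = new URLPattern({ pathname: "/books/:id" });
console.log(pattern.test("https://example.com/books/123")); // true
const match = pattern.exec("https://example.com/books/123");
console.log(match.pathname.groups.id); // "123"
```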
Co-authored-by: crowlKats <crowlkats@toaxl.com>
Classic worker scripts are now executed in the context of a Tokio
runtime. This does mean we cannot spawn additional Tokio runtimes in
"op_worker_sync_fetch". We instead spawn a new thread there that creates a
new Tokio runtime, which we can use to block the worker thread.
This commit implements classic workers, but only when the `--enable-testing-features-do-not-use` flag is provided. This change is not user facing. Classic workers are used extensively in WPT tests. The classic workers do not support loading from disk, and do not support TypeScript.
Co-authored-by: Luca Casonato <hello@lcas.dev>
Currently, calling the `abort()` method on a `FileReader` object aborts
any current read operation, but it also prevents any read operation
started at some later point from starting. The File API instead
specifies that calling `abort()` should reset the `FileReader`'s state
and result, as well as removing any queued tasks from the current
operation that haven't yet run.
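A sketch of the corrected behavior:
```
const reader = new FileReader();
reader.onload = () => console.log(reader.result);
reader.readAsText(new Blob(["first"]));
// abort() resets the reader's state and result and drops queued tasks
// for the current read ...
reader.abort();
// ... but, with this fix, it no longer prevents a later read from starting.
reader.readAsText(new Blob(["second"])); // logs "second"
```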
Currently, a WPT test is considered failed if its status code is
anything other than 0, regardless of whether the test suite completed
running or not, and any subtests that haven't finished running are not
considered to be failures.
But a test can exit with a zero status code before it has completed
running, if the event loop has run out of tasks because of a bug in one
of the ops, leading to false positives. This change fixes that.
The WebAssembly streaming APIs used to be enabled, but used to take
buffer sources as their first argument (see #6154 and #7259). This
change re-enables them, requiring a Promise<Response> instead, as well as
enabling asynchronous compilation of WebAssembly modules.
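A sketch of the re-enabled API shape (the module URL is hypothetical, and the response must be served as application/wasm):
```
const { module, instance } = await WebAssembly.instantiateStreaming(
  fetch("https://example.com/add.wasm"),
);
console.log(instance.exports);
```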
Additionally, if the existing `Request`'s body is disturbed, the Request creation
should fail.
This change also updates the step numbers in the Request constructor to match
whatwg/fetch#1249.
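A sketch of the added failure case:
```
const original = new Request("https://example.com/", {
  method: "POST",
  body: "payload",
});
await original.text(); // disturbs the body
// With this change, constructing a Request from a disturbed one throws.
new Request(original); // TypeError
```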
This adds a daily scheduled CI pipeline that runs WPT tests against
the most recent epochs/daily every night. Results are uploaded to
wpt.fyi.
WPTs are run on all supported platforms, on both stable and canary.
These reports can be consumed by tools like `wptreport` or
https://wpt.fyi. The old style report could be removed in a future PR
when wpt.deno.land is updated.
This commit removes all JS based text encoding / text decoding. Instead
encoding now happens in Rust via encoding_rs (already in tree). This
implementation retains stream support, but adds the last missing
encodings. We are incredibly close to 100% WPT on text encoding now.
This should reduce our baseline heap by quite a bit.
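Stream support is retained; a quick sketch:
```
const decoder = new TextDecoder("utf-8");
// Feed a multi-byte character in two chunks; the decoder buffers the
// incomplete sequence across calls.
const first = decoder.decode(new Uint8Array([0xe2, 0x82]), { stream: true });
const second = decoder.decode(new Uint8Array([0xac]));
console.log(first + second); // "€"
```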
Replaces the file-backed provider by an in-memory one because proper
file locking is a hard problem that detracts from the proof of concept.
Teach the WPT runner how to extract tests from .html files because all
the relevant tests in test_util/wpt/webmessaging/broadcastchannel are
inside basics.html and interface.html.
This commit aligns the `fetch` API and the `Request` / `Response`
classes belonging to it to the spec. This commit enables all the
relevant `fetch` WPT tests. Spec compliance is now at around 90%.
Performance is essentially identical now (within 1% of 1.9.0).
This commit aligns `Headers` to spec. It also removes the now unused
03_dom_iterable.js file. We now pass all relevant `Headers` WPT. We do
not implement any sort of header filtering, as we are a server side
runtime.
This is likely not the most efficient implementation of `Headers` yet.
It is however spec compliant. Once all the APIs in the `HTTP` hot loop
are correct we can start optimizing them. It is likely that this commit
reduces bench throughput temporarily.
- Improves op performance.
- Handle op-metadata (errors, promise IDs) explicitly in the op-layer vs
per op-encoding (aka: out-of-payload).
- Remove the shared queue & custom "asyncHandlers"; all async values are
returned in batches via js_recv_cb.
- The op-layer should be thought of as simple function calls with little
indirection or translation besides the conceptually straightforward
serde_v8 bijections.
- Preserve the concepts of json/bin/min as semantic groups of their
inputs/outputs instead of their op-encoding strategy; preserving these
groups will also facilitate partial transitions over to the V8 Fast API
for the "min" and "bin" groups.
This commit rewrites the scripts in the "tools/" directory
to use Deno instead of Python. In return, it allows
removing a huge number of Python packages from "third_party/".
All benchmarks are done in Rust and can be invoked with
`cargo bench`.
Currently this has its own "harness" that behaves like
`./tools/benchmark.py` did.
Because of this, tests inside `cli/bench` are currently not run.
This should be switched to the language-provided harness
once the `#[bench]` attribute has been stabilized.
This PR is intentionally ugly. It duplicates all of the code in cli/js2/ into
cli/tsc/ ... because it's very important that we all understand that this code
is unnecessarily duplicated in our binary. I hope this ugliness provides the
motivation to clean it up.
The typescript git submodule is removed, because it's a very large repo and
contains all sorts of stuff we don't need. Instead the necessary files are
copied directly into the deno repo. Hence +200k lines.
COMPILER_SNAPSHOT.bin size
```
master 3448139
this branch 3320972
```
Fixes #6812
This commit adds a "--no-check" option to following subcommands:
- "deno cache"
- "deno info"
- "deno run"
- "deno test"
The "--no-check" options allows to skip type checking step and instead
directly transpiles TS sources to JS sources.
This solution uses `ts.transpileModule()` API and is just an interim
solution before implementing it fully in Rust.
This commit changes how errors occurring in SWC are handled.
Changed lexer settings to properly handle TS decorators.
Changed the output of SWC errors to annotate them with the position in the file.
This commit fixes a bug introduced in #5029 that caused bad
handling of redirects during module analysis.
Also ensured that duplicate modules are not downloaded.
This PR removes the hack in the CLI that allowed running scripts with the shorthand: deno script.ts.
This functionality is being removed because it hacks around shortcomings of clap, our CLI parser. We agree that this shorthand syntax is desirable, but it needs to be rethought and reimplemented. For 1.0 we should go with a conservative approach that is correct.
- Removes the __fetch namespace from `deno types`
- Response.redirect should be a static method.
- Response.body should not be AsyncIterable.
- Disables the deno_proxy benchmark
- Makes std/examples/curl.ts buffer the body before printing to stdout
* Remove DENO_BUILD_MODE and DENO_BUILD_PATH
Also remove outdated docs related to ninja/gn.
* fix
* remove parameter to build_mode()
* remove arg parsing from benchmark.py
* add tests for "Deno.core.encode" and "Deno.core.decode" for empty inputs
* use "Deno.core.encode" in "TextEncoder"
* use "Deno.core.decode" in "TextDecoder"
* remove "core_decode" and "core_encode" benchmarks
This commit adds two new methods to the "Deno.core" namespace: "encode" and "decode".
These methods are bound in Rust to provide (a) fast and (b) generally available encoding and decoding of UTF-8 strings.
Both methods are now used in "cli/js/dispatch_json.ts".
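A sketch of the two methods (internal "Deno.core" API, with the signatures described above):
```
// encode: string -> Uint8Array of UTF-8 bytes
const bytes = Deno.core.encode("hello");
// decode: Uint8Array -> string
const text = Deno.core.decode(bytes);
console.log(bytes.length, text); // 5 "hello"
```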
Rewrites "cli/js/unit_test_runner.ts" to communicate with spawned subprocesses
using TCP socket.
* Rewrite "Deno.runTests()" by factoring out testing logic to private "TestApi"
class. "TestApi" implements "AsyncIterator" that yields "TestEvent"s,
which is an interface for the different types of events occurring while
running tests.
* Add "reporter" argument to "Deno.runTests()" to allow users to provide custom
reporting mechanism for tests. It's represented by "TestReporter" interface,
that implements hook functions for each type of "TestEvent". If "reporter"
is not provided then default console reporting is used (via
"ConsoleReporter").
* Change how "unit_test_runner" communicates with spawned suprocesses. Instead
of parsing text data from child's stdout, a TCP socket is created and used
for communication. "unit_test_runner" can run in either "master" or "worker"
mode. Former is responsible for test discovery and establishing needed
permission combinations; while latter (that is spawned by "master") executes
tests that match given permission set.
* Use "SocketReporter" that implements "TestReporter" interface to send output
of tests to "master" process. Data is sent as stringified JSON and then
parsed by "master" as structured data. "master" applies it's own reporting
logic to output tests to console (by reusing default "ConsoleReporter").
This change simplifies how we execute V8. Previously V8 Isolates jumped
around threads every time they were woken up. This was overly complex and
potentially hurt performance in myriad ways. Now isolates run on
their own dedicated thread and never move.
- blocking_json spawns a thread and does not use a thread pool
- op_host_poll_worker and op_host_resume_worker are non-operational
- removes Worker::get_message and Worker::post_message
- ThreadSafeState::workers table contains WorkerChannel entries instead
of actual Worker instances.
- MainWorker and CompilerWorker are no longer Futures.
- The multi-threaded version of deno_core_http_bench was removed.
- AsyncOps no longer need to be Send + Sync
This PR is very large and several tests were disabled to speed up
integration:
- installer_test_local_module_run
- installer_test_remote_module_run
- _015_duplicate_parallel_import
- _026_workers
This flag was added to evaluate performance relative to tokio's threaded
runtime. Although it's faster in the HTTP benchmark, it's clear the runtime
is not the only perf problem.
Removing this flag will simplify further refactors, in particular
adopting the #[tokio::main] macro. This will be done in a follow up.
Ultimately we expect to move to the current thread runtime with Isolates
pinned to specific threads, but that will be a much larger refactor. The
--current-thread flag just complicates that effort.