In the spec, a URL record has an associated "blob URL entry", which for
`blob:` URLs is populated during parsing to contain a reference to the
`Blob` object that backs that object URL. It is this blob URL entry that
the `fetch` API uses to resolve an object URL.
Therefore, since the `Request` constructor parses URL inputs, it will
have an associated blob URL entry which will be used when fetching, even
if the object URL has been revoked since the construction of the
`Request` object. (The `Request` constructor takes the URL as a string
and parses it, so the object URL must be live at the time it is called.)
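For example, the spec behavior described above implies that fetching a
`Request` constructed before revocation still succeeds (a minimal
sketch; the blob contents are illustrative):

```ts
const blob = new Blob(["hello"], { type: "text/plain" });
const url = URL.createObjectURL(blob);

// Parsing the URL here captures the blob URL entry.
const req = new Request(url);

// Revoking afterwards does not affect the already-parsed URL record.
URL.revokeObjectURL(url);

const res = await fetch(req);
console.log(await res.text()); // "hello"
```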
This PR adds a new `blobFromObjectUrl` JS function (backed by a new
`op_blob_from_object_url` op) that, if the URL is a valid object URL,
returns a new `Blob` object whose parts are references to the same Rust
`BlobPart`s used by the original `Blob` object. It uses this function to
add a new `blobUrlEntry` field to inner requests, which will be `null`
or such a `Blob`, and then uses `Blob.prototype.stream()` as the
response's body. As a result, the `blob:` URL resolution in `op_fetch`
is no longer needed and has been removed.
This adds support for the URLPattern API.
The API is added in --unstable only, as it has not yet shipped in any
browser. It is targeted for shipping in Chrome 95.
Spec: https://wicg.github.io/urlpattern/
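A quick usage sketch of the API (the pattern and URLs are illustrative):

```ts
const pattern = new URLPattern({ pathname: "/books/:id" });

console.log(pattern.test("https://example.com/books/123")); // true

const match = pattern.exec("https://example.com/books/123");
console.log(match?.pathname.groups.id); // "123"
```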
Co-authored-by: crowlKats <crowlkats@toaxl.com>
Classic worker scripts are now executed in the context of a Tokio
runtime. This means we cannot spawn additional Tokio runtimes in
"op_worker_sync_fetch". Instead, we spawn a new thread there, which
creates a new Tokio runtime that we can use to block the worker thread.
This commit implements classic workers, but only when the
`--enable-testing-features-do-not-use` flag is provided. This change is
not user-facing. Classic workers are used extensively in WPT tests. The
classic workers do not support loading from disk, and do not support
TypeScript.
Co-authored-by: Luca Casonato <hello@lcas.dev>
Currently, calling the `abort()` method on a `FileReader` object aborts
any current read operation, but it also prevents any subsequent read
operation from starting. The File API instead specifies that calling
`abort()` should reset the `FileReader`'s state and result, as well as
remove any queued tasks from the current operation that haven't yet run.
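For example, with this fix the following sequence should work (a
sketch; the blob contents are illustrative):

```ts
const blob = new Blob(["contents"]);
const reader = new FileReader();

reader.readAsText(blob);
reader.abort(); // aborts the in-flight read and resets state/result

// A read started later must still be able to run.
reader.onload = () => console.log(reader.result); // "contents"
reader.readAsText(blob);
```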
Currently, a WPT test is considered failed if its status code is
anything other than 0, regardless of whether the test suite completed
running or not, and any subtests that haven't finished running are not
considered to be failures.
But a test can exit with a zero status code before it has completed
running, if the event loop has run out of tasks because of a bug in one
of the ops, leading to false positives. This change fixes that.
The WebAssembly streaming APIs used to be enabled, but took buffer
sources as their first argument (see #6154 and #7259). This change
re-enables them, now requiring a `Promise<Response>` instead, and also
enables asynchronous compilation of WebAssembly modules.
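Usage now follows the standard streaming form (the module URL is
illustrative):

```ts
const response = fetch("https://example.com/module.wasm");

// Both streaming APIs accept a Promise<Response>.
const { instance } = await WebAssembly.instantiateStreaming(response);
// or: const module = await WebAssembly.compileStreaming(response);
```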
Additionally, if the existing `Request`'s body is disturbed, the Request creation
should fail.
This change also updates the step numbers in the Request constructor to match
whatwg/fetch#1249.
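Concretely, constructing a `Request` from one whose body has already
been consumed now throws (a sketch):

```ts
const original = new Request("https://example.com/", {
  method: "POST",
  body: "payload",
});

await original.text(); // consumes (disturbs) the body

// Creating a Request from a disturbed Request must fail.
new Request(original); // throws TypeError
```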
This adds a daily scheduled CI pipeline that runs WPT tests against
the most recent epochs/daily every night. Results are uploaded to
wpt.fyi.
WPTs are run on all supported platforms, on both stable and canary.
These reports can be consumed by tools like `wptreport` or
https://wpt.fyi. The old style report could be removed in a future PR
when wpt.deno.land is updated.
This commit removes all JS-based text encoding / text decoding. Instead,
encoding now happens in Rust via encoding_rs (already in tree). This
implementation retains stream support and adds the last missing
encodings. We are incredibly close to 100% WPT on text encoding now.
This should reduce our baseline heap usage by quite a bit.
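For example, a legacy single-byte encoding and a streamed multi-byte
decode both work now (the labels and bytes are illustrative):

```ts
// Legacy encodings are backed by encoding_rs.
const cp1251 = new TextDecoder("windows-1251");
console.log(cp1251.decode(new Uint8Array([0xcf, 0xf0, 0xe8]))); // "При"

// Stream support is retained: a multi-byte sequence can span chunks.
const utf8 = new TextDecoder();
let out = utf8.decode(new Uint8Array([0xe2, 0x82]), { stream: true });
out += utf8.decode(new Uint8Array([0xac]));
console.log(out); // "€"
```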
Replaces the file-backed provider with an in-memory one because proper
file locking is a hard problem that detracts from the proof of concept.
Teaches the WPT runner how to extract tests from .html files because all
the relevant tests in test_util/wpt/webmessaging/broadcastchannel are
inside basics.html and interface.html.
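For reference, the in-memory provider only needs to make same-process
delivery like this work (a minimal sketch):

```ts
const a = new BroadcastChannel("test");
const b = new BroadcastChannel("test");

b.onmessage = (event) => {
  console.log(event.data); // "ping"
  a.close();
  b.close();
};

// Messages are delivered to the other channels with the same name.
a.postMessage("ping");
```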
This commit aligns the `fetch` API and its `Request` / `Response`
classes with the spec, and enables all the relevant `fetch` WPT tests.
Spec compliance is now at around 90%.
Performance is essentially unchanged (within 1% of 1.9.0).
This commit aligns `Headers` with the spec. It also removes the now
unused 03_dom_iterable.js file. We now pass all relevant `Headers` WPT
tests. We do not implement any sort of header filtering, as we are a
server-side runtime.
This is likely not the most efficient implementation of `Headers` yet.
It is, however, spec compliant. Once all the APIs in the HTTP hot loop
are correct, we can start optimizing them. It is likely that this commit
temporarily reduces bench throughput.
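A couple of the spec behaviors that are now covered (values are
illustrative):

```ts
const headers = new Headers();

headers.append("Accept", "text/html");
headers.append("accept", "application/json");

// Names are case-insensitive; appended values are combined.
console.log(headers.get("ACCEPT")); // "text/html, application/json"

// Invalid header names must throw.
try {
  headers.set("bad name", "x");
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```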
- Improves op performance.
- Handle op-metadata (errors, promise IDs) explicitly in the op-layer vs
per op-encoding (aka: out-of-payload).
- Remove shared queue & custom "asyncHandlers"; all async values are
returned in batches via js_recv_cb.
- The op-layer should be thought of as simple function calls with little
indirection or translation besides the conceptually straightforward
serde_v8 bijections.
- Preserve concepts of json/bin/min as semantic groups of their
inputs/outputs instead of their op-encoding strategy; preserving these
groups will also facilitate partial transitions over to the V8 Fast API
for the "min" and "bin" groups.
This commit rewrites the scripts in the "tools/" directory
to use Deno instead of Python. In turn, this allows removing
a huge number of Python packages from "third_party/".
All benchmarks are done in Rust and can be invoked with
`cargo bench`.
Currently this has its own "harness" that behaves like
`./tools/benchmark.py` did.
Because of this, tests inside `cli/bench` are currently not run.
This should be switched to the language-provided harness
once the `#[bench]` attribute has been stabilized.
This PR is intentionally ugly. It duplicates all of the code in cli/js2/ into
cli/tsc/ ... because it's very important that we all understand that this code
is unnecessarily duplicated in our binary. I hope this ugliness provides the
motivation to clean it up.
The typescript git submodule is removed, because it's a very large repo
and contains all sorts of stuff we don't need. Instead, the necessary
files are copied directly into the deno repo. Hence +200k lines.
COMPILER_SNAPSHOT.bin size
```
master 3448139
this branch 3320972
```
Fixes #6812
This commit adds a "--no-check" option to the following subcommands:
- "deno cache"
- "deno info"
- "deno run"
- "deno test"
The "--no-check" options allows to skip type checking step and instead
directly transpiles TS sources to JS sources.
This solution uses `ts.transpileModule()` API and is just an interim
solution before implementing it fully in Rust.
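Roughly, "no check" means type-stripping only, along the lines of this
sketch (not the actual compiler host code; the sample source is
illustrative):

```ts
import * as ts from "typescript";

const source = "const x: number = 1; export default x;";

// Transpile only: types are stripped, no diagnostics are produced.
const { outputText } = ts.transpileModule(source, {
  compilerOptions: {
    module: ts.ModuleKind.ESNext,
    target: ts.ScriptTarget.ESNext,
  },
});

console.log(outputText); // type annotations removed, e.g. "const x = 1; ..."
```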
This commit changes how errors occurring in SWC are handled.
Changed lexer settings to properly handle TS decorators.
Changed the output of SWC errors to be annotated with the position in the file.
This commit fixes a bug introduced in #5029 that caused bad
handling of redirects during module analysis.
Also ensured that duplicate modules are not downloaded.
This PR removes the hack in the CLI that allows running scripts with the
shorthand: deno script.ts.
We are removing this functionality because it hacks around shortcomings
of clap, our CLI parser. We agree that this shorthand syntax is
desirable, but it needs to be rethought and reimplemented. For 1.0 we
should go with a conservative approach that is correct.
- Removes the __fetch namespace from `deno types`
- Response.redirect should be a static method.
- Response.body should not be AsyncIterable.
- Disables the deno_proxy benchmark
- Makes std/examples/curl.ts buffer the body before printing to stdout
* Remove DENO_BUILD_MODE and DENO_BUILD_PATH
Also remove outdated docs related to ninja/gn.
* fix
* remove parameter to build_mode()
* remove arg parsing from benchmark.py
* add tests for "Deno.core.encode" and "Deno.core.decode" for empty inputs
* use "Deno.core.encode" in "TextEncoder"
* use "Deno.core.decode" in "TextDecoder"
* remove "core_decode" and "core_encode" benchmarks
This commit adds two new methods to the "Deno.core" namespace: "encode" and "decode".
These methods are bound in Rust to provide a) fast and b) generally available encoding and decoding of UTF-8 strings.
Both methods are now used in "cli/js/dispatch_json.ts".
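A rough usage sketch (the `Uint8Array` and `string` return types are
assumed here):

```ts
// Encode a string to UTF-8 bytes, then decode it back.
const bytes = Deno.core.encode("hello"); // Uint8Array [104, 101, 108, 108, 111]
const text = Deno.core.decode(bytes); // "hello"
```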
Rewrites "cli/js/unit_test_runner.ts" to communicate with spawned subprocesses
using TCP socket.
* Rewrite "Deno.runTests()" by factoring out testing logic to private "TestApi"
class. "TestApi" implements "AsyncIterator" that yields "TestEvent"s,
which is an interface for different types of event occuring during running
tests.
* Add "reporter" argument to "Deno.runTests()" to allow users to provide custom
reporting mechanism for tests. It's represented by "TestReporter" interface,
that implements hook functions for each type of "TestEvent". If "reporter"
is not provided then default console reporting is used (via
"ConsoleReporter").
* Change how "unit_test_runner" communicates with spawned suprocesses. Instead
of parsing text data from child's stdout, a TCP socket is created and used
for communication. "unit_test_runner" can run in either "master" or "worker"
mode. Former is responsible for test discovery and establishing needed
permission combinations; while latter (that is spawned by "master") executes
tests that match given permission set.
* Use "SocketReporter" that implements "TestReporter" interface to send output
of tests to "master" process. Data is sent as stringified JSON and then
parsed by "master" as structured data. "master" applies it's own reporting
logic to output tests to console (by reusing default "ConsoleReporter").
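A hedged sketch of how these pieces might fit together (the interface
shapes, event kinds, and field names below are assumptions for
illustration, not the actual code):

```ts
// Illustrative only; the real definitions live in this commit's sources.
interface TestEvent {
  kind: "start" | "result" | "end";
  detail?: unknown; // e.g. test name, pass/fail status, error, timing
}

interface TestReporter {
  start(event: TestEvent): Promise<void>;
  result(event: TestEvent): Promise<void>;
  end(event: TestEvent): Promise<void>;
}

// A reporter that forwards every event to "master" as JSON over TCP.
class SocketReporter implements TestReporter {
  constructor(private conn: Deno.Conn) {}
  private async send(event: TestEvent): Promise<void> {
    const payload = new TextEncoder().encode(JSON.stringify(event) + "\n");
    await this.conn.write(payload);
  }
  start = (e: TestEvent) => this.send(e);
  result = (e: TestEvent) => this.send(e);
  end = (e: TestEvent) => this.send(e);
}
```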
This change simplifies how we execute V8. Previously, V8 isolates jumped
between threads every time they were woken up. This was overly complex
and potentially hurt performance in myriad ways. Now isolates run on
their own dedicated thread and never move.
- blocking_json spawns a thread and does not use a thread pool
- op_host_poll_worker and op_host_resume_worker are non-operational
- removes Worker::get_message and Worker::post_message
- ThreadSafeState::workers table contains WorkerChannel entries instead
of actual Worker instances.
- MainWorker and CompilerWorker are no longer Futures.
- The multi-threaded version of deno_core_http_bench was removed.
- AsyncOps no longer need to be Send + Sync
This PR is very large, and several tests were disabled to speed up
integration:
- installer_test_local_module_run
- installer_test_remote_module_run
- _015_duplicate_parallel_import
- _026_workers
This flag was added to evaluate performance relative to Tokio's threaded
runtime. Although it's faster in the HTTP benchmark, it's clear the runtime
is not the only perf problem.
Removing this flag will simplify further refactors, in particular
adopting the #[tokio::main] macro. This will be done in a follow-up.
Ultimately we expect to move to the current-thread runtime with isolates
pinned to specific threads, but that will be a much larger refactor. The
--current-thread flag just complicates that effort.