Construct a new module graph container for workers instead of sharing it
with the main worker.
Fixes #17248
Fixes #23461
---------
Co-authored-by: David Sherret <dsherret@gmail.com>
Related: https://github.com/denoland/eszip/pull/181
eszip < v0.69.0 hashes all its contents to ensure data integrity. This
feature is not necessary in Deno CLI as the binary integrity guarantee
is deemed an external responsibility (i.e. it is to be assumed that, if
necessary, the compiled binary will be checksummed externally prior to
being executed).
eszip >= v0.69.0 no longer performs this checksum by default. This
reduces the cold-start time of the compiled binaries, proportionally to
their size.
VS Code will typically send a `textDocument/semanticTokens/full` request
followed by `textDocument/semanticTokens/range`, and occasionally
request semantic tokens even when we know nothing has changed. Semantic
tokens also get refreshed on each change. Computing semantic tokens is
relatively heavy in TSC, so we should avoid it as much as possible.
Caches the semantic tokens for open documents, to avoid making TSC do
unnecessary work. This results in a noticeable improvement in local
benchmarking.
before:
```
Starting Deno benchmark
-> Start benchmarking lsp
- Simple Startup/Shutdown
(10 runs, mean: 383ms)
- Big Document/Several Edits
(5 runs, mean: 1079ms)
- Find/Replace
(10 runs, mean: 59ms)
- Code Lens
(10 runs, mean: 440ms)
- deco-cx/apps Multiple Edits + Navigation
(5 runs, mean: 9921ms)
<- End benchmarking lsp
```
after:
```
Starting Deno benchmark
-> Start benchmarking lsp
- Simple Startup/Shutdown
(10 runs, mean: 395ms)
- Big Document/Several Edits
(5 runs, mean: 1024ms)
- Find/Replace
(10 runs, mean: 56ms)
- Code Lens
(10 runs, mean: 438ms)
- deco-cx/apps Multiple Edits + Navigation
(5 runs, mean: 8927ms)
<- End benchmarking lsp
```
This PR directly addresses the issue raised in #23282, where Deno panics
if `deno coverage` is called with an `--include` regex that returns no
matches.
I've opted not to change the return value of `collect_summary` for
simplicity and instead return an empty `HashMap`.
Moves sloppy import resolution from the loader to the resolver.
Also adds some test helper functions to make the lsp tests less verbose.
---------
Co-authored-by: David Sherret <dsherret@gmail.com>
1. Generally we should prefer to use the `log` crate.
2. I very often accidentally commit `eprintln`s.
When we should use `println` or `eprintln`, it's not too bad to be a bit
more verbose and ignore the lint rule.
**THIS PR HAS GIT CONFLICTS THAT MUST BE RESOLVED**
This is the release commit being forwarded back to main for 1.43.2
Please ensure:
- [x] Everything looks ok in the PR
- [x] The release has been published
To make edits to this PR:
```shell
git fetch upstream forward_v1.43.2 && git checkout -b forward_v1.43.2 upstream/forward_v1.43.2
```
Don't need this PR? Close it.
cc @nathanwhit
Co-authored-by: nathanwhit <nathanwhit@users.noreply.github.com>
Co-authored-by: Nathan Whitaker <nathan@deno.com>
This PR implements the changes we plan to make to `deno install` in deno
2.0.
- `deno install` without arguments caches dependencies from
`package.json` / `deno.json` and sets up the `node_modules` folder
- `deno install <pkg>` adds the package to the config file (either
`package.json` or `deno.json`), i.e. it aliases `deno add`
- `deno add` can also add deps to `package.json` (this is gated behind
`DENO_FUTURE` due to uncertainty around handling projects with both
`deno.json` and `package.json`)
- `deno install -g <bin>` installs a package as a globally available
binary (the same as `deno install <bin>` in 1.0)
---------
Co-authored-by: Nathan Whitaker <nathan@deno.com>
Fixes the `Debug Failure` errors described in
https://github.com/denoland/deno/issues/23643#issuecomment-2094552765 .
The issue here was that we were passing diagnostic codes as strings but
TSC expects the codes to be numbers. This resulted in some quick fixes
not working (as illustrated by the test added here which fails before
this PR).
The first commit is the actual fix. The rest are just test related.
Fixes #23643.
We weren't catching the cancellation exception thrown by TSC on the JS
side, so the rust side was catching this exception and then attempting
to print out the exception via `toString`. That last bit resulted in a
cryptic `[object Object]` showing up in the logs like so:
```
Error during TS request "getCompletionEntryDetails":
[object Object]
```
I'm not 100% sure how we weren't seeing this in the past. My guess is
that #23409 and the subsequent PR to improve the exception catching and
logging surfaced this, but I'm still not quite clear on it.
My initial fix here returned `null` to rust when a server request was
cancelled, but this resulted in a deserialization error when we
attempted to deserialize that into the expected response type. So now,
as soon as the request's cancellation token signals we'll stop waiting
for a response and return an error (which will get swallowed as the LSP
request is being cancelled).
I was a bit surprised to find that [this
branch](0c671c9792/cli/lsp/tsc.rs (L1093))
actually executes sometimes, I believe due to the fact that aborting a
future may not [immediately stop its
execution](https://docs.rs/futures/latest/futures/stream/struct.AbortHandle.html#method.abort).
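As a rough sketch of the JS-side guard this implies (hypothetical names; the real `serverRequest` wiring in `99_main_compiler.js` differs), the idea is simply to recognize the cancellation exception instead of rethrowing it:

```ts
// Hypothetical sketch only: recognize TSC's cancellation exception instead
// of letting it reach the Rust side as an opaque `[object Object]`.
class OperationCanceledException extends Error {}

function guardedRequest<T>(fn: () => T): { value?: T; cancelled: boolean } {
  try {
    return { value: fn(), cancelled: false };
  } catch (e) {
    if (e instanceof OperationCanceledException) {
      // The LSP request was cancelled mid-flight; report that explicitly.
      return { cancelled: true };
    }
    throw e;
  }
}
```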
This makes `create_runtime_snapshot` more useful for embedders who add
their own extension(s) to the runtime in build scripts.
---------
Signed-off-by: Matt Mastracci <matthew@mastracci.com>
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
Before this PR, there would just be an uninformative "Error occurred"
message; after this PR you'll get a stack trace in the LSP output window
like this:
```text
Error during TS request "$getSupportedCodeFixes":
Error: i threw an exception
at serverRequest (ext:deno_tsc/99_main_compiler.js:1089:11)
```
By default, `deno serve` will assign port 8000 (like `Deno.serve`).
Users may choose a different port using `--port`.
`deno serve server.ts`
`server.ts`:
```ts
export default {
  fetch(req) {
    return new Response("hello world!\n");
  },
};
```
When the response has been successfully sent, we abort the
`Request.signal` property to indicate that all resources associated with
this transaction may be torn down.
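For example (a usage sketch, not taken from the PR itself), a handler can listen for that abort to tear down per-request resources:

```ts
// Sketch: release per-request resources once the response has been sent.
Deno.serve((req) => {
  const timer = setInterval(() => {
    console.log("request still in flight:", req.url);
  }, 1000);
  req.signal.addEventListener("abort", () => {
    // Fires once the response has been fully sent (or the client went away).
    clearInterval(timer);
  });
  return new Response("hello world!\n");
});
```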
This commit changes the workspace support so that all workspace
members are available as imports based on their names and versions.
Closes https://github.com/denoland/deno/issues/23343
This PR wires up a new `jsxPrecompileSkipElements` option in
`compilerOptions` that can be used to exempt a list of elements from
being precompiled with the `precompile` JSX transform.
The actual handling of `$projectChanged` is quick, but JS requests are
not. The cleared caches only get repopulated on the next actual request,
so just batch the change notification in with the next actual request.
No significant difference in benchmarks on my machine, but this speeds
up `did_change` handling and reduces our total number of JS requests (in
addition to coalescing multiple JS change notifications into one).
Embedders may have special requirements around file opening, so we add a
new `check_open` permission check that is called as part of the file
open process.
Adds an `addr` field to `HttpServer` to simplify the pattern
`Deno.serve({ onListen: ({ port }) => listenPort = port })`. This becomes:
`const server = Deno.serve({}); port = server.addr.port`.
Changes:
- Refactors `serve` overloads to split TLS out (in preparation for
landing a place for the TLS SNI information)
- Adds an `addr` field to `HttpServer` that matches the `addr` field of
the corresponding `Deno.Listener`s.
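As a rough before/after sketch of that convenience (using `port: 0` so the OS picks free ports; not taken verbatim from the PR):

```ts
// Previously: capture the bound port via the onListen callback.
let listenPort = 0;
Deno.serve({
  port: 0,
  onListen: ({ port }) => {
    listenPort = port;
  },
}, () => new Response("ok"));

// Now: read it directly from the returned server's addr field.
const server = Deno.serve({ port: 0 }, () => new Response("ok"));
console.log(server.addr.port);
```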
This PR enables V8 code cache for ES modules and for `require` scripts
through `op_eval_context`. Code cache artifacts are transparently stored
and fetched using sqlite db and are passed to V8. `--no-code-cache` can
be used to disable.
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This allows people to use imports like:
```ts
import "./app.css";
```
...with `deno check` in systems where there's a bundle step (ex. Vite).
This will still error when using it with `deno run` or if the referenced
file does not exist.
See test cases for behaviour.
This PR adds a benchmark intended to measure how the LSP handles larger
repos, as well as its performance on a more realistic workload.
The repo being benchmarked is
[deco-cx/apps](https://github.com/deco-cx/apps) which has been vendored
along with its dependencies. It's included as a git submodule as it's
fairly large. The LSP requests used in the benchmark are the actual
requests sent by VSCode as I opened, modified, and navigated around a
file (to simulate an actual user interaction).
The main motivation is to have a more realistic benchmark that measures
how we do with a large number of files and dependencies. The
improvements made from 1.42 to 1.42.3 mostly improved performance with
larger repos, so none of our existing benchmarks showed an improvement.
Here are the results for the changes made from 1.42 to 1.42.3 (the new
benchmark is the last one listed):
**1.42.0**
```text
Starting Deno benchmark
-> Start benchmarking lsp
- Simple Startup/Shutdown
(10 runs, mean: 379ms)
- Big Document/Several Edits
(5 runs, mean: 1142ms)
- Find/Replace
(10 runs, mean: 51ms)
- Code Lens
(10 runs, mean: 443ms)
- deco-cx/apps Multiple Edits + Navigation
(5 runs, mean: 25121ms)
<- End benchmarking lsp
```
**1.42.3**
```text
Starting Deno benchmark
-> Start benchmarking lsp
- Simple Startup/Shutdown
(10 runs, mean: 383ms)
- Big Document/Several Edits
(5 runs, mean: 1135ms)
- Find/Replace
(10 runs, mean: 55ms)
- Code Lens
(10 runs, mean: 440ms)
- deco-cx/apps Multiple Edits + Navigation
(5 runs, mean: 11675ms)
<- End benchmarking lsp
```
`TestEventSender` should not be Clone so we don't end up with multiple
copies of the same writer FD. This is probably not the cause of the test
channel lockups, but it's a lot easier to reason about.
Due to a terminating NUL that was placed in an `r#` (raw) string, we were not
actually NUL-terminating pipe names on Windows. While this has no
security implications due to the random nature of the prefix, it would
occasionally cause random failures when the trailing garbage would make
the pipe name invalid.
This commit moves the logic of dispatching lifecycle events
("load", "beforeunload", "unload") so they are triggered from Rust.
Before, we were executing scripts from Rust, but now we
are storing references to functions from "99_main.js" and calling
them directly.
Prerequisite for https://github.com/denoland/deno/issues/23342
I'm running into a node resolution bug in the lsp only and while
tracking it down I noticed this one.
Fixed by moving the project version out of `Documents`.
…faces (#23296)"
This reverts commit e190acbfa8.
Reverting because it broke stable API type declarations. We will reland
it for v1.43 with updated interfaces.
Fixes the regression described in
https://github.com/denoland/deno/pull/23293#issuecomment-2049819724.
This affected jupyter notebooks, as the LSP was passing in already
denormalized specifiers, while the jupyter kernel was not. We need to
denormalize the specifiers to evict the proper keys from our caches.
Currently we evict a lot of the caches on the JS side of things on every
request, namely script versions, script file names, and compiler
settings (as of #23283, it's not quite every request but it's still
unnecessarily often).
This PR reports changes to the JS side, so that it can evict exactly the
caches that it needs to. We might want to do some batching in the
future so as not to do 1 request per change.
This PR is a smaller retry of #23066 that simply ensures all async
`ext/fs` ops are accounted for if left hanging in tests. This also sorts
the `OP_DETAILS` in alphabetical order for easy future reading.
When reviewing, it might be best to look at the commits in order for
better understanding.
Removes the certificate options from all the interfaces and replaces
them with a new `TlsCertifiedKeyOptions`. This allows us to centralize
the documentation for TLS key management for both client and server, and
will allow us to add key object support in the future.
Also adds an optional `keyFormat` field to the cert/key that must be
omitted or set to `pem`. This will allow us to load other key formats in
the future (`der`, `pfx`, etc.).
In a future PR, we will add a way to load a certified key object, and we
will add another option to `TlsCertifiedKeyOptions` like so:
```ts
export type TlsCertifiedKeyOptions =
  | TlsCertifiedKeyPem
  | TlsCertifiedKeyFromFile
  | TlsCertifiedKeyConnectTls
  | { key: Deno.CertifiedKey };
```
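For reference, a minimal usage sketch of the PEM-based shape on the server side (not from the PR; the file names are placeholders):

```ts
// Sketch: passing a PEM-encoded certified key to listenTls.
const cert = await Deno.readTextFile("./server-cert.pem");
const key = await Deno.readTextFile("./server-key.pem");

const listener = Deno.listenTls({
  port: 8443,
  cert, // PEM text; `keyFormat` may be omitted or set to "pem"
  key,
});
```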
Previously we locked the entire `FileSystemDocuments` even for lookups,
causing contention. This was particularly bad because some of the hot
ops (namely `op_resolve`) can end up hitting that lock under contention.
This PR replaces the mutex with synchronization internal to
`FileSystemDocuments` (an `AtomicBool` for the dirty flag, and then a
`DashMap` for the actual documents).
I need to think a bit more about whether or not this introduces any
problematic race conditions.
Changes `discreet` to `discrete` in the documentation.
"Discreet" means careful to avoid being noticed; "discrete" means
consisting of separate parts, which is what the documentation refers to.
The TS language service requests source files via
[getSourceFile](7a25fd5ef0/cli/tsc/99_main_compiler.js (L560)).
In that function, we [unconditionally
add](7a25fd5ef0/cli/tsc/99_main_compiler.js (L613-L614))
the source file to our sourceFileCache. The issue is that we only remove
things from that cache if the source file [becomes out of
date](7a25fd5ef0/cli/tsc/99_main_compiler.js (L777-L783)).
For files that don't get changed, we keep them in the cache
indefinitely. So sometimes we keep SourceFile objects from being GC'ed
because they're retained in our cache, even though TS doesn't refer to
them any more. I see this in pretty much all of the heap snapshots I've
taken.
---
The fix here is pretty direct - just store weak references to the
source files in the cache. It doesn't really change our caching behavior;
it just prevents us from being the only retainer of a `SourceFile`. I
also split the `sourceFileCache` into a separate cache just for assets,
as we rely on those being alive.
The simpler fix is to only cache assets, but presumably that has a perf
impact.
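A minimal sketch of the weak-reference idea (illustrative names only, not the actual `99_main_compiler.js` code):

```ts
// Illustrative only: cache source files without keeping them alive.
const sourceFileCache = new Map<string, WeakRef<object>>();

function getCachedSourceFile(specifier: string): object | undefined {
  const file = sourceFileCache.get(specifier)?.deref();
  if (file === undefined) {
    // Never cached, or already garbage collected; drop the stale entry.
    sourceFileCache.delete(specifier);
  }
  return file;
}

function cacheSourceFile(specifier: string, file: object) {
  sourceFileCache.set(specifier, new WeakRef(file));
}
```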
---
In local testing, this PR reduced the size of the JS heap by about 1 GB
when using `deno lsp` in the Typescript repo.
This functionality was broken. The series of events was:
1. Load the npm resolution from the lockfile.
2. Discover only a subset of the specifiers in the documents.
3. Clear the npm snapshot.
4. Redo npm resolution with the new specifiers (~500ms).
What this now does:
1. Load the npm resolution from the lockfile.
2. Discover only a subset of the specifiers in the documents and take
into account the specifiers from the lockfile.
3. Do not redo resolution (~1ms).
The tests would deadlock if we tried to write the sync marker into a
pipe that was full because one test streamed just enough data to fill
the pipe, so when we went to actually write the sync marker we blocked
when nobody was reading.
We use a two-phase lock for sync markers now: one to indicate "ready to
sync" and the second to indicate that the sync bytes have been received.
This commit adds an enum to "InstallFlags" and "UninstallFlags" that will
allow supporting both local and global (un)installation.
Currently the local variant is not used.
Towards https://github.com/denoland/deno/issues/23062
Fixes #23163.
The client-facing warning doesn't provide any value and is super
annoying. We still emit a warning message on the server side for format
errors, which should fulfill the same (less intrusive) purpose.
When `DENO_FUTURE=1` env var is present, then BYONM
("bring your own node_modules") is enabled by default.
That means that if there's a `package.json` present, users
are expected to explicitly install dependencies from that file.
Towards https://github.com/denoland/deno/issues/23151
Was doing a bit of debugging on why some stuff is not working in a
personal project, ran a quick debug profile, and saw it cloning the
package.json a lot. We should put this in an `Rc`.
Unused locals and parameters don't make sense to surface in remote
modules. Additionally, fast check can cause these kinds of diagnostics
when publishing, so they should be ignored.
Closes #22959
Was investigating a separate stack overflow (that I've now found in the
node resolution code) and came across this. We should avoid recursion
(this is very old code).
This PR introduces the ability to exclude certain paths from the file watcher
in Deno. This is particularly useful when running scripts in watch mode,
as it allows developers to prevent unnecessary restarts when changes are
made to files that do not affect the running script, or when executing
scripts that generate new files, which would otherwise result in an
infinite restart loop.
---------
Co-authored-by: David Sherret <dsherret@gmail.com>
In preparation for upcoming changes to `deno install` in Deno 2.
If the `-g` or `--global` flag is not provided, a warning will be emitted:
```
⚠️ `deno install` behavior will change in Deno 2. To preserve the current behavior use `-g` or `--global` flag.
```
The same will happen for `deno uninstall`: unless the `-g`/`--global` flag
is provided, a warning will be emitted.
Towards https://github.com/denoland/deno/issues/23062
---------
Signed-off-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Co-authored-by: David Sherret <dsherret@users.noreply.github.com>
This commit changes "deno init" subcommand to use "jsr:" specifier for
standard library "assert" module. It is unversioned, but we will change
it to `@^1` once `@std/assert` release version 1.0.
This allows us to start decoupling `deno` and `deno_std` release. The
release scripts have been updated to take that into account.
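For example, the scaffolded test file now imports along these lines (a sketch; the exact generated code may differ):

```ts
// Unversioned for now; will become jsr:@std/assert@^1 once 1.0 is released.
import { assertEquals } from "jsr:@std/assert";

Deno.test(function addTest() {
  assertEquals(1 + 2, 3);
});
```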
Fixes #23053.
Two small bugs here:
- The existing condition for printing out the group header was broken.
It worked in the reproducer (in the issue above) without filtering only
by accident, due to setting `self.has_ungrouped = true` once we see the
warmup bench. Knowing that we sort benchmarks to put ungrouped benches
first, there are only two cases: 1) we are starting the first group, or
2) we are ending the previous group and starting a new one.
- When you passed `--filter`, we were applying that filter to the warmup
bench (which is not visible to users), so we suffered from JIT bias if
you were filtering (unless your filter was `<warmup>`).
TLDR;
Running
```bash
deno bench main.js --filter="G"
```
```js
// main.js
Deno.bench({
  group: "G1",
  name: "G1-A",
  fn() {},
});
Deno.bench({
  group: "G1",
  name: "G1-B",
  fn() {},
});
```
Before this PR:
```
benchmark time (avg) iter/s (min … max) p75 p99 p995
--------------------------------------------------------------- -----------------------------
G1-A 303.52 ps/iter 3,294,726,102.1 (254.2 ps … 7.8 ns) 287.5 ps 391.7 ps 437.5 ps
G1-B 3.8 ns/iter 263,360,635.9 (2.24 ns … 8.36 ns) 3.84 ns 4.73 ns 4.94 ns
summary
G1-A
12.51x faster than G1-B
```
After this PR:
```
benchmark time (avg) iter/s (min … max) p75 p99 p995
--------------------------------------------------------------- -----------------------------
group G1
G1-A 3.85 ns/iter 259,822,096.0 (2.42 ns … 9.03 ns) 3.83 ns 4.62 ns 4.83 ns
G1-B 3.84 ns/iter 260,458,274.5 (3.55 ns … 7.05 ns) 3.83 ns 4.45 ns 4.7 ns
summary
G1-B
1x faster than G1-A
```
Before this PR, we didn't have any integration tests set up for the
`jupyter` subcommand.
This PR adds a basic jupyter client and helpers for writing integration
tests for the jupyter kernel. A lot of the code here is boilerplate,
mainly around the message format for jupyter.
This also adds a few basic integration tests, most notably for
requesting execution of a snippet of code and getting the correct
results.
This patch gets the JUnit reporter to output more detailed information for
test steps (subtests).
## Issue with previous implementation
In the previous implementation, the test hierarchy was represented using
several XML tags like the following:
- `<testsuites>` corresponds to the entire test (one execution of `deno
test` has exactly one `<testsuites>` tag)
- `<testsuite>` corresponds to one file, such as `main_test.ts`
- `<testcase>` corresponds to one `Deno.test(...)`
- `<property>` corresponds to one `t.step(...)`
This structure describes the test layers, but one problem is that the
`<property>` tag is used for arbitrary purposes, so some tools that can
ingest a JUnit XML file might not be able to interpret `<property>` as
subtests.
## How other tools address it
Some of the testing frameworks in the ecosystem address this issue by
fitting subtests into the `<testcase>` layer. For instance, take a look
at the following Go test file:
```go
package main_test

import "testing"

func TestMain(t *testing.T) {
	t.Run("child 1", func(t *testing.T) {
		// OK
	})
	t.Run("child 2", func(t *testing.T) {
		// Error
		t.Fatal("error")
	})
}
```
Running [gotestsum], we can get the output like this:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="3" failures="2" errors="0" time="1.013694">
<testsuite tests="3" failures="2" time="0.510000" name="example/gosumtest" timestamp="2024-03-11T12:26:39+09:00">
<properties>
<property name="go.version" value="go1.22.1 darwin/arm64"></property>
</properties>
<testcase classname="example/gosumtest" name="TestMain/child_2" time="0.000000">
<failure message="Failed" type="">=== RUN TestMain/child_2
 main_test.go:12: error
--- FAIL: TestMain/child_2 (0.00s)
</failure>
</testcase>
<testcase classname="example/gosumtest" name="TestMain" time="0.000000">
<failure message="Failed" type="">=== RUN TestMain
--- FAIL: TestMain (0.00s)
</failure>
</testcase>
<testcase classname="example/gosumtest" name="TestMain/child_1" time="0.000000"></testcase>
</testsuite>
</testsuites>
```
This output shows that nested test cases are squashed into the
`<testcase>` layer by treating them as the same layer as their parent,
`TestMain`. We can still distinguish nested ones by their `name`
attributes that look like `TestMain/<subtest_name>`.
As described in #22795, [vitest] solves the issue in the same way as
[gotestsum].
One downside of this would be that one test failure that happens in a
nested test case will end up being counted multiple times, because not
only the subtest but also its wrapping container(s) are considered to be
failures. In fact, in the [gotestsum] output above, `TestMain/child_2`
failed (which is totally expected) while its parent, `TestMain`, was
also counted as failure. As
https://github.com/denoland/deno/pull/20273#discussion_r1307558757
pointed out, there is a test runner that offers flexibility to prevent
this, but I personally don't think the "duplicate failure count" issue
is a big deal.
## How to fix the issue in this patch
This patch fixes the issue with the same approach as [gotestsum] and
[vitest].
More specifically, nested test cases are put into the `<testcase>` level
and their names are now represented as squashed test names concatenated
by `>` (e.g. `parent 2 > child 1 > grandchild 1`). This change also
allows us to put a detailed error message as `<failure>` tag within the
`<testcase>` tag, which should be handled nicely by third-party tools
supporting JUnit XML.
## Extra fix
Also, file paths embedded into XML outputs are changed from absolute
paths to relative paths, which is helpful when running the test suites in
several different environments like CI.
Resolves #22795
[gotestsum]: https://github.com/gotestyourself/gotestsum
[vitest]: https://vitest.dev/
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This has been incorrect since the function adopted its (more intuitive)
current behavior in 9268df5f3. The same behavior change was backported
to v1.39.3 in 87e954f54.
Fixes #22941.
In that case, the only file with coverage was the `test.ts` file. The
coverage reporter filters out test files before compiling its report, so
after filtering we were left with an empty set of files. Later on it's
assumed that there is at least 1 file to be reported on, and we panic.
Instead of panicking, just issue an error after filtering.
In addition to the reasons for this outlined by @nayeemrmn in #14877
(which I think are reasons alone to not do this), this simplifies things
a lot because then we don't need to implement the following:
1. Need to handle a JSR module dynamically importing a module within it.
2. Need to handle importing an export of a JSR dep then another export
dynamically loaded later.
Additionally, people should be running `deno check dynamic_import.ts`
instead of relying on this behaviour.
Landing this as a fix because it's blocking people in some scenarios and
the current behaviour is broken (I didn't even have to change any tests
to remove this, which is bad).
Closes #22852
Closes #14877
Closes #22580
Skips the access check if the specific unary permission is in an
all-granted state. Generally prevents an allocation or two.
Hooks up a quiet "all" permission that is automatically inherited. This
permission will be used in the future to indicate that the user wishes
to accept all side-effects of the permissions they explicitly granted.
The "all" permission is an "ambient flag"-style permission that states
whether "allow-all" was passed on the command-line.
Issue https://github.com/denoland/deno/issues/22222
![image](https://github.com/denoland/deno/assets/34997667/2af8474b-b919-4519-98ce-9d29bc7829f2)
This PR moves `runtime/permissions` code to an upstream crate called
`deno_permissions`. The `deno_permissions::PermissionsContainer` is put
into the OpState and can be used instead of the current trait-based
permissions system.
For this PR, I've migrated `deno_fetch` to the new crate but kept the
rest of the trait-based system as a wrapper of the `deno_permissions` crate.
Doing the migration all at once is error prone and hard to review.
Comparing incremental compile times for `ext/fetch` on Mac M1:
| profile   | `cargo build --bin deno` | `cargo plonk build --bin deno` |
| --------- | ------------------------ | ------------------------------ |
| `debug`   | 20 s                     | 0.8 s                          |
| `release` | 4 min 12 s               | 1.4 s                          |
This commit fixes a race condition in the "node:worker_threads" module where
the first message did the setup of "threadId", "workerData" and
"environmentData".
Now this data is passed explicitly during workers creation and is set up
before any user code is executed.
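As an illustration (a sketch, not from the PR), this is the kind of worker code that could previously observe missing data if it ran before the setup message arrived:

```ts
// worker.ts: reads threadId/workerData immediately at startup.
// Before this fix, a worker racing ahead of the setup message could see
// these as unset; now they are populated before any user code runs.
import { threadId, workerData } from "node:worker_threads";

console.log(`worker ${threadId} started with`, workerData);
```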
Closes https://github.com/denoland/deno/issues/22783
Closes https://github.com/denoland/deno/issues/22672
---------
Co-authored-by: Satya Rohith <me@satyarohith.com>
This is an unrealistic scenario, but it's still a good thing to fix and
have a test for because it probably fixes some other underlying issues
with how the gitignore was being resolved for the root directory.
From https://github.com/denoland/deno/pull/22720#issuecomment-1986134425
Previously the sloppy resolver could not resolve the following:
- foo/bar.ts
- foo.ts
- index.ts
where `index.ts` contains `import "./foo"`, because it did not consider
`foo.ts` a valid target for this directory import.
This commit fixes this bug.
This is the release commit being forwarded back to main for 1.41.2
Signed-off-by: Divy Srivastava <dj.srivastava23@gmail.com>
Co-authored-by: Divy Srivastava <dj.srivastava23@gmail.com>
This allows explicitly overriding a .gitignore by specifying files and
directories in "include". This does not apply to globs in an include as
files matching those will still be gitignored. Additionally,
individually gitignored files within an included directory will still be
ignored.
1. Stops `deno publish` from using custom include/exclude behaviour that
differed from other subcommands.
2. Takes ancestor directories into account when resolving the gitignore.
3. Backwards-compatible change that adds the ability to un-exclude an
exclude by using a negated glob at a more specific level for all
subcommands (see https://github.com/denoland/deno_config/pull/44).
We emitted `import "./` rather than `import "./$NAME"`. This is now
fixed.
Also makes a cosmetic change so that `../` imports are now just imported
as `../`, not `./../`.
An undocumented "DENO_DISABLE_PEDANTIC_NODE_WARNINGS" env
var can be used to silence warnings for sloppy imports and node builtins
without `node:` prefix.
This was showing up on the flamegraph.
```
14:54 $ hyperfine -S none --warmup 25 '/tmp/deno run /tmp/empty.js' 'target/release/deno run /tmp/empty.js'
Benchmark 1: /tmp/deno run /tmp/empty.js
Time (mean ± σ): 17.2 ms ± 4.7 ms [User: 11.2 ms, System: 4.0 ms]
Range (min … max): 15.1 ms … 72.9 ms 172 runs
Warning: Statistical outliers were detected. Consider re-running this benchmark on a quiet system without any interferences from other programs. It might help to use the '--warmup' or '--prepare' options.
Benchmark 2: target/release/deno run /tmp/empty.js
Time (mean ± σ): 16.7 ms ± 1.1 ms [User: 11.1 ms, System: 4.0 ms]
Range (min … max): 15.0 ms … 20.1 ms 189 runs
Summary
'target/release/deno run /tmp/empty.js' ran
1.03 ± 0.29 times faster than '/tmp/deno run /tmp/empty.js'
```
The diagnostic was incorrect when importing a `.js` file that has a
corresponding `.d.ts` file with sloppy imports, because it would say to
change the `.js` extension to `.d.ts`, which is incorrect. We might as
well just hide this diagnostic.
Improves #19100
Fixes #20356
Replaces #20428
Changes made in deno_core to support this:
- [x] Errors must be handled in setTimeout callbacks
- [x] Microtask ordering is not-quite-right
- [x] Timer cancellation must be checked right before dispatch
- [x] Timer sanitizer
- [x] Move high-res timer to deno_core
- [x] Timers need opcall tracing
This commit adds "deno add" subcommand that has a basic support for
adding "jsr:" packages to "deno.json" file.
This currently doesn't support "npm:" specifiers and specifying version
constraints.
Some `deno_std` tests were failing to print output that was resolved
after the last test finished. In addition, output printed before tests
began would sometimes appear above the "running X tests ..." line, and
sometimes below it depending on timing.
We now guarantee that all output is flushed before and after tests run,
making the output consistent.
Pre-test and post-test output are captured in `------ pre-test output
------` and `------ post-test output ------` blocks to differentiate
them from the regular output blocks.
Here's an example of a test (that is much noisier than normal, but an
example of what the output will look like):
```
Check ./load_unload.ts
------- pre-test output -------
load
----- output end -----
running 1 test from ./load_unload.ts
test ...
------- output -------
test
----- output end -----
test ... ok ([WILDCARD])
------- post-test output -------
unload
----- output end -----
```
A security feature of JSR is that it is self-contained, other than npm
dependencies. At publish time, the registry rejects packages that write
code like this:
```ts
const data = await import("https://example.com/evil.js");
```
However, this can be trivially bypassed by writing code that the
registry cannot statically analyze. This PR prevents Deno from
loading dynamic imports that do this.
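For illustration (a sketch, not one of the PR's test cases), code like this inside a published JSR package is now refused when Deno goes to load it:

```ts
// The specifier is constructed at runtime, so the registry could not have
// statically analyzed it at publish time; Deno now refuses to load it.
const host = "https://example.com";
const mod = await import(`${host}/evil.js`);
```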
As we add tracing to more types of runtime activity, `--trace-ops` is
less useful of a name. `--trace-leaks` better reflects that this feature
traces both ops and timers, and will eventually trace resource opening
as well.
This keeps `--trace-ops` as an alias for `--trace-leaks`, but prints a
warning to the console suggesting migration to `--trace-leaks`.
One test continues to use `--trace-ops` to test the deprecation warning.
---------
Signed-off-by: Matt Mastracci <matthew@mastracci.com>
- Removes the origin call, since all origins are the same for an isolate
(i.e. the main module)
- Collects the `TestDescription`s and sends them all at the same time
inside of an Arc, allowing us to (later on) re-use these instead of
cloning.
Needs a follow-up pass to remove all the cloning, but that's a thread
that is pretty long to pull
---------
Signed-off-by: Matt Mastracci <matthew@mastracci.com>
Supply chain security for JSR.
```
$ deno publish --provenance
Successfully published @divy/test_provenance@0.0.3
Provenance transparency log available at https://search.sigstore.dev/?logIndex=73657418
```
0. Package has been published.
1. Fetches the version manifest and verifies it's matching with uploaded
files and exports.
2. Builds the attestation SLSA payload using the GitHub Actions env.
3. Creates an ephemeral key pair for signing the GitHub token
(aud=sigstore) and the DSSE pre-authentication tag.
4. Requests an X.509 signing certificate from Fulcio using the challenge
and ephemeral public key PEM.
5. Prepares a DSSE envelope for Rekor to witness. Posts an intoto entry
to Rekor and gets back the transparency log index.
6. Builds the provenance bundle and posts it to JSR.
Gets us closer to solving #20707.
Rewrites the `TestEventSender`:
- Allow for explicit creation of multiple streams. This will allow for
one-std{out,err}-per-worker
- All test events are received along with a worker ID, allowing for
eventual, proper parallel threading of test events.
In theory this should open up proper interleaving of test output,
however that is left for a future PR.
I had some plans for a better-performing synchronization primitive, but
the inter-thread communication is tricky. This does, however, speed up
the processing of large numbers of tests by 15-25% (possibly even more on
100,000+).
Before
```
ok | 1000 passed | 0 failed (32ms)
ok | 10000 passed | 0 failed (276ms)
```
After
```
ok | 1000 passed | 0 failed (25ms)
ok | 10000 passed | 0 failed (230ms)
```
This PR enhances the `deno publish` command to infer dependencies from
`package.json` if present.
Updates dependent crates, which includes an investigation fix by @irbull
in Deno's LSP when linting code. Huge thanks to Ian for tracking down
this issue.
Also includes Divy's deno_graph executor change, which reduces memory
usage when loading jsr specifiers and makes loading them faster.
Co-authored-by: irbull <irbull@users.noreply.github.com>
Co-authored-by: littledivy <littledivy@users.noreply.github.com>
When using a prefix or suffix containing an invalid filename character,
it's not entirely clear where the errors come from. We make these errors
more consistent across platforms.
In addition, all permission prompts for tempfile and tempdir were
printing the same API name.
We also take the opportunity to make the tempfile random space larger by
2x (using a base32-encoded u64 rather than a hex-encoded u32).
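As a quick illustration (a sketch, not from the PR), an invalid character in the prefix now produces a consistent error across platforms:

```ts
// A path separator is not a valid filename character, so this rejects with
// a clearer, platform-consistent error instead of an OS-specific one.
try {
  await Deno.makeTempFile({ prefix: "logs/" });
} catch (err) {
  console.error("invalid prefix:", err);
}
```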
1. Renames zap/fast-check to instead be a `no-slow-types` lint rule.
1. This lint rule is automatically run when doing `deno lint` for
packages (deno.json files with a name, version, and exports field); see
the example below.
1. This lint rule still runs on publish. It can be skipped by running
with `--no-slow-types`.
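For illustration, a sketch of the kind of code the rule flags (assuming its requirement of explicit types on exported items):

```ts
// Flagged by no-slow-types: the exported function's return type has to be
// inferred, which requires slow type analysis for consumers of the package.
export function add(a: number, b: number) {
  return a + b;
}

// Not flagged: the return type is written out explicitly.
export function addExplicit(a: number, b: number): number {
  return a + b;
}
```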
This change deprecates
`Deno.CreateHttpClientOptions.{certChain,privateKey}` in favour of
`Deno.CreateHttpClientOptions.{cert,key}`.
Closes #22278
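A rough before/after sketch of the rename (`Deno.createHttpClient` is an unstable API; the file names here are placeholders):

```ts
// Deprecated shape:
//   Deno.createHttpClient({ certChain, privateKey });

// Preferred shape after this change:
const cert = await Deno.readTextFile("./client-cert.pem");
const key = await Deno.readTextFile("./client-key.pem");
const client = Deno.createHttpClient({ cert, key });

const resp = await fetch("https://example.com", { client });
console.log(resp.status);
```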
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
The format of the sanitizers will change a little bit:
- If multiple async ops leak and traces are on, we repeat the async op
header once per stack trace.
- All leaks are aggregated under a "Leaks detected:" banner, as timer
leak tracking is eventually going to be added, and timers are neither ops
nor resources.
- `1 async op` is now `An async op`
- If ops and resources leak, we show both (rather than op leaks masking
resources)
Follow-on to https://github.com/denoland/deno/pull/22226
Splitting the sleep and interval ops allows us to detect an interval
timer. We also remove the use of the `op_async_void_deferred` call.
A future PR will be able to split the op sanitizer messages for timers
and intervals.
This changes the lockfile to not store JSR specifiers in the "remote"
section. Instead a single JSR integrity is stored per package in the
lockfile, which is a hash of the version's `x.x.x_meta.json` file, which
contains hashes for every file in the package. The hashes in this file
are then compared against when loading.
Additionally, when using `{ "vendor": true }` in a deno.json, the files
can be modified without causing lockfile errors—the checksum is only
checked when copying into the vendor folder and not afterwards
(eventually we should add this behaviour for non-jsr specifiers as
well). As part of this change, the `vendor` folder creation is not
always automatic in the LSP and running an explicit cache command is
necessary. The code required to track checksums in the LSP would have
been too complex for this PR, so that all goes through deno_graph now.
The vendoring is still automatic when running from the CLI.
Note: tests are not the only part of the codebase that uses `std`. Other
parts, like `tools/`, do too. So, it could be argued that this is a
little misleading. Either way, I'm doing this as discussed with
@mmastrac.
This introduces the `denort` binary - a slim version of deno without
tooling. The binary is used as the default for `deno compile`.
Improves `deno compile` final size by ~2.5x (141 MB -> 61 MB) on Linux
x86_64.
This implementation heavily depends on there being a lockfile, meaning
JSR specifiers will always diagnose as uncached unless it's there. In
practice this affects cases where a `deno.json` isn't being used. Our
NPM specifier support isn't subject to this.
The reason for this is that the version constraint solving code is
currently buried in `deno_graph` and not usable from the LSP, so the
only way to reuse that logic is the solved-version map in the lockfile's
`packages.specifiers`.
This looks like a massive PR, but it's only a move from cli/tests ->
tests, and updates of relative paths for files.
This is the first step towards aggregate all of the integration test
files under tests/, which will lead to a set of integration tests that
can run without the CLI binary being built.
While we could leave these tests under `cli`, it would require us to
keep a more complex directory structure for the various test runners. In
addition, we have a lot of complexity to ignore various test files in
the `cli` project itself (cargo publish exclusion rules, autotests =
false, etc).
And finally, the `tests/` folder will eventually house the `test_ffi`,
`test_napi` and other testing code, reducing the size of the root repo
directory.
For easier review, the extremely large and noisy "move" is in the first
commit (with no changes -- just a move), while the remainder of the
changes to actual files is in the second commit.
Removes the `FileFetcher`'s internal cache because I don't believe it's
necessary (we already cache this kind of stuff in other places, like
deno_graph or config file handling). Removing it fixes this bug because
this functionality was already implemented in deno_graph and lowers
memory usage of the CLI a little bit.
This PR separates integration tests from CLI tests into a new project
named `cli_tests`. This is a prerequisite for an integration test runner
that can work with either the CLI binary in the current project, or one
that is built ahead of time.
## Background
Rust does not have the concept of artifact dependencies yet
(https://github.com/rust-lang/cargo/issues/9096). Because of this, the
only way we can ensure a binary is built before running associated tests
is by hanging tests off the crate with the binary itself.
Unfortunately this means that to run those tests, you _must_ build the
binary and in the case of the deno executable that might be a 10 minute
wait in release mode.
## Implementation
To allow for tests to run with and without the requirement that the
binary is up-to-date, we split the integration tests into a project of
their own. As these tests would not require the binary to build itself
before being run as-is, we add a stub integration `[[test]]` target in
the `cli` project that invokes these tests using `cargo test`.
The stub test runner we add has `harness = false` so that we can get
access to a `main` function. This `main` function's sole job is to
`execvp` the command `cargo test -p deno_cli`, effectively "calling"
another cargo target.
This ensures that the deno executable is always correctly rebuilt before
running the stub test runner from `cli`, and gets us closer to be able
to run the entire integration test suite on arbitrary deno executables
(and therefore split the build into multiple phases).
The new `cli_tests` project lives within `cli` to avoid a large PR. In
later PRs, the test data will be split from the `cli` project. As there
are a few thousand files, it'll be better to do this as a completely
separate PR to avoid noise.