This brings in [`runtimelib`](https://github.com/runtimed/runtimed) to provide:
## Fully typed structs for Jupyter Messages
```rust
let msg = connection.read().await?;
self
  .send_iopub(
    runtimelib::Status::busy().as_child_of(msg),
  )
  .await?;
```
## Jupyter paths
Jupyter paths are implemented in Rust, allowing the Deno kernel to be
installed completely via Deno without a requirement on Python or
Jupyter. Deno users will be able to install and use the kernel with just
VS Code or other editors that support Jupyter.
```rust
pub fn status() -> Result<(), AnyError> {
  let user_data_dir = user_data_dir()?;
  let kernel_spec_dir_path = user_data_dir.join("kernels").join("deno");
  let kernel_spec_path = kernel_spec_dir_path.join("kernel.json");

  if kernel_spec_path.exists() {
    log::info!("✅ Deno kernel already installed");
    Ok(())
  } else {
    log::warn!("ℹ️ Deno kernel is not yet installed, run `deno jupyter --install` to set it up");
    Ok(())
  }
}
```
Closes https://github.com/denoland/deno/issues/21619
The stderr stream from the LSP is consumed by a separate thread, so it
may not have processed the part we care about yet. Instead, wait until
you see the measure for the request you care about.
VS Code will typically send a `textDocument/semanticTokens/full` request
followed by `textDocument/semanticTokens/range`, and occasionally
requests semantic tokens even when we know nothing has changed. Semantic
tokens also get refreshed on each change. Computing semantic tokens is
relatively heavy in TSC, so we should avoid it as much as possible.
Caches the semantic tokens for open documents to avoid making TSC do
unnecessary work. This results in a noticeable improvement in local
benchmarking; a sketch of the caching idea follows the benchmark output
below.
before:
```
Starting Deno benchmark
-> Start benchmarking lsp
- Simple Startup/Shutdown
(10 runs, mean: 383ms)
- Big Document/Several Edits
(5 runs, mean: 1079ms)
- Find/Replace
(10 runs, mean: 59ms)
- Code Lens
(10 runs, mean: 440ms)
- deco-cx/apps Multiple Edits + Navigation
(5 runs, mean: 9921ms)
<- End benchmarking lsp
```
after:
```
Starting Deno benchmark
-> Start benchmarking lsp
- Simple Startup/Shutdown
(10 runs, mean: 395ms)
- Big Document/Several Edits
(5 runs, mean: 1024ms)
- Find/Replace
(10 runs, mean: 56ms)
- Code Lens
(10 runs, mean: 438ms)
- deco-cx/apps Multiple Edits + Navigation
(5 runs, mean: 8927ms)
<- End benchmarking lsp
```
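The caching idea itself, as a minimal TypeScript sketch (names and data
shapes are assumed for illustration; this is not the actual
implementation): results are keyed by document version, so repeated
requests for an unchanged document never reach TSC.

```ts
// Hypothetical sketch: cache semantic tokens per (uri, version) so an
// unchanged document never triggers a recomputation in TSC.
interface CachedTokens {
  version: number;
  tokens: Uint32Array;
}

const tokensCache = new Map<string, CachedTokens>();

function semanticTokens(
  uri: string,
  version: number,
  compute: () => Uint32Array, // the expensive TSC call
): Uint32Array {
  const hit = tokensCache.get(uri);
  if (hit && hit.version === version) {
    return hit.tokens; // document unchanged; reuse the cached result
  }
  const tokens = compute();
  tokensCache.set(uri, { version, tokens });
  return tokens;
}
```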
This PR directly addresses the issue raised in #23282 where Deno panics
if `deno coverage` is called with `--include` regex that returns no
matches.
I've opted not to change the return value of `collect_summary` for
simplicity, and instead return an empty `HashMap`.
Precursor to #23236
This implements the SNI features, but uses private symbols to avoid
exposing the functionality at this time. Note that to properly test this
feature, we need to add a way for `connectTls` to specify a hostname.
This is something that should be pushed into that API at a later time as
well.
```ts
Deno.test(
  { permissions: { net: true, read: true } },
  async function listenResolver() {
    let sniRequests = [];
    const listener = Deno.listenTls({
      hostname: "localhost",
      port: 0,
      [resolverSymbol]: (sni: string) => {
        sniRequests.push(sni);
        return {
          cert,
          key,
        };
      },
    });

    {
      const conn = await Deno.connectTls({
        hostname: "localhost",
        [serverNameSymbol]: "server-1",
        port: listener.addr.port,
      });
      const [_handshake, serverConn] = await Promise.all([
        conn.handshake(),
        listener.accept(),
      ]);
      conn.close();
      serverConn.close();
    }

    {
      const conn = await Deno.connectTls({
        hostname: "localhost",
        [serverNameSymbol]: "server-2",
        port: listener.addr.port,
      });
      const [_handshake, serverConn] = await Promise.all([
        conn.handshake(),
        listener.accept(),
      ]);
      conn.close();
      serverConn.close();
    }

    assertEquals(sniRequests, ["server-1", "server-2"]);
    listener.close();
  },
);
```
---------
Signed-off-by: Matt Mastracci <matthew@mastracci.com>
Moves sloppy import resolution from the loader to the resolver.
Also adds some test helper functions to make the LSP tests less verbose.
---------
Co-authored-by: David Sherret <dsherret@gmail.com>
1. Generally we should prefer to use the `log` crate.
2. I very often accidentally commit `eprintln`s.
In the cases where `println` or `eprintln` is genuinely wanted, it's not
too bad to be a bit more verbose and explicitly ignore the lint rule.
Fixes the `Debug Failure` errors described in
https://github.com/denoland/deno/issues/23643#issuecomment-2094552765.
The issue here was that we were passing diagnostic codes as strings but
TSC expects the codes to be numbers. This resulted in some quick fixes
not working (as illustrated by the test added here which fails before
this PR).
The first commit is the actual fix. The rest are just test related.
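The gist of the fix, as a hedged sketch (the data shape here is
assumed, not the actual internal type): codes arriving as strings from
the diagnostics layer are converted to numbers before being handed to
TSC.

```ts
// Illustrative only: TSC APIs such as getCodeFixesAtPosition take
// numeric error codes, while our diagnostics may carry string codes.
interface Diagnostic {
  code: string | number;
}

function toTscErrorCodes(diagnostics: Diagnostic[]): number[] {
  return diagnostics
    .map((d) => typeof d.code === "string" ? Number(d.code) : d.code)
    .filter((code) => Number.isFinite(code)); // drop non-numeric codes
}
```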
A bunch of small things, mostly around timing and making sure the
Jupyter kernel is actually running and ready to respond to requests. I
reproduced the flakiness by running a script that launched many
instances of the test in parallel, which produced failures
consistently. After this PR, I can't reproduce the flakiness locally,
which hopefully means the same holds on CI.
This PR adds private `[REF]()` and `[UNREF]()` methods to the Stdin
class, and calls them from the Node.js polyfill layer (the `TTY`
class). This enables `process.stdin.unref()` and `process.stdin.ref()`
for the case when stdin is a terminal.
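For illustration, this is the kind of Node.js program that now behaves
correctly under the compat layer:

```ts
import process from "node:process";

process.stdin.on("data", (chunk) => {
  console.log("input:", chunk.toString());
});

// Without unref support, this listener would keep the event loop (and
// therefore the process) alive forever when stdin is a terminal.
process.stdin.unref();

// ref() can later re-mark stdin as keeping the process alive:
// process.stdin.ref();
```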
closes #21796
This commit adds a "private npm registry" to the test server. This
registry requires to send an appropriate Authorization header.
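The shape of such a request, for illustration only (the URL, port, and
token below are made up, not the test server's actual values):

```ts
// Hypothetical private-registry request; without the Authorization
// header the test server rejects it.
const res = await fetch("http://localhost:4261/@denotest/private-pkg", {
  headers: { authorization: "Bearer some-test-token" },
});
console.log(res.status);
```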
Towards https://github.com/denoland/deno/issues/16105
This commit changes the workspace support so that all workspace members
are available as imports based on their names and versions.
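As a sketch of what that enables (package names and layout are assumed
for illustration; in recent Deno versions the root `deno.json` lists
members under a `workspace` key, e.g. `"workspace": ["./add",
"./subtract"]`, and each member declares its own `name` and `version`):

```ts
// In ./subtract/mod.ts — a hypothetical member importing its sibling
// workspace member by package name rather than by relative path.
import { add } from "@scope/add";

export function subtract(a: number, b: number): number {
  return add(a, -b);
}
```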
Closes https://github.com/denoland/deno/issues/23343
This PR wires up a new `jsxPrecompileSkipElements` option in
`compilerOptions` that can be used to exempt a list of elements from
being precompiled with the `precompile` JSX transform.
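For example, a `deno.json` along these lines (the element list and
import source are chosen for illustration) keeps `a` and `img` as
regular JSX elements while everything else is precompiled:

```jsonc
// deno.json
{
  "compilerOptions": {
    "jsx": "precompile",
    "jsxImportSource": "preact",
    "jsxPrecompileSkipElements": ["a", "img"]
  }
}
```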
The actual handling of `$projectChanged` is quick, but JS requests are
not. The cleared caches only get repopulated on the next actual request,
so just batch the change notification in with the next actual request.
No significant difference in benchmarks on my machine, but this speeds
up `did_change` handling and reduces our total number of JS requests (in
addition to coalescing multiple JS change notifications into one).
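A rough sketch of the coalescing idea (all names here are assumed, not
the actual internals): change notifications are queued, and the queue is
flushed only when a real request goes out.

```ts
type Params = Record<string, unknown>;
declare function sendToJs(method: string, params: Params): Promise<unknown>;

let pendingChanges: Params[] = [];

function queueProjectChanged(params: Params) {
  // Coalesce: many edits become a single pending notification batch.
  pendingChanges.push(params);
}

async function jsRequest(method: string, params: Params): Promise<unknown> {
  if (pendingChanges.length > 0) {
    // Flush the batched change notification along with this request.
    await sendToJs("$projectChanged", { changes: pendingChanges });
    pendingChanges = [];
  }
  return await sendToJs(method, params);
}
```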
Embedders may have special requirements around file opening, so we add a
new `check_open` permission check that is called as part of the file
open process.
This PR enables the V8 code cache for ES modules and for `require`
scripts through `op_eval_context`. Code cache artifacts are
transparently stored and fetched using a SQLite database and are passed
to V8. `--no-code-cache` can be used to disable it.
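For example, to opt out of the cache for a single run:

```
deno run --no-code-cache main.ts
```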
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Currently we evict a lot of the caches on the JS side of things on every
request, namely script versions, script file names, and compiler
settings (as of #23283, it's not quite every request but it's still
unnecessarily often).
This PR reports changes to the JS side so that it can evict exactly the
caches that it needs to. We might want to do some batching in the
future so as not to do one request per change.
This functionality was broken. The series of events was:
1. Load the npm resolution from the lockfile.
2. Discover only a subset of the specifiers in the documents.
3. Clear the npm snapshot.
4. Redo npm resolution with the new specifiers (~500ms).
What this now does:
1. Load the npm resolution from the lockfile.
2. Discover only a subset of the specifiers in the documents and take
into account the specifiers from the lockfile.
3. Do not redo resolution (~1ms).
When the `DENO_FUTURE=1` env var is present, BYONM
("bring your own node_modules") is enabled by default.
That means that if there's a `package.json` present, users
are expected to explicitly install dependencies from that file.
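For example:

```
DENO_FUTURE=1 deno run main.ts
```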
Towards https://github.com/denoland/deno/issues/23151
The permission prompt doesn't wait for quiescent input, so someone
pasting a large text file into the console may end up losing the prompt.
We enforce a minimum human delay and wait for a 100ms quiescent period
before we write and accept prompt input to avoid this problem.
This does require adding a human delay in all prompt tests, but that's
pretty straightforward. I rewrote the locked stdout/stderr test while I
was in here.
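The timing rule itself, as a pure-logic TypeScript sketch (the constant
values are assumptions for illustration, not the actual ones):

```ts
const QUIET_MS = 100; // required quiet period before showing the prompt
const MIN_HUMAN_DELAY_MS = 50; // assumed minimum; not Deno's constant

// Only display the prompt once stdin has been quiet for a while, so a
// paste in progress can't scroll the prompt away.
function stdinIsQuiescent(lastInputAt: number, now: number): boolean {
  return now - lastInputAt >= QUIET_MS;
}

// Reject answers that arrive implausibly fast after the prompt was
// shown; those are likely leftover pasted bytes, not a human decision.
function answerLooksHuman(promptShownAt: number, answeredAt: number): boolean {
  return answeredAt - promptShownAt >= MIN_HUMAN_DELAY_MS;
}
```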
There's a TOCTOU issue that can happen when selecting unused ports for
the server to use (we get assigned an unused port by the OS, and between
then and when the server actually binds to the port another test steals
it). Improve this by checking whether the server exited soon after
setup, and if so, retrying it. Client connection can also fail
spuriously (in local testing), so a retry mechanism was added there as
well.
This also fixes a hang, where if the server exited (almost always due to
the issue described above) before we connected to it, attempting to
connect our client ZMQ sockets to it would just hang. To resolve this, I
added a timeout so we can't wait forever.
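Both mitigations, sketched generically in TypeScript (all helper names
are assumed; this is not the actual test harness):

```ts
interface KernelServer {
  exitedEarly(): Promise<boolean>;
}
declare function spawnKernelServer(): Promise<KernelServer>;

// Retry startup: if the OS-assigned port was stolen between selection
// and bind, the server exits almost immediately; spawn a fresh one.
async function startWithRetry(maxAttempts = 3): Promise<KernelServer> {
  for (let i = 0; i < maxAttempts; i++) {
    const server = await spawnKernelServer();
    if (!(await server.exitedEarly())) return server;
  }
  throw new Error("kernel server failed to start");
}

// Deadline on connecting, so a dead server can't hang the test forever.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    promise,
    new Promise<never>((_, reject) =>
      setTimeout(() => reject(new Error(`timed out after ${ms}ms`)), ms)
    ),
  ]);
}
```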
This PR introduces the ability to exclude certain paths from the file
watcher in Deno. This is particularly useful when running scripts in
watch mode, as it allows developers to prevent unnecessary restarts when
changes are made to files that do not affect the running script, or when
executing scripts that generate new files, which would otherwise result
in an infinite restart loop.
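For illustration, the exclusion is driven by a CLI flag along these
lines (flag spelling shown as an assumption; check `deno help` for the
exact name):

```
deno run --watch --watch-exclude=out/generated.ts main.ts
```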
---------
Co-authored-by: David Sherret <dsherret@gmail.com>
In preparation for upcoming changes to `deno install` in Deno 2.
If the `-g` or `--global` flag is not provided, a warning will be emitted:
```
⚠️ `deno install` behavior will change in Deno 2. To preserve the current behavior use `-g` or `--global` flag.
```
The same will happen for `deno uninstall`: unless the `-g`/`--global`
flag is provided, a warning will be emitted.
Towards https://github.com/denoland/deno/issues/23062
---------
Signed-off-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Co-authored-by: David Sherret <dsherret@users.noreply.github.com>
This commit changes "deno init" subcommand to use "jsr:" specifier for
standard library "assert" module. It is unversioned, but we will change
it to `@^1` once `@std/assert` release version 1.0.
This allows us to start decoupling `deno` and `deno_std` release. The
release scripts have been updated to take that into account.
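For illustration, a freshly generated test file now starts from an
import like this (the module specifier is per this change; the exact
scaffold contents may differ):

```ts
import { assertEquals } from "jsr:@std/assert";

Deno.test(function addTest() {
  assertEquals(1 + 1, 2);
});
```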
Before this PR, we didn't have any integration tests set up for the
`jupyter` subcommand.
This PR adds a basic jupyter client and helpers for writing integration
tests for the jupyter kernel. A lot of the code here is boilerplate,
mainly around the message format for jupyter.
This also adds a few basic integration tests, most notably for
requesting execution of a snippet of code and getting the correct
results.
This patch gets the JUnit reporter to output more detailed information
for test steps (subtests).
## Issue with previous implementation
In the previous implementation, the test hierarchy was represented using
several XML tags like the following:
- `<testsuites>` corresponds to the entire test run (one execution of
`deno test` has exactly one `<testsuites>` tag)
- `<testsuite>` corresponds to one file, such as `main_test.ts`
- `<testcase>` corresponds to one `Deno.test(...)`
- `<property>` corresponds to one `t.step(...)`
This structure describes the test layers, but one problem is that the
`<property>` tag is used for arbitrary purposes, so some tools that
ingest a JUnit XML file might not be able to interpret `<property>` as
subtests.
## How other tools address it
Some of the testing frameworks in the ecosystem address this issue by
fitting subtests into the `<testcase>` layer. For instance, take a look
at the following Go test file:
```go
package main_test

import "testing"

func TestMain(t *testing.T) {
	t.Run("child 1", func(t *testing.T) {
		// OK
	})

	t.Run("child 2", func(t *testing.T) {
		// Error
		t.Fatal("error")
	})
}
```
Running [gotestsum], we can get the output like this:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites tests="3" failures="2" errors="0" time="1.013694">
  <testsuite tests="3" failures="2" time="0.510000" name="example/gosumtest" timestamp="2024-03-11T12:26:39+09:00">
    <properties>
      <property name="go.version" value="go1.22.1 darwin/arm64"></property>
    </properties>
    <testcase classname="example/gosumtest" name="TestMain/child_2" time="0.000000">
      <failure message="Failed" type="">=== RUN   TestMain/child_2
    main_test.go:12: error
--- FAIL: TestMain/child_2 (0.00s)
</failure>
    </testcase>
    <testcase classname="example/gosumtest" name="TestMain" time="0.000000">
      <failure message="Failed" type="">=== RUN   TestMain
--- FAIL: TestMain (0.00s)
</failure>
    </testcase>
    <testcase classname="example/gosumtest" name="TestMain/child_1" time="0.000000"></testcase>
  </testsuite>
</testsuites>
</testsuites>
```
This output shows that nested test cases are squashed into the
`<testcase>` layer by treating them as the same layer as their parent,
`TestMain`. We can still distinguish nested ones by their `name`
attributes that look like `TestMain/<subtest_name>`.
As described in #22795, [vitest] solves the issue in the same way as
[gotestsum].
One downside of this would be that one test failure that happens in a
nested test case will end up being counted multiple times, because not
only the subtest but also its wrapping container(s) are considered to be
failures. In fact, in the [gotestsum] output above, `TestMain/child_2`
failed (which is totally expected) while its parent, `TestMain`, was
also counted as failure. As
https://github.com/denoland/deno/pull/20273#discussion_r1307558757
pointed out, there is a test runner that offers flexibility to prevent
this, but I personally don't think the "duplicate failure count" issue
is a big deal.
## How to fix the issue in this patch
This patch fixes the issue with the same approach as [gotestsum] and
[vitest].
More specifically, nested test cases are put into the `<testcase>` level
and their names are now represented as squashed test names concatenated
by `>` (e.g. `parent 2 > child 1 > grandchild 1`). This change also
allows us to put a detailed error message as `<failure>` tag within the
`<testcase>` tag, which should be handled nicely by third-party tools
supporting JUnit XML.
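Illustrative shape of the new output (test names and attribute values
here are invented):

```xml
<testcase name="parent 2 > child 1 > grandchild 1" time="0.001">
  <failure message="AssertionError: ...">detailed error message here</failure>
</testcase>
```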
## Extra fix
Also, file paths embedded in the XML output are changed from absolute to
relative paths, which is helpful when running the test suites in several
different environments such as CI.
Resolves #22795
[gotestsum]: https://github.com/gotestyourself/gotestsum
[vitest]: https://vitest.dev/
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Fixes #19214.
We were using the `idna` crate to implement our polyfill for
`punycode.toASCII` and `punycode.toUnicode`. The `idna` crate is
correct, and adheres to the IDNA2003/2008 spec, but it turns out
`node`'s implementations don't really follow any spec! Instead, node
splits the domain by `'.'` and punycode encodes/decodes each part. This
means that node's implementations will happily work on codepoints that
are disallowed by the IDNA specs, causing the error in #19214.
While fixing this, I went ahead and matched the node behavior on all of
the punycode functions and enabled node's punycode test in our
`node_compat` suite.
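A small illustration of the per-label behavior (using the well-known
`mañana` example):

```ts
import punycode from "node:punycode";

// Node encodes each dot-separated label independently, without IDNA
// validation of the domain as a whole.
console.log(punycode.toASCII("mañana.example.com"));
// -> "xn--maana-pta.example.com"

// And decodes back label by label as well.
console.log(punycode.toUnicode("xn--maana-pta.example.com"));
// -> "mañana.example.com"
```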