In https://github.com/denoland/deno/pull/23955 we changed the sqlite db
journal mode to WAL. This causes issues when someone runs both an old
version of Deno (which uses TRUNCATE) and a new version, because the two
fight against each other over the journal mode.
This fixes the same issue in two different places: doing blocking FS work in an
async task, which limits the amount of work that can happen concurrently (a
sketch of the offloading pattern follows the list).
- When setting up node_modules, we set up entries concurrently but were
blocking other tasks from actually running.
- When loading package info from the npm registry file cache, loading
and deserializing the file is expensive and prevents concurrency. This was
especially noticeable when loading an npm resolution snapshot from a
lockfile (`snapshot_from_lockfile` in `deno_npm`).
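A minimal sketch of the offloading pattern referenced above (assuming tokio, serde_json, and anyhow; the `load_package_info` helper is illustrative, not the code in this PR):
```rust
use std::path::PathBuf;

// Hypothetical helper: read and deserialize a cached registry file without
// blocking the async executor; `spawn_blocking` moves the work onto tokio's
// dedicated blocking thread pool.
async fn load_package_info(path: PathBuf) -> Result<serde_json::Value, anyhow::Error> {
  let info = tokio::task::spawn_blocking(move || {
    let text = std::fs::read_to_string(&path)?;
    let value: serde_json::Value = serde_json::from_str(&text)?;
    Ok::<_, anyhow::Error>(value)
  })
  .await??;
  Ok(info)
}
```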
Installing deps in `deno-docs`:
```
❯ hyperfine -i -p 'rm -rf node_modules/' '../d7/deno-main i' '../d7/target/release/deno i'
Benchmark 1: ../d7/deno-main i
Time (mean ± σ): 2.193 s ± 0.027 s [User: 0.589 s, System: 1.033 s]
Range (min … max): 2.151 s … 2.242 s 10 runs
Benchmark 2: ../d7/target/release/deno i
Time (mean ± σ): 1.597 s ± 0.021 s [User: 0.977 s, System: 1.337 s]
Range (min … max): 1.550 s … 1.627 s 10 runs
Summary
../d7/target/release/deno i ran
1.37 ± 0.02 times faster than ../d7/deno-main i
```
Caching `npm:@11ty/eleventy`:
```
❯ hyperfine -i -p 'rm -rf node_modules/' --warmup 5 '../../d7/deno-main cache npm:@11ty/eleventy' '../../d7/target/release/deno cache npm:@11ty/eleventy'
Benchmark 1: ../../d7/deno-main cache npm:@11ty/eleventy
Time (mean ± σ): 129.9 ms ± 2.2 ms [User: 27.5 ms, System: 101.3 ms]
Range (min … max): 127.5 ms … 135.8 ms 10 runs
Benchmark 2: ../../d7/target/release/deno cache npm:@11ty/eleventy
Time (mean ± σ): 100.6 ms ± 1.3 ms [User: 38.8 ms, System: 233.8 ms]
Range (min … max): 99.3 ms … 103.2 ms 10 runs
Summary
../../d7/target/release/deno cache npm:@11ty/eleventy ran
1.29 ± 0.03 times faster than ../../d7/deno-main cache npm:@11ty/eleventy
```
---------
Co-authored-by: David Sherret <dsherret@gmail.com>
Hard linking (`linkat`) is ridiculously slow on macOS. `copyfile` is
better, but what's even faster is `clonefile`. It doesn't have the space
savings that come with hard linking, but the performance difference is
worth it imo.
```
❯ hyperfine -i -p 'rm -rf node_modules/' '../../d7/target/release/deno cache npm:@11ty/eleventy' 'deno cache npm:@11ty/eleventy'
Benchmark 1: ../../d7/target/release/deno cache npm:@11ty/eleventy
Time (mean ± σ): 115.4 ms ± 1.2 ms [User: 27.2 ms, System: 87.3 ms]
Range (min … max): 113.7 ms … 117.5 ms 10 runs
Benchmark 2: deno cache npm:@11ty/eleventy
Time (mean ± σ): 619.3 ms ± 6.4 ms [User: 34.3 ms, System: 575.6 ms]
Range (min … max): 612.2 ms … 633.3 ms 10 runs
Summary
../../d7/target/release/deno cache npm:@11ty/eleventy ran
5.37 ± 0.08 times faster than deno cache npm:@11ty/eleventy
```
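For illustration, a rough sketch of a `clonefile`-based copy (assuming the `libc` crate; not necessarily the exact implementation in this PR):
```rust
// Sketch only: clone `from` to `to` using macOS's copy-on-write clonefile(2).
#[cfg(target_os = "macos")]
fn clone_file(from: &std::path::Path, to: &std::path::Path) -> std::io::Result<()> {
  use std::os::unix::ffi::OsStrExt;
  let from = std::ffi::CString::new(from.as_os_str().as_bytes())?;
  let to = std::ffi::CString::new(to.as_os_str().as_bytes())?;
  // SAFETY: both pointers reference valid, NUL-terminated C strings.
  let ret = unsafe { libc::clonefile(from.as_ptr(), to.as_ptr(), 0) };
  if ret == 0 {
    Ok(())
  } else {
    Err(std::io::Error::last_os_error())
  }
}
```
Note that `clonefile` only works when source and destination live on the same APFS volume, so a caller would still fall back to a regular copy when it fails.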
# Summary
This PR resolves the issue.
fixes #10810
The earlier context is in PR #22582.
Here is an example of the expected behaviour with this change.
- 🦕.test.ts
```ts
import { assertEquals } from "https://deno.land/std@0.215.0/assert/mod.ts";
Deno.test("example test", () => {
  assertEquals("🍋", "🦕");
});
```
The mixed `number | bigint` representation was a useful optimization for
pointers. Now, pointers are represented as V8 externals. As part of the
FFI stabilization effort we want to make `bigint` the only
representation for `u64` and `i64`.
BigInt representation performance is almost on par with the mixed
representation, with the added benefit that it's less confusing and users
don't need manual checks and conversions to operate on the value.
```
cpu: AMD Ryzen 5 7530U with Radeon Graphics
runtime: deno 1.43.6+92a8d09 (x86_64-unknown-linux-gnu)
file:///home/divy/gh/ffi/main.ts
benchmark time (avg) iter/s (min … max) p75 p99 p995
-------------------------------------------------------------------------- -----------------------------
nop 4.01 ns/iter 249,533,690.5 (3.97 ns … 10.8 ns) 3.97 ns 4.36 ns 9.03 ns
ret bigint 7.74 ns/iter 129,127,186.8 (7.72 ns … 10.46 ns) 7.72 ns 8.11 ns 8.82 ns
ret i32 7.81 ns/iter 128,087,100.5 (7.77 ns … 12.72 ns) 7.78 ns 8.57 ns 9.75 ns
ret bigint (add op) 15.02 ns/iter 66,588,253.2 (14.64 ns … 24.99 ns) 14.76 ns 19.13 ns 19.44 ns
ret i32 (add op) 12.02 ns/iter 83,209,131.8 (11.95 ns … 18.18 ns) 11.98 ns 13.11 ns 14.5 ns
```
Enhanced warning messages for the --env flag with the run and eval subcommands.
This commit specifically addresses issue #23674 by improving the
warning messages that appear when using the --env flag with the run or eval
subcommands in the following scenarios:
1. Missing environment file.
2. Incorrect syntax in the environment file content.
**Changes made**
- Distinguishes between a missing environment file and wrong syntax in
the environment file content.
- Shows a concise warning message that conveys which issue occurred.
**Code changes & enhancements**
- Implemented a match statement to handle the different types of errors
received while reading and parsing the file content, displaying a concise
warning message for each, rather than a simple error check that shows the
same warning regardless of the error type (a sketch follows this list).
- Updated the related existing tests to reflect the new warning
messages.
- Added two test cases to cover the wrong environment file content
syntax with both run and eval subcommands.
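A minimal sketch of such a match (assuming the `dotenvy` crate; the warning text is paraphrased, not the exact messages added in this commit):
```rust
// Sketch: distinguish "file missing" from "file has a syntax error".
fn load_env_file(file_path: &str) {
  match dotenvy::from_filename(file_path) {
    Ok(_) => {}
    Err(dotenvy::Error::LineParse(line, index)) => log::warn!(
      "Parsing failed within the specified environment file: {file_path} at index {index} of the value: {line}"
    ),
    Err(dotenvy::Error::Io(_)) => log::warn!(
      "The `--env` flag was used, but the environment file '{file_path}' was not found"
    ),
    Err(error) => log::warn!("Unknown failure while reading '{file_path}': {error}"),
  }
}
```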
**Impact**
Using the --env flag with run/eval becomes more user-friendly, as the
warning now gives a precise description of what went wrong when the flag
is used incorrectly.
If you could give it a look, @dsherret , I appreciate your feedback on
these changes.
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This commit adds initial support for ".npmrc" files.
Currently we only discover ".npmrc" files next to "package.json" files;
discovering these files in the user home dir is left for a follow up.
This first pass supports "_authToken" and "_auth" configuration
for providing authentication.
LSP support has been left for a follow up PR.
Towards https://github.com/denoland/deno/issues/16105
Fixes #23571.
Previously, we required a `deno.json` to be present (or the `--lock`
flag) in order to resolve a `deno.lock` file. This meant that if
you were using deno in an npm-first project, deno wouldn't use a
lockfile.
Additionally, while I was fixing that, I discovered there were a couple
bugs keeping the future `install` command from using a lockfile.
With this PR, `install` will actually resolve the lockfile (or create
one if not present), and update it if it's not up-to-date. This also
speeds up `deno install`, as we can use the lockfile to skip work during
npm resolution.
This PR removes the use of the custom `utc_now` function in favor of the
`chrono` implementation. It resolves #22864.
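A trivial sketch of the swap (illustrative only; the hand-rolled version shown is not the actual removed code):
```rust
// Illustrative only: a hand-rolled equivalent of what chrono already provides.
fn custom_utc_now() -> chrono::DateTime<chrono::Utc> {
  let ts = std::time::SystemTime::now()
    .duration_since(std::time::UNIX_EPOCH)
    .expect("system clock before UNIX epoch");
  chrono::DateTime::from_timestamp(ts.as_secs() as i64, ts.subsec_nanos())
    .expect("valid timestamp")
}

// Call sites can instead use chrono directly:
fn utc_now() -> chrono::DateTime<chrono::Utc> {
  chrono::Utc::now()
}
```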
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
While investigating poor cold start performance on my GCP VM (32 cores,
130GB SSD), I found that writing to the various sqlite databases in
DENO_DIR was quite slow. The slowness seems to primarily be caused by
excessive latency from a number of `fsync()` calls.
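For context, the usual sqlite knob for this is the `synchronous` pragma; a generic sketch assuming the `rusqlite` crate (not necessarily what this PR does):
```rust
// Generic sketch: relax durability so sqlite stops issuing an fsync() after
// every commit; for a rebuildable cache that's an acceptable trade-off.
fn open_cache_db(path: &std::path::Path) -> rusqlite::Result<rusqlite::Connection> {
  let conn = rusqlite::Connection::open(path)?;
  conn.pragma_update(None, "synchronous", "NORMAL")?;
  Ok(conn)
}
```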
The performance difference is best demonstrated by deleting the sqlite
databases from DENO_DIR while leaving the downloaded sources in place.
The benchmark (see notes below):
```
piscisaureus@bert-us:~/erofs/source$ export DENO_DIR=./.deno
piscisaureus@bert-us:~/erofs/source$ hyperfine --warmup 3 \
--prepare "rm -rf .deno/*_v1*" \
"deno run -A --cached-only demo.ts" \
"eatmydata deno run -A --cached-only demo.ts" \
"~/deno/target/release/deno run -A --cached-only demo.ts"
Benchmark 1: deno run -A --cached-only demo.ts
Time (mean ± σ): 1.174 s ± 0.037 s [User: 0.153 s, System: 0.184 s]
Range (min … max): 1.104 s … 1.212 s 10 runs
Benchmark 2: eatmydata deno run -A --cached-only demo.ts
Time (mean ± σ): 265.5 ms ± 3.6 ms [User: 138.5 ms, System: 135.1 ms]
Range (min … max): 260.6 ms … 271.2 ms 11 runs
Benchmark 3: ~/deno/target/release/deno run -A --cached-only demo.ts
Time (mean ± σ): 226.2 ms ± 9.2 ms [User: 136.7 ms, System: 93.3 ms]
Range (min … max): 218.8 ms … 247.1 ms 13 runs
Summary
~/deno/target/release/deno run -A --cached-only demo.ts ran
1.17 ± 0.05 times faster than eatmydata deno run -A --cached-only demo.ts
5.19 ± 0.27 times faster than deno run -A --cached-only demo.ts
```
Notes:
* Benchmark 1: unmodified Deno 1.43.6
* Benchmark 2: unmodified Deno 1.43.6 wrapped with `eatmydata` (which is
a tool to neuter `fsync()` calls)
* Benchmark 3: this PR applied on top of Deno 1.43.6
The script that got benchmarked:
```typescript
// demo.ts
import * as express from "npm:express@4.16.3";
import * as postgres from "https://deno.land/x/postgres/mod.ts";
let _dummy = [express, postgres]; // Force use of imports.
console.log("hello world");
```
This is a primordialization effort to improve resistance against users
tampering with the global `Object` prototype.
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
By default, this uses a 60 second timeout, backing off 2x each time (it can be
overridden using the hidden `DENO_SLOW_TEST_TIMEOUT` env var, which we
implement really only for spec testing).
```
Deno.test(async function test() {
  await new Promise(r => setTimeout(r, 130_000));
});
```
```
$ target/debug/deno test /tmp/test_slow.ts
Check file:///tmp/test_slow.ts
running 1 test from ../../../../../../tmp/test_slow.ts
test ...'test' is running very slowly (1m0s)
'test' is running very slowly (2m0s)
ok (2m10s)
ok | 1 passed | 0 failed (2m10s)
```
---------
Signed-off-by: Matt Mastracci <matthew@mastracci.com>
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This brings in [`runtimelib`](https://github.com/runtimed/runtimed) to
use:
## Fully typed structs for Jupyter Messages
```rust
let msg = connection.read().await?;
self
  .send_iopub(
    runtimelib::Status::busy().as_child_of(msg),
  )
  .await?;
```
## Jupyter paths
Jupyter paths are implemented in Rust, allowing the Deno kernel to be
installed completely via Deno without a requirement on Python or
Jupyter. Deno users will be able to install and use the kernel with just
VS Code or other editors that support Jupyter.
```rust
pub fn status() -> Result<(), AnyError> {
  let user_data_dir = user_data_dir()?;
  let kernel_spec_dir_path = user_data_dir.join("kernels").join("deno");
  let kernel_spec_path = kernel_spec_dir_path.join("kernel.json");
  if kernel_spec_path.exists() {
    log::info!("✅ Deno kernel already installed");
    Ok(())
  } else {
    log::warn!("ℹ️ Deno kernel is not yet installed, run `deno jupyter --install` to set it up");
    Ok(())
  }
}
```
Closes https://github.com/denoland/deno/issues/21619
https://github.com/denoland/deno/pull/23838 might accidentally disable
import assertions support because of V8 12.6 unshipping it, but we want
import assertions to be supported until Deno 2.
Construct a new module graph container for workers instead of sharing it
with the main worker.
Fixes #17248
Fixes #23461
---------
Co-authored-by: David Sherret <dsherret@gmail.com>
Related: https://github.com/denoland/eszip/pull/181
eszip < v0.69.0 hashes all its contents to ensure data integrity. This
feature is not necessary in Deno CLI as the binary integrity guarantee
is deemed an external responsibility (i.e. it is to be assumed that, if
necessary, the compiled binary will be checksummed externally prior to
being executed).
eszip >= v0.69.0 no longer performs this checksum by default. This
reduces the cold-start time of the compiled binaries, proportionally to
their size.
VS Code will typically send a `textDocument/semanticTokens/full` request
followed by `textDocument/semanticTokens/range`, and occasionally
requests semantic tokens even when we know nothing has changed. Semantic
tokens also get refreshed on each change. Computing semantic tokens is
relatively heavy in TSC, so we should avoid it as much as possible.
This caches the semantic tokens for open documents, to avoid making TSC do
unnecessary work. Results in a noticeable improvement in local
benchmarking.
before:
```
Starting Deno benchmark
-> Start benchmarking lsp
- Simple Startup/Shutdown
(10 runs, mean: 383ms)
- Big Document/Several Edits
(5 runs, mean: 1079ms)
- Find/Replace
(10 runs, mean: 59ms)
- Code Lens
(10 runs, mean: 440ms)
- deco-cx/apps Multiple Edits + Navigation
(5 runs, mean: 9921ms)
<- End benchmarking lsp
```
after:
```
Starting Deno benchmark
-> Start benchmarking lsp
- Simple Startup/Shutdown
(10 runs, mean: 395ms)
- Big Document/Several Edits
(5 runs, mean: 1024ms)
- Find/Replace
(10 runs, mean: 56ms)
- Code Lens
(10 runs, mean: 438ms)
- deco-cx/apps Multiple Edits + Navigation
(5 runs, mean: 8927ms)
<- End benchmarking lsp
```
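A hypothetical sketch of the caching idea (made-up types, not the actual LSP internals): key computed tokens by document URI and version, and only fall back to the expensive TSC request on a miss, so a follow-up request for an unchanged document doesn't trigger another TSC call.
```rust
use std::collections::HashMap;

// Hypothetical cache: (document uri, version) -> encoded semantic token data.
#[derive(Default)]
struct SemanticTokensCache {
  entries: HashMap<(String, i32), Vec<u32>>,
}

impl SemanticTokensCache {
  fn get_or_compute(
    &mut self,
    uri: &str,
    version: i32,
    compute: impl FnOnce() -> Vec<u32>, // the expensive TSC request
  ) -> Vec<u32> {
    self
      .entries
      .entry((uri.to_string(), version))
      .or_insert_with(compute)
      .clone()
  }
}
```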
This PR directly addresses the issue raised in #23282 where Deno panics
if `deno coverage` is called with `--include` regex that returns no
matches.
I've opted not to change the return value of `collect_summary` for
simplicity and return an empty `HashMap` instead
Moves sloppy import resolution from the loader to the resolver.
Also adds some test helper functions to make the LSP tests less verbose.
---------
Co-authored-by: David Sherret <dsherret@gmail.com>
1. Generally we should prefer to use the `log` crate.
2. I very often accidentally commit `eprintln`s.
When we should use `println` or `eprintln`, it's not too bad to be a bit
more verbose and ignore the lint rule.
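A small sketch of the preferred pattern (assuming the lint in question is clippy's `print_stdout`/`print_stderr` restriction lints):
```rust
// Routine diagnostics go through the `log` crate so they respect log levels:
fn warn_missing_lockfile(path: &std::path::Path) {
  log::warn!("lockfile not found at {}", path.display());
}

// Output genuinely meant for the terminal opts out of the lint explicitly:
#[allow(clippy::print_stderr)]
fn print_fatal(message: &str) {
  eprintln!("error: {message}");
}
```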
**THIS PR HAS GIT CONFLICTS THAT MUST BE RESOLVED**
This is the release commit being forwarded back to main for 1.43.2
Please ensure:
- [x] Everything looks ok in the PR
- [x] The release has been published
To make edits to this PR:
```shell
git fetch upstream forward_v1.43.2 && git checkout -b forward_v1.43.2 upstream/forward_v1.43.2
```
Don't need this PR? Close it.
cc @nathanwhit
Co-authored-by: nathanwhit <nathanwhit@users.noreply.github.com>
Co-authored-by: Nathan Whitaker <nathan@deno.com>
This PR implements the changes we plan to make to `deno install` in deno
2.0.
- `deno install` without arguments caches dependencies from
`package.json` / `deno.json` and sets up the `node_modules` folder
- `deno install <pkg>` adds the package to the config file (either
`package.json` or `deno.json`), i.e. it aliases `deno add`
- `deno add` can also add deps to `package.json` (this is gated behind
`DENO_FUTURE` due to uncertainty around handling projects with both
`deno.json` and `package.json`)
- `deno install -g <bin>` installs a package as a globally available
binary (the same as `deno install <bin>` in 1.0)
---------
Co-authored-by: Nathan Whitaker <nathan@deno.com>
Fixes the `Debug Failure` errors described in
https://github.com/denoland/deno/issues/23643#issuecomment-2094552765 .
The issue here was that we were passing diagnostic codes as strings but
TSC expects the codes to be numbers. This resulted in some quick fixes
not working (as illustrated by the test added here which fails before
this PR).
The first commit is the actual fix. The rest are just test related.
Fixes #23643.
We weren't catching the cancellation exception thrown by TSC on the JS
side, so the Rust side was catching this exception and then attempting
to print out the exception via `toString`. That last bit resulted in a
cryptic `[object Object]` showing up in the logs like so:
```
Error during TS request "getCompletionEntryDetails":
[object Object]
```
I'm not 100% sure how we weren't seeing this in the past. My guess is
that #23409 and the subsequent PR to improve the exception catching and
logging surfaced this, but I'm still not quite clear on it.
My initial fix here returned `null` to Rust when a server request was
cancelled, but this resulted in a deserialization error when we
attempted to deserialize that into the expected response type. So now,
as soon as the request's cancellation token signals, we'll stop waiting
for a response and return an error (which will get swallowed as the LSP
request is being cancelled).
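A sketch of that shape (assuming tokio and tokio_util's `CancellationToken`; made-up types, not the actual `tsc.rs` code):
```rust
// Stop waiting for TSC as soon as the request's cancellation token signals,
// and surface an error instead of a response.
async fn wait_for_response(
  rx: tokio::sync::oneshot::Receiver<String>,
  token: tokio_util::sync::CancellationToken,
) -> Result<String, anyhow::Error> {
  tokio::select! {
    _ = token.cancelled() => Err(anyhow::anyhow!("request cancelled")),
    res = rx => Ok(res?),
  }
}
```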
I was a bit surprised to find that [this
branch](0c671c9792/cli/lsp/tsc.rs (L1093))
actually executes sometimes, I believe due to the fact that aborting a
future may not [immediately stop its
execution](https://docs.rs/futures/latest/futures/stream/struct.AbortHandle.html#method.abort).
This makes `create_runtime_snapshot` more useful for embedders who add
their own extension(s) to the runtime in build scripts.
---------
Signed-off-by: Matt Mastracci <matthew@mastracci.com>
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
Before this PR, there would just be an uninformative "Error occurred"
message; after this PR you'll get a stack trace in the LSP output window
like this:
```text
Error during TS request "$getSupportedCodeFixes":
Error: i threw an exception
at serverRequest (ext:deno_tsc/99_main_compiler.js:1089:11)
```
By default, `deno serve` will assign port 8000 (like `Deno.serve`).
Users may choose a different port using `--port`.
`deno serve /tmp/file.ts`
`server.ts`:
```ts
export default {
  fetch(req) {
    return new Response("hello world!\n");
  },
};
```
When the response has been successfully sent, we abort the
`Request.signal` to indicate that all resources associated with
this transaction may be torn down.
This commit changes the workspace support to provide all workspace
members to be available as imports based on their names and versions.
Closes https://github.com/denoland/deno/issues/23343
This PR wires up a new `jsxPrecompileSkipElements` option in
`compilerOptions` that can be used to exempt a list of elements from
being precompiled with the `precompile` JSX transform.
The actual handling of `$projectChanged` is quick, but JS requests are
not. The cleared caches only get repopulated on the next actual request,
so we just batch the change notification in with the next actual request.
No significant difference in benchmarks on my machine, but this speeds
up `did_change` handling and reduces our total number of JS requests (in
addition to coalescing multiple JS change notifs into one).
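A hypothetical sketch of the batching (made-up types): remember that a change is pending and fold the notification into whatever JS request goes out next.
```rust
// Instead of firing a $projectChanged JS request immediately, record the
// pending change and send it along with the next real TSC request.
#[derive(Default)]
struct PendingChanges {
  project_changed: bool,
}

impl PendingChanges {
  fn mark_changed(&mut self) {
    // Multiple change notifications coalesce into a single flag.
    self.project_changed = true;
  }

  // Called right before dispatching any TSC request; returns whether the
  // batched change notification needs to be sent along with it.
  fn take(&mut self) -> bool {
    std::mem::take(&mut self.project_changed)
  }
}
```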