Moves sloppy import resolution from the loader to the resolver.
Also adds some test helper functions to make the LSP tests less verbose.
---------
Co-authored-by: David Sherret <dsherret@gmail.com>
Fixes the `Debug Failure` errors described in
https://github.com/denoland/deno/issues/23643#issuecomment-2094552765.
The issue here was that we were passing diagnostic codes as strings but
TSC expects the codes to be numbers. This resulted in some quick fixes
not working (as illustrated by the test added here which fails before
this PR).
The first commit is the actual fix. The rest are just test related.
Fixes #23643.
We weren't catching the cancellation exception thrown by TSC on the JS
side, so the Rust side was catching this exception and then attempting
to print out the exception via `toString`. That last bit resulted in a
cryptic `[object Object]` showing up in the logs like so:
```
Error during TS request "getCompletionEntryDetails":
[object Object]
```
I'm not 100% sure how we weren't seeing this in the past. My guess is
that #23409 and the subsequent PR to improve the exception catching and
logging surfaced this, but I'm still not quite clear on it.
My initial fix here returned `null` to Rust when a server request was
cancelled, but this resulted in a deserialization error when we
attempted to deserialize that into the expected response type. So now,
as soon as the request's cancellation token signals, we'll stop waiting
for a response and return an error (which will get swallowed as the LSP
request is being cancelled).
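Roughly, the waiting side now looks like this (a minimal sketch with hypothetical types, using `tokio_util::sync::CancellationToken`; not the actual `tsc.rs` code):

```rust
use tokio_util::sync::CancellationToken;

// Hypothetical stand-ins for the real response/error types.
struct TscResponse(String);
struct RequestCancelled;

/// Wait for the TS host's reply, but stop as soon as the request's
/// cancellation token fires instead of waiting for a response that
/// may never come.
async fn wait_for_response(
    response_rx: tokio::sync::oneshot::Receiver<TscResponse>,
    token: CancellationToken,
) -> Result<TscResponse, RequestCancelled> {
    tokio::select! {
        // The LSP request was cancelled: return an error, which the
        // caller swallows because the request is going away anyway.
        _ = token.cancelled() => Err(RequestCancelled),
        // Normal path: the JS side sent back a response.
        res = response_rx => res.map_err(|_| RequestCancelled),
    }
}
```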
I was a bit surprised to find that [this
branch](0c671c9792/cli/lsp/tsc.rs (L1093))
actually executes sometimes, I believe due to the fact that aborting a
future may not [immediately stop its
execution](https://docs.rs/futures/latest/futures/stream/struct.AbortHandle.html#method.abort).
Before this PR, there would just be an uninformative "Error occurred"
message; after this PR, you'll get a stack trace in the LSP output
window like this:
```text
Error during TS request "$getSupportedCodeFixes":
Error: i threw an exception
at serverRequest (ext:deno_tsc/99_main_compiler.js:1089:11)
```
The actual handling of `$projectChanged` is quick, but JS requests are
not. The cleared caches only get repopulated on the next actual request,
so just batch the change notification in with the next actual request.
No significant difference in benchmarks on my machine, but this speeds
up `did_change` handling and reduces our total number of JS requests (in
addition to coalescing multiple JS change notifications into one).
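The idea, roughly (hypothetical types, not the real request plumbing): `$projectChanged` notifications just get buffered and are drained into whatever JS request goes out next:

```rust
use std::sync::Mutex;

// Hypothetical stand-ins for the real types.
struct ProjectChange(String);
struct JsRequest {
    method: &'static str,
    pending_changes: Vec<ProjectChange>,
}

#[derive(Default)]
struct TsServer {
    // Changes recorded during `did_change` handling; nothing is sent yet.
    pending_changes: Mutex<Vec<ProjectChange>>,
}

impl TsServer {
    /// Cheap: just record the change instead of issuing a JS request.
    fn project_changed(&self, change: ProjectChange) {
        self.pending_changes.lock().unwrap().push(change);
    }

    /// Piggyback every buffered change notification onto the next real
    /// request, so the JS side repopulates its caches right before it
    /// actually needs them.
    fn build_request(&self, method: &'static str) -> JsRequest {
        let pending_changes =
            std::mem::take(&mut *self.pending_changes.lock().unwrap());
        JsRequest { method, pending_changes }
    }
}
```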
This PR enables V8 code cache for ES modules and for `require` scripts
through `op_eval_context`. Code cache artifacts are transparently stored
and fetched using a SQLite database and are passed to V8. `--no-code-cache` can
be used to disable.
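The storage side is conceptually just a lookup/insert keyed by specifier and source hash; a rough sketch using `rusqlite` (the table and column names here are made up, not the actual schema):

```rust
use rusqlite::{params, Connection, OptionalExtension};

/// Fetch a previously stored V8 code cache blob for a module, assuming a
/// `codecache` table keyed by (specifier, source_hash).
fn get_code_cache(
    conn: &Connection,
    specifier: &str,
    source_hash: u64,
) -> rusqlite::Result<Option<Vec<u8>>> {
    conn.query_row(
        "SELECT data FROM codecache WHERE specifier = ?1 AND source_hash = ?2",
        params![specifier, source_hash as i64],
        |row| row.get(0),
    )
    .optional()
}

/// Store the code cache blob V8 produced after compiling a module.
fn set_code_cache(
    conn: &Connection,
    specifier: &str,
    source_hash: u64,
    data: &[u8],
) -> rusqlite::Result<()> {
    conn.execute(
        "INSERT OR REPLACE INTO codecache (specifier, source_hash, data)
         VALUES (?1, ?2, ?3)",
        params![specifier, source_hash as i64, data],
    )?;
    Ok(())
}
```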
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
I'm running into a node resolution bug in the LSP only, and while
tracking it down I noticed this one.
Fixed by moving the project version out of `Documents`.
Fixes the regression described in
https://github.com/denoland/deno/pull/23293#issuecomment-2049819724.
This affected Jupyter notebooks, as the LSP was passing in already
denormalized specifiers, while the Jupyter kernel was not. We need to
denormalize the specifiers to evict the proper keys from our caches.
Currently we evict a lot of the caches on the JS side of things on every
request, namely script versions, script file names, and compiler
settings (as of #23283, it's not quite every request but it's still
unnecessarily often).
This PR reports changes to the JS side, so that it can evict exactly the
caches that it needs to. We might want to do some batching in the
future so as not to do 1 request per change.
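Conceptually, the notification carries just enough information for the JS side to know which cache entries to drop (hypothetical shape; the real op interface may differ):

```rust
// Hypothetical payload for the change notification.
#[derive(Clone, Copy)]
enum ChangeKind {
    Opened,
    Modified,
    Closed,
}

struct ScriptChange {
    /// Which document changed.
    specifier: String,
    /// How it changed, so the JS side can decide whether to evict the
    /// cached script version for this document, the script file name
    /// list, or both, instead of clearing everything.
    kind: ChangeKind,
}
```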
Previously we locked the entire `FileSystemDocuments` even for lookups,
causing contention. This was particularly bad because some of the hot
ops (namely `op_resolve`) can end up hitting that lock under contention.
This PR replaces the mutex with synchronization internal to
`FileSystemDocuments` (an `AtomicBool` for the dirty flag, and then a
`DashMap` for the actual documents).
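A sketch of the shape this takes (field names are hypothetical): reads go through `DashMap`'s per-entry locking and the dirty flag becomes a plain atomic load/store:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

use dashmap::DashMap;
use url::Url;

// Hypothetical document type standing in for the real one.
struct Document;

struct FileSystemDocuments {
    /// Readers only need an atomic load to check staleness instead of
    /// taking a lock around the whole collection.
    dirty: AtomicBool,
    /// Per-entry locking, so hot lookups (e.g. from `op_resolve`) don't
    /// contend with writers.
    docs: DashMap<Url, Arc<Document>>,
}

impl FileSystemDocuments {
    fn get(&self, specifier: &Url) -> Option<Arc<Document>> {
        self.docs.get(specifier).map(|doc| doc.value().clone())
    }

    fn set(&self, specifier: Url, doc: Arc<Document>) {
        self.docs.insert(specifier, doc);
        self.dirty.store(true, Ordering::SeqCst);
    }

    fn is_dirty(&self) -> bool {
        self.dirty.load(Ordering::SeqCst)
    }
}
```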
I need to think a bit more about whether or not this introduces any
problematic race conditions.
This functionality was broken. The series of events was:
1. Load the npm resolution from the lockfile.
2. Discover only a subset of the specifiers in the documents.
3. Clear the npm snapshot.
4. Redo npm resolution with the new specifiers (~500ms).
What this now does:
1. Load the npm resolution from the lockfile.
2. Discover only a subset of the specifiers in the documents and take
into account the specifiers from the lockfile.
3. Do not redo resolution (~1ms).
Fixes #23163.
The client-facing warning doesn't provide any value and is super
annoying. We still emit a warning message on the server side for format
errors, which should fulfill the same (less intrusive) purpose.
Was investigating a separate stack overflow (that I've now found in the
node resolution code) and came across this. We should avoid recursion
(this is very old code).
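The general shape of that kind of fix, shown on a generic tree type rather than the actual code touched here: replace the recursive walk with an explicit stack so deep inputs can't overflow the call stack.

```rust
struct Node {
    value: u32,
    children: Vec<Node>,
}

/// Iterative traversal: a heap-allocated stack of work items replaces
/// call-stack frames, so arbitrarily deep trees can't overflow the stack.
fn sum_values(root: &Node) -> u64 {
    let mut total = 0u64;
    let mut stack = vec![root];
    while let Some(node) = stack.pop() {
        total += u64::from(node.value);
        for child in &node.children {
            stack.push(child);
        }
    }
    total
}
```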
1. Stops `deno publish` from using custom include/exclude behaviour that
differed from the other subcommands
2. Takes ancestor directories into account when resolving gitignore
3. Backwards compatible change that adds the ability to unexclude an
exclude by using a negated glob at a more specific level for all subcommands
(see https://github.com/denoland/deno_config/pull/44).
The diagnostic was incorrect when importing a `.js` file with a
corresponding `.d.ts` file with sloppy imports because it would say to
change the `.js` extension to `.d.ts`, which is incorrect. We might as
well just hide this diagnostic.
This commit adds "deno add" subcommand that has a basic support for
adding "jsr:" packages to "deno.json" file.
This currently doesn't support "npm:" specifiers and specifying version
constraints.
Some `deno_std` tests were failing to print output that was resolved
after the last test finished. In addition, output printed before tests
began would sometimes appear above the "running X tests ..." line, and
sometimes below it depending on timing.
We now guarantee that all output is flushed before and after tests run,
making the output consistent.
Pre-test and post-test output are captured in `------ pre-test output
------` and `------ post-test output ------` blocks to differentiate
them from the regular output blocks.
Here's an example of a test that is much noisier than normal, but shows
what the output will look like:
```
Check ./load_unload.ts
------- pre-test output -------
load
----- output end -----
running 1 test from ./load_unload.ts
test ...
------- output -------
test
----- output end -----
test ... ok ([WILDCARD])
------- post-test output -------
unload
----- output end -----
```
As we add tracing to more types of runtime activity, `--trace-ops`
becomes a less accurate name. `--trace-leaks` better reflects that this feature
traces both ops and timers, and will eventually trace resource opening
as well.
This keeps `--trace-ops` as an alias for `--trace-leaks`, but prints a
warning to the console suggesting migration to `--trace-leaks`.
One test continues to use `--trace-ops` to test the deprecation warning.
---------
Signed-off-by: Matt Mastracci <matthew@mastracci.com>
- Removes the origin call, since all origins are the same for an isolate
(i.e. the main module)
- Collects the `TestDescription`s and sends them all at the same time
inside of an Arc, allowing us to (later on) re-use these instead of
cloning.
Needs a follow-up pass to remove all the cloning, but that's a thread
that is pretty long to pull
---------
Signed-off-by: Matt Mastracci <matthew@mastracci.com>
Gets us closer to solving #20707.
Rewrites the `TestEventSender`:
- Allow for explicit creation of multiple streams. This will allow for
one-std{out,err}-per-worker
- All test events are received along with a worker ID, allowing for
eventual, proper parallel threading of test events.
In theory this should open up proper interleaving of test output,
however that is left for a future PR.
I had some plans for a better performing synchronization primitive, but
the inter-thread communication is tricky. This does, however, speed up
the processing of large numbers of tests by 15-25% (possibly even more
with 100,000+ tests).
Before
```
ok | 1000 passed | 0 failed (32ms)
ok | 10000 passed | 0 failed (276ms)
```
After
```
ok | 1000 passed | 0 failed (25ms)
ok | 10000 passed | 0 failed (230ms)
```
1. Renames zap/fast-check to instead be a `no-slow-types` lint rule.
2. This lint rule is automatically run when doing `deno lint` for
packages (deno.json files with a name, version, and exports field).
3. This lint rule still runs on publish. It can be skipped by running
with `--no-slow-types`.
This changes the lockfile to not store JSR specifiers in the "remote"
section. Instead a single JSR integrity is stored per package in the
lockfile, which is a hash of the version's `x.x.x_meta.json` file, which
contains hashes for every file in the package. The hashes in this file
are then compared against when loading.
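The check is conceptually a hash comparison against the lockfile entry; a sketch using the `sha2` and `hex` crates (the exact hash algorithm and encoding here are assumptions):

```rust
use sha2::{Digest, Sha256};

/// Compare the downloaded `x.x.x_meta.json` bytes against the integrity
/// value recorded in the lockfile for that package version.
fn verify_meta_integrity(
    meta_json_bytes: &[u8],
    expected_hex: &str,
) -> Result<(), String> {
    let actual_hex = hex::encode(Sha256::digest(meta_json_bytes));
    if actual_hex == expected_hex {
        Ok(())
    } else {
        Err(format!(
            "integrity mismatch: expected {expected_hex}, got {actual_hex}"
        ))
    }
}
```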
Additionally, when using `{ "vendor": true }` in a deno.json, the files
can be modified without causing lockfile errors—the checksum is only
checked when copying into the vendor folder and not afterwards
(eventually we should add this behaviour for non-jsr specifiers as
well). As part of this change, the `vendor` folder creation is not
always automatic in the LSP and running an explicit cache command is
necessary. The code required to track checksums in the LSP would have
been too complex for this PR, so that all goes through deno_graph now.
The vendoring is still automatic when running from the CLI.
This implementation heavily depends on there being a lockfile, meaning
JSR specifiers will always diagnose as uncached unless it's there. In
practice this affects cases where a `deno.json` isn't being used. Our
NPM specifier support isn't subject to this.
The reason for this is that the version constraint solving code is
currently buried in `deno_graph` and not usable from the LSP, so the
only way to reuse that logic is the solved-version map in the lockfile's
`packages.specifiers`.
Step 1 of the Rustification of sanitizers, which unblocks the faster
timers.
This replaces the resource sanitizer with a Rust one, using the new APIs
in deno_core.
This commit removes conditional type-checking of unstable APIs.
Before this commit `deno check` (or any other type-checking command and
the LSP) would error out if there was an unstable API in the code but no
`--unstable` flag was provided.
This situation hinders DX and makes it harder to configure Deno. Failing
during runtime unless the `--unstable` flag is provided is enough in this case.
Ensures the Deno process is brought down even when the runtime gets hung
up on something.
Marvin found that the LSP was running without a parent VS Code process
around, so this is maybe/probably related.
We were calling `expand_glob` on our excludes, which is very expensive
and unnecessary because we can pattern match while traversing instead.
1. Doesn't expand "exclude" globs. Instead pattern matches while walking
the directory.
2. Splits up the "include" into base paths and applicable file patterns.
This causes less pattern matching to occur because we're only pattern
matching on patterns that might match and not ones in completely
unrelated directories.
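Conceptually, each include glob gets split into a literal base directory to start walking from and the pattern that paths under it must match (a hypothetical helper, not the actual walker code):

```rust
use std::path::PathBuf;

/// Split an include glob like `src/**/*.ts` into the literal directory
/// prefix to walk (`src`) and the remaining pattern to match against
/// paths found under it.
fn split_include(glob: &str) -> (PathBuf, String) {
    let mut base = PathBuf::new();
    let mut rest = Vec::new();
    let mut in_pattern = false;
    for part in glob.split('/') {
        if !in_pattern && !part.contains(|c: char| matches!(c, '*' | '?' | '[')) {
            base.push(part);
        } else {
            in_pattern = true;
            rest.push(part);
        }
    }
    (base, rest.join("/"))
}

fn main() {
    let (base, pattern) = split_include("src/**/*.ts");
    assert_eq!(base, PathBuf::from("src"));
    assert_eq!(pattern, "**/*.ts");
    // The walker starts only from `src` and only tests `**/*.ts` against
    // paths under it, rather than testing every pattern against every
    // directory in the project.
}
```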
This commit adds a way to connect to the TS compiler host that is run
as part of the "deno lsp" subcommand. This can be done by specifying
"DENO_LSP_INSPECTOR" variable.
---------
Co-authored-by: Nayeem Rahman <nayeemrmn99@gmail.com>
Adds performance measurements for all ops used by the LSP. Also changes
the output of the "Language server status" page to include more precise
information.
Current suspicion is that computing "script version" takes a long time
for some users.
Adds an `--unstable-sloppy-imports` flag which supports the
following for `file:` specifiers:
* Allows writing `./mod` in a specifier to do extension probing.
- ex. `import { Example } from "./example"` instead of `import { Example
} from "./example.ts"`
* Allows writing `./routes` to do directory extension probing for files
like `./routes/index.ts`
* Allows writing `./mod.js` for *mod.ts* files.
This functionality is **NOT RECOMMENDED** for general use with Deno:
1. It's not as optimal for perf:
https://marvinh.dev/blog/speeding-up-javascript-ecosystem-part-2/
2. It makes tooling in the ecosystem more complex because it has to
understand this behaviour.
3. The "Deno way" is to be explicit about what you're doing. It's better
in the long run.
4. It doesn't work if published to the Deno registry because doing stuff
like extension probing with remote specifiers would be incredibly slow.
This is instead only recommended to help with migrating existing
projects to Deno. For example, it's very useful for getting CJS projects
written with import/export declarations working in Deno without modifying
module specifiers and for supporting TS ESM projects written with
`./mod.js` specifiers.
This feature will output warnings to guide the user towards correcting
their specifiers. Additionally, quick fixes are provided in the LSP to
update these specifiers.
This PR causes Deno to include more files in the graph based on how a
template literal looks that's provided to a dynamic import:
```ts
const file = await import(`./dir/${expr}`);
```
In this case, it will search the `dir` directory and descendant
directories for any .js/jsx/etc modules and include them in the graph.
To opt out of this behaviour, move the template literal to a separate
line:
```ts
const specifier = `./dir/${expr}`
const file = await import(specifier);
```
This commit changes LSP log names by prefixing them, we now have these
prefixes:
- `lsp.*` - requests coming from the client
- `tsc.request.*` - requests coming from clients that are routed to TSC
- `tsc.op.*` - ops called by the TS host
- `tsc.host.*` - requests that call into the JavaScript runtime that runs
the TypeScript compiler host
Additionally, `Performance::mark` was split into `Performance::mark` and
`Performance::mark_with_args` to reduce verbosity of code and logs.
When an old request is obsoleted while the user is typing, the client
will say so to the server and tower-lsp will drop the future associated
with that request.
This wires that up to the ts server by having any request's token be
cancelled when the surrounding state is dropped.
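The wiring is essentially drop-based; a sketch with `tokio_util::sync::CancellationToken` and a hypothetical per-request state struct:

```rust
use tokio_util::sync::CancellationToken;

/// Hypothetical per-request state that lives as long as the LSP request's
/// future does.
struct RequestState {
    token: CancellationToken,
}

impl Drop for RequestState {
    fn drop(&mut self) {
        // When tower-lsp drops the obsoleted request's future, this state
        // is dropped with it and any pending TS request observing the
        // token (via `token.cancelled()`) stops waiting.
        self.token.cancel();
    }
}
```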
This PR adds a new unstable "bring your own node_modules" (BYONM)
functionality currently behind a `--unstable-byonm` flag (`"unstable":
["byonm"]` in a deno.json).
This enables users to run a separate install command (ex. `npm install`,
`pnpm install`) then run `deno run main.ts` and Deno will respect the
layout of the node_modules directory as set up by the separate install
command. It also works with npm/yarn/pnpm workspaces.
For this PR, the behaviour is opted into by specifying
`--unstable-byonm`/`"unstable": ["byonm"]`, but in the future we may
make this the default behaviour as outlined in
https://github.com/denoland/deno/issues/18967#issuecomment-1761248941
This is an extremely rough initial implementation. Errors are
terrible in this and the LSP requires frequent restarts. Improvements
will be done in follow up PRs.