Rather than disallowing `ext:` resolution, clear the module map after
initializing extensions so that extension modules are anonymized. This
operation is called explicitly in `deno_runtime`. After clearing, `node:`
specifiers are re-injected into the module map.
Fixes #17717.
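A rough illustration of the clear-and-reinject step, using a plain `HashMap` stand-in rather than `deno_core`'s actual module map type:
```
// Illustrative only: everything except the `node:` entries loses its
// specifier, so `ext:` modules become unresolvable (anonymized) while
// `node:` modules stay usable.
use std::collections::HashMap;

fn clear_module_map(map: &mut HashMap<String, usize>) {
    let node_entries: Vec<(String, usize)> = map
        .iter()
        .filter(|(specifier, _)| specifier.starts_with("node:"))
        .map(|(specifier, id)| (specifier.clone(), *id))
        .collect();
    map.clear();
    // Re-inject the `node:` specifiers so they remain resolvable.
    map.extend(node_entries);
}
```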
This adds support for the lockfile and node_modules directory to the
lsp.
In the case of the node_modules directory, it is only enabled when
explicitly opted into via `"nodeModulesDir": true` in the configuration
file. This is to avoid having the language server automatically modify the
node_modules directory when the user doesn't want it to.
Closes #16510
Closes #16373
Note: If the package information has already been cached, this requires
running with `--reload` or having the registry information fetched some
other way (ex. cache busting).
Closes #15544
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Partially supersedes #19016.
This migrates `spawn` and `spawn_blocking` to `deno_core`, and removes
the requirement for `spawn` tasks to be `Send` given our single-threaded
executor.
While we technically don't need to do anything with `spawn_blocking`, this
allows us to have a single `JoinHandle` type that works for both cases,
and allows us to more easily experiment with alternative
`spawn_blocking` implementations that do not require tokio (e.g. rayon).
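A minimal sketch of why dropping the `Send` bound is sound on a single-threaded executor (plain tokio, not `deno_core`'s actual wrappers; assumes `tokio = { version = "1", features = ["rt"] }`):
```
use std::rc::Rc;

fn main() {
    let rt = tokio::runtime::Builder::new_current_thread()
        .build()
        .unwrap();
    let local = tokio::task::LocalSet::new();
    local.block_on(&rt, async {
        // `Rc` is !Send, so this future can't go through a multi-threaded
        // `spawn`, but a current-thread executor never moves it across threads.
        let data = Rc::new(42);
        let task = tokio::task::spawn_local(async move { *data + 1 });

        // Blocking work still runs on a separate thread pool, yet both calls
        // hand back the same `tokio::task::JoinHandle` type.
        let blocking = tokio::task::spawn_blocking(|| 2 + 2);

        assert_eq!(task.await.unwrap(), 43);
        assert_eq!(blocking.await.unwrap(), 4);
    });
}
```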
Async ops (+~35%):
Before:
```
time 1310 ms rate 763358
time 1267 ms rate 789265
time 1259 ms rate 794281
time 1266 ms rate 789889
```
After:
```
time 956 ms rate 1046025
time 954 ms rate 1048218
time 924 ms rate 1082251
time 920 ms rate 1086956
```
HTTP serve (+~4.4%):
Before:
```
Running 10s test @ http://localhost:4500
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    68.78us   19.77us   1.43ms   86.84%
    Req/Sec    68.78k     5.00k    73.84k    91.58%
  1381833 requests in 10.10s, 167.36MB read
Requests/sec: 136823.29
Transfer/sec:     16.57MB
```
After:
```
Running 10s test @ http://localhost:4500
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    63.12us   17.43us   1.11ms   85.13%
    Req/Sec    71.82k     3.71k    77.02k    79.21%
  1443195 requests in 10.10s, 174.79MB read
Requests/sec: 142921.99
Transfer/sec:     17.31MB
```
Suggested-By: alice@ryhl.io
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Adds a `deno.preloadLimit` option (ex. `"deno.preloadLimit": 2000`)
which specifies how many file entries to traverse on the file system
when the lsp loads or its configuration changes.
Closes #18955
This is the initial support for npm and node specifiers in `deno
compile`. The npm packages are included in the binary and read from it via
a virtual file system. This also supports the `--node-modules-dir` flag,
dependencies specified in a package.json, and npm binary commands (ex.
`deno compile --unstable npm:cowsay`).
Closes #16632
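A hypothetical sketch of the virtual file system idea (the names here are made up, not the real `deno compile` types): reads consult an in-memory table of files embedded in the binary before falling back to the real disk.
```
use std::collections::HashMap;
use std::path::{Path, PathBuf};

struct VirtualFs {
    // Files (e.g. the bundled node_modules tree) baked into the executable.
    embedded: HashMap<PathBuf, Vec<u8>>,
}

impl VirtualFs {
    fn read(&self, path: &Path) -> std::io::Result<Vec<u8>> {
        if let Some(bytes) = self.embedded.get(path) {
            return Ok(bytes.clone());
        }
        // Anything outside the embedded tree is read from the real file system.
        std::fs::read(path)
    }
}
```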
This removes `ProcState` and replaces it with a new `CliFactory` which
initializes our "service structs" on demand. This isn't a performance
improvement at the moment for `deno run`, but might unlock performance
improvements in the future.
We can make `NodePermissions` rely on interior mutability (which the
`PermissionsContainer` is already doing) in order to not have to clone
everything all the time. This also reduces the chance of an accidental
`borrow` while a `borrow_mut` is active.
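A minimal sketch of the interior mutability shape (a generic `Mutex`-based stand-in, not the actual `NodePermissions`/`PermissionsContainer` code): callers share one handle and every method takes `&self`, so nothing has to be cloned and there is no `RefCell` borrow to get wrong at runtime.
```
use std::collections::HashSet;
use std::sync::{Arc, Mutex};

#[derive(Default)]
struct PermissionsLike {
    granted_reads: Mutex<HashSet<String>>, // mutation is guarded internally
}

impl PermissionsLike {
    fn grant_read(&self, path: &str) {
        // `&self` is enough; the lock provides the mutability.
        self.granted_reads.lock().unwrap().insert(path.to_string());
    }

    fn query_read(&self, path: &str) -> bool {
        self.granted_reads.lock().unwrap().contains(path)
    }
}

fn main() {
    let perms = Arc::new(PermissionsLike::default());
    let shared = Arc::clone(&perms); // cheap handle clone, no data clone
    shared.grant_read("/tmp/a.txt");
    assert!(perms.query_read("/tmp/a.txt"));
}
```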
This is just a straight refactor and I didn't do any cleanup in
ext/node. After this PR we can start to clean it up and make things
private that don't need to be public anymore.
1. Breaks up functionality within `ProcState` into several other structs
in order to separate responsibilities (`ProcState` is now only a data
struct).
2. Moves towards being able to inject dependencies more easily and have
functionality only require what it needs.
3. Exposes `Arc<T>` around the "service structs" instead of embedding it
within them. The idea behind embedding was to reduce the verbosity of
passing around `Arc<...>`, but I don't think it was really working, and
as we make more of these structs injectable I don't think the extra
verbosity will be a big deal.
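A hypothetical sketch of the on-demand "service struct" shape described above (names are illustrative, not the real `CliFactory` API): each service is built the first time it is requested, cached, and handed out as an `Arc<T>`.
```
use std::sync::{Arc, OnceLock};

struct Resolver;
struct ModuleLoader;

#[derive(Default)]
struct FactoryLike {
    resolver: OnceLock<Arc<Resolver>>,
    module_loader: OnceLock<Arc<ModuleLoader>>,
}

impl FactoryLike {
    fn resolver(&self) -> Arc<Resolver> {
        self.resolver.get_or_init(|| Arc::new(Resolver)).clone()
    }

    fn module_loader(&self) -> Arc<ModuleLoader> {
        // A service that needs another service asks the factory for it, so
        // only what a given subcommand actually touches ever gets created.
        let _resolver = self.resolver();
        self.module_loader
            .get_or_init(|| Arc::new(ModuleLoader))
            .clone()
    }
}

fn main() {
    let factory = FactoryLike::default();
    let loader = factory.module_loader(); // lazily builds Resolver + ModuleLoader
    let _loader2 = Arc::clone(&loader);   // passing the Arc around is the verbosity trade-off
}
```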
Stores the test/bench functions in rust op state during registration.
The functions are wrapped in JS first so that they return a directly
convertible `TestResult`/`BenchResult`. Test steps are still mostly
handled in JS since they are pretty much invoked by the user. Allows
removing a bunch of infrastructure for communicating between JS and
rust. Allows using rust utilities for things like shuffling tests
(e.g. `rand`'s `SliceRandom::shuffle` on a `Vec`). We can progressively move op and resource sanitization
to rust as well.
Fixes #17122.
Fixes #17312.
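A plain-Rust sketch of the registration idea above (the real version stores `v8::Global<v8::Function>`s in the op state; this stand-in just shows why keeping the list on the Rust side lets us use utilities like `rand`'s `SliceRandom::shuffle`; assumes a `rand` dependency):
```
use rand::seq::SliceRandom;

struct TestRegistry {
    // In the real implementation these are JS functions registered via an op
    // and wrapped so they return a directly convertible result.
    tests: Vec<(String, Box<dyn Fn() -> bool>)>,
}

impl TestRegistry {
    fn register(&mut self, name: &str, f: impl Fn() -> bool + 'static) {
        self.tests.push((name.to_string(), Box::new(f)));
    }

    fn run_shuffled(&mut self) {
        // Ordering decisions now live in Rust instead of being negotiated
        // with the JS side.
        self.tests.shuffle(&mut rand::thread_rng());
        for (name, test) in &self.tests {
            println!("{name} ... {}", if test() { "ok" } else { "FAILED" });
        }
    }
}

fn main() {
    let mut registry = TestRegistry { tests: Vec::new() };
    registry.register("math adds", || 1 + 1 == 2);
    registry.register("strings compare", || "a" < "b");
    registry.run_shuffled();
}
```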
- bump deps: the newest `lazy-regex` needs newer `once_cell` and
`regex`
- reduce `unwrap` usage
- remove the `lazy_static` dependency
- cache more regexes
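For reference, a small hedged example of the cached-regex pattern that `lazy-regex` enables (the pattern and function here are made up, not ones from this change):
```
use lazy_regex::regex;

fn is_bare_semver(s: &str) -> bool {
    // `regex!` compiles the pattern once, stores it in a static, and returns
    // a `&'static Regex`, with no `lazy_static!` block and no `unwrap()` needed.
    regex!(r"^\d+\.\d+\.\d+$").is_match(s)
}

fn main() {
    assert!(is_bare_semver("1.33.2"));
    assert!(!is_bare_semver("1.33"));
}
```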
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This is a follow-on to the earlier work in reducing string copies,
mainly focused on ensuring that ASCII strings are easy to provide to the
JS runtime.
While we are replacing a 16-byte reference in a number of places with a
24-byte structure (measured via `std::mem::size_of`), the reduction in
copies wins out over the additional size of the arguments passed into
functions.
Benchmarking shows approximately the same, if not slightly lower, wallclock
time/instructions retired, but I believe this continues to open up
further refactoring opportunities.
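A hypothetical illustration of the 16-byte vs 24-byte arithmetic (the enum here is a stand-in, not the actual type introduced by this change):
```
enum CodeString {
    // Borrowed static text: identity is preserved and no copy is needed.
    Static(&'static str),
    // Owned heap text whose ownership can be transferred.
    Owned(Box<str>),
}

fn main() {
    let _a = CodeString::Static("console.log(1)");
    let _b = CodeString::Owned("console.log(2)".into());
    // On a 64-bit target: a `&str` is pointer + length...
    assert_eq!(std::mem::size_of::<&str>(), 16);
    // ...while remembering where the string came from costs one aligned tag word.
    assert_eq!(std::mem::size_of::<CodeString>(), 24);
}
```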
1. Fixes a cosmetic issue in the REPL where it would display lsp warning
messages.
2. Lazily loads dependencies from the package.json on use.
3. Supports using bare specifiers from package.json in the REPL.
Closes #17929
Closes #18494
This will make it a bit harder to accidentally use a client URL in the
wrong place. I don't fully understand why we do this mapping, but this
will help prevent bugs like #18373.
Closes #18374
Reduce the number of copies and allocations of script code by carrying
around ownership/reference information from creation time.
As an advantage, this allows us to maintain the identity of `&'static
str`-based scripts and use v8's external one-byte strings (debug
`assert!`s gate all string reference paths to avoid incorrectly passing
non-ASCII strings).
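A hypothetical sketch of that gating (names are illustrative; the real code wires this into v8's external string constructors): only static text verified to be ASCII takes the zero-copy path, and a debug assertion catches anything non-ASCII that tries to.
```
enum ScriptSource {
    // Safe to hand to V8 as an external one-byte string, byte for byte.
    ExternalAscii(&'static str),
    // Non-ASCII (or otherwise unsuitable) text falls back to a copied form.
    Copied(String),
}

fn from_static(code: &'static str) -> ScriptSource {
    // Debug-only guard: non-ASCII bytes must never reach the one-byte path,
    // where they would be misread as Latin-1.
    debug_assert!(code.is_ascii(), "static script source must be ASCII");
    if code.is_ascii() {
        ScriptSource::ExternalAscii(code)
    } else {
        ScriptSource::Copied(code.to_string())
    }
}

fn main() {
    let _src = from_static("globalThis.x = 1;");
}
```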
Benchmark results:
Perf improvements: ~0.1-0.2ms faster, but this should reduce garbage
with external strings and reduces data copies overall. It may also unlock
some more interesting optimizations in the future.
This requires adding some generics to functions, but manual
monomorphization has been applied (outer/inner function) to avoid code
bloat.
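A small generic sketch of the outer/inner trick mentioned above (function names are made up):
```
// Only this thin shim is monomorphized once per `Into<String>` argument type;
// the (potentially large) body below is compiled exactly once.
fn eval_code(code: impl Into<String>) -> usize {
    eval_code_inner(code.into())
}

fn eval_code_inner(code: String) -> usize {
    // Stand-in for the real, non-generic work.
    code.len()
}

fn main() {
    assert_eq!(eval_code("let x = 1;"), 10);
    assert_eq!(eval_code(String::from("x")), 1);
}
```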