Adds `buffers` to the `Deno.jupyter.broadcast` API so binary data can be
sent via comms. This makes it possible to send binary data over
websockets to the Jupyter widget frontend.
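For illustration, a minimal sketch of how a kernel cell might pass binary
buffers along with a comm message (the option shape and the comm fields
below are assumptions, not taken from this PR):

```ts
// Hypothetical usage: send a comm update plus a binary payload to the
// frontend. The third argument's shape ({ buffers }) is assumed here.
const payload = new Uint8Array([1, 2, 3, 4]);

await Deno.jupyter.broadcast(
  "comm_msg",
  { comm_id: "widget-123", data: { method: "update" } },
  { buffers: [payload] },
);
```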
This makes `CliNpmResolver` a trait. The terminology used is:
- **managed** - Deno manages the node_modules folder and does an
auto-install (ex. `ManagedCliNpmResolver`)
- **byonm** - "Bring your own node_modules" (ex. `ByonmCliNpmResolver`,
which is in this PR, but unimplemented at the moment)
Part of #18967
Closes #20535.
# Screenshots
## JSON
<img width="779" alt="image"
src="https://github.com/denoland/deno/assets/836375/668bb1a6-3f76-4b36-974e-cdc6c93f94c3">
## Vegalite
<img width="558" alt="image"
src="https://github.com/denoland/deno/assets/836375/a5e70908-6b87-42d8-85c3-1323ad52a00f">
# Implementation
Instead of going the route of recursively getting all the objects under
`application/.*json` keys, I went with `JSON.stringify`ing in Deno-space
and then parsing it from Rust. One of the key benefits of serializing and
deserializing is that non-JSON-able entries will get stripped
automatically. This also keeps the code pretty simple.
In the future we should _only_ do this for `application/.*json` keys.
cc @mmastrac
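A minimal sketch of why the round trip is convenient (the bundle below is
illustrative):

```ts
// Round-tripping the display bundle through JSON.stringify drops anything
// that is not JSON-able before the string is handed to Rust for parsing.
const bundle = {
  "application/json": {
    value: 1,
    skipped: () => {}, // functions are stripped by JSON.stringify
    alsoSkipped: undefined, // undefined properties are stripped too
  },
};
const serialized = JSON.stringify(bundle);
console.log(serialized); // {"application/json":{"value":1}}
```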
"Fixes" the exception display issue of #20524 on older versions of
Jupyter that required `evalue` to be truthy. For now, until we can do
proper processing of the `ExceptionDetails` this will make Jupyter
Notebook 6.5.1 and below happy.
This is the alternative "just work now" PR to #20530
This commit adds a "deno jupyter" subcommand which provides a Deno
kernel for Jupyter notebooks.
The implementation is mostly based on Deno's REPL and reuses large
parts of it (though some cleanup needs to happen in follow-up PRs).
Not all of the Jupyter kernel functionality is implemented and some
message types are still not implemented (e.g. "inspect_request"), but
the kernel is fully working and provides all the capabilities that the
Deno REPL has, including TypeScript transpilation and npm package
support.
Closes https://github.com/denoland/deno/issues/13016
---------
Co-authored-by: Adam Powers <apowers@ato.ms>
Co-authored-by: Kyle Kelley <rgbkrk@gmail.com>
### What
Skip writing files from the template if the files already exist in the
project directory.
### Why
When I run `deno init` in a directory that already has a `main.ts`, or
one of the other template files, I usually want to initialize a
workspace around a file I've started working in. A hard error in this
case seems counterproductive. An informational message about what's
being skipped seems sufficient.
Close #20433
Previously:
```rust
pub struct TestDefinition {
  pub id: String,
  pub name: String,
  pub range: SourceRange,
  pub steps: Vec<TestDefinition>,
}

pub struct TestDefinitions {
  pub discovered: Vec<TestDefinition>,
  pub injected: Vec<lsp_custom::TestData>,
  pub script_version: String,
}
```
Now:
```rust
pub struct TestDefinition {
  pub id: String,
  pub name: String,
  pub range: Option<Range>,
  pub is_dynamic: bool, // True for 'injected' module, not statically detected but added at runtime.
  pub parent_id: Option<String>,
  pub step_ids: HashSet<String>,
}

pub struct TestModule {
  pub specifier: ModuleSpecifier,
  pub script_version: String,
  pub defs: HashMap<String, TestDefinition>,
}
```
Storing the test tree as a literal tree diminishes the value of IDs,
even though vscode stores them that way. This makes all data easily
accessible from `TestModule`. It unifies the interface between
'discovered' and 'injected' tests. This unblocks some enhancements wrt
syncing tests between the LSP and extension, such as this TODO:
61f08d5a71/client/src/testing.ts (L251-L259)
and https://github.com/denoland/vscode_deno/issues/900. We should also
get more flexibility overall.
`TestCollector` is cleaned up, now stores a `&mut TestModule` directly
and registers tests as it comes across them with
`TestModule::register()`. This method keeps the redundant data in
`TestDefinition::{parent_id,step_ids}` consistent.
All of the messy conversions between `TestDescription`,
`LspTestDescription`, `TestDefinition`, `TestData` and `TestIdentifier`
are cleaned up. They shouldn't have been using `impl From` and now the
full list of tests is available to their implementations.
The motivation: if I'm using `deno lint --rules`, I want to see all the
rules, especially the ones that have no tags, since the recommended ones
are already active.
This change also prints the tags associated with each rule inline.
As the title.
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
Disables `BenchContext::start()` and `BenchContext::end()` for low
precision benchmarks (less than 0.01s per iteration). Prints a warning
when they are used in such benchmarks, suggesting to remove them.
```ts
Deno.bench("noop", { group: "noops" }, () => {});
Deno.bench("noop with start/end", { group: "noops" }, (b) => {
b.start();
b.end();
});
```
Before:
```
cpu: 12th Gen Intel(R) Core(TM) i9-12900K
runtime: deno 1.36.2 (x86_64-unknown-linux-gnu)
file:///home/nayeem/projects/deno/temp3.ts
benchmark time (avg) iter/s (min … max) p75 p99 p995
----------------------------------------------------------------------------- -----------------------------
noop 2.63 ns/iter 380,674,131.4 (2.45 ns … 27.78 ns) 2.55 ns 4.03 ns 5.33 ns
noop with start and end 302.47 ns/iter 3,306,146.0 (200 ns … 151.2 µs) 300 ns 400 ns 400 ns
summary
noop
115.14x faster than noop with start and end
```
After:
```
cpu: 12th Gen Intel(R) Core(TM) i9-12900K
runtime: deno 1.36.1 (x86_64-unknown-linux-gnu)
file:///home/nayeem/projects/deno/temp3.ts
benchmark time (avg) iter/s (min … max) p75 p99 p995
----------------------------------------------------------------------------- -----------------------------
noop 3.01 ns/iter 332,565,561.7 (2.73 ns … 29.54 ns) 2.93 ns 5.29 ns 7.45 ns
noop with start and end 7.73 ns/iter 129,291,091.5 (6.61 ns … 46.76 ns) 7.87 ns 13.12 ns 15.32 ns
Warning start() and end() calls in "noop with start and end" are ignored because it averages less than 0.01s per iteration. Remove them for better results.
summary
noop
2.57x faster than noop with start and end
```
This PR adds a test reporter for the [Test Anything
Protocol](https://testanything.org).
It makes the following implementation decisions:
- No TODO pragma, as there is no such marker in `Deno.test`
- SKIP pragma for `ignore`d tests
- Test steps are treated as TAP14 subtests
  - Support for this in consumers seems spotty
  - Some consumers will incorrectly interpret these markers, resulting
    in unexpected output
  - Considering the lack of support, and to avoid implementation
    complexity, subtests are at most one level deep (all test steps are
    in the same subtest)
- To accommodate consumers that use comments to indicate test-suites
  (unspecced):
  - The test module path is output as a comment
  - This is disabled for `--parallel` testing
- Failure diagnostics are output as JSON, which is also valid YAML
  - The structure is not specified, so the format roughly follows the
    spec example:
```
---
message: "Failed with error 'hostname peebles.example.com not found'"
severity: fail
found:
hostname: 'peebles.example.com'
address: ~
wanted:
hostname: 'peebles.example.com'
address: '85.193.201.85'
at:
file: test/dns-resolve.c
line: 142
...
```
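For example, a test file like the following exercises those decisions:
the `ignore`d test maps to the SKIP pragma and the steps are grouped into
a single level of subtests (the file itself is illustrative, not from
this PR):

```ts
// addition_test.ts — the ignored test gets a SKIP pragma; the two steps
// end up in one TAP14 subtest rather than being nested further.
Deno.test("addition", async (t) => {
  await t.step("integers", () => {
    if (1 + 1 !== 2) throw new Error("math is broken");
  });
  await t.step("floats", () => {
    if (0.5 + 0.5 !== 1) throw new Error("math is broken");
  });
});

Deno.test("not ready yet", { ignore: true }, () => {});
```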
Some people might think they need to import from this directory, which
could cause confusion and duplicate dependencies. Additionally, the
`vendor` directory has special behaviour in the language server, so
importing from the folder will definitely cause confusion and issues
there.
To fix bugs around detecting when Node emulation is required, we will
just eagerly initialize it. The improvements made to reduce the impact
on startup time:
- [x] Process stdin/stdout/stderr are lazily created
- [x] node.js global proxy no longer allocates on each access check
- [x] Process checks for `beforeExit` listeners before doing expensive
shutdown work
- [x] Process should avoid adding global event handlers until listeners
are added
Benchmarking this PR (`89de7e1ff`) vs main (`41cad2179`)
```
12:36 $ third_party/prebuilt/mac/hyperfine --warmup 100 -S none './deno-41cad2179 run ./empty.js' './deno-89de7e1ff run ./empty.js'
Benchmark 1: ./deno-41cad2179 run ./empty.js
Time (mean ± σ): 24.3 ms ± 1.6 ms [User: 16.2 ms, System: 6.0 ms]
Range (min … max): 21.1 ms … 29.1 ms 115 runs
Benchmark 2: ./deno-89de7e1ff run ./empty.js
Time (mean ± σ): 24.0 ms ± 1.4 ms [User: 16.3 ms, System: 5.6 ms]
Range (min … max): 21.3 ms … 28.6 ms 126 runs
```
Fixes https://github.com/denoland/deno/issues/20142
Fixes https://github.com/denoland/deno/issues/15826
Fixes https://github.com/denoland/deno/issues/20028
Renames the unstable `deno_modules` directory and corresponding settings
to `vendor` after feedback. Also causes the `node_modules` directory to
be vendored, which can be disabled via `--node-modules-dir=false` or
`"nodeModulesDir": false`.
This commit adds a "dot" reporter to "deno test" subcommand,
that can be activated using "--dot" flag.
It provides a concise output using:
- "." for passing test
- "," for ignored test
- "!" for failing test
User output is silenced and not printed to the console.
In non-TTY environments each result is printed on a separate line.
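For example, a file like this would be summarized by the dot reporter as
a short run of characters (roughly `.,!`) instead of per-test lines:

```ts
// With `deno test --dot`, these tests print as ".", "," and "!" and the
// console.log output is suppressed.
Deno.test("passes", () => {
  console.log("not shown by the dot reporter");
});
Deno.test("skipped", { ignore: true }, () => {});
Deno.test("fails", () => {
  throw new Error("boom");
});
```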
This commit makes the following changes:
- Creates a `CompoundTestReporter` to allow us to use multiple reporters
- Implements a `JUnitTestReporter` which writes JUnit XML to a path
- Adds a CLI flag/option `--junit` that enables JUnit reporting. By
default this writes the report to `stdout` (and disables pretty
reporting). If a path is provided, it will write the JUnit report to
that file while the pretty reporter writes to stdout like normal

Output of `deno test --allow-all --unstable
--location=http://js-unit-tests/foo/bar --junit
cli/tests/unit/testing_test.ts`:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<testsuites name="deno test" tests="7" failures="0" errors="0" time="0.176">
<testsuite name="file:///Users/cooper/deno/deno/cli/tests/unit/testing_test.ts" tests="7" disabled="0" errors="0" failures="0">
<testcase name="testWrongOverloads" time="0.012">
</testcase>
<testcase name="nameOfTestCaseCantBeEmpty" time="0.009">
</testcase>
<testcase name="invalidStepArguments" time="0.008">
</testcase>
<testcase name="nameOnTextContext" time="0.029">
<properties>
<property name="step[passed]" value="step ... nested step"/>
<property name="step[passed]" value="step"/>
</properties>
</testcase>
<testcase name="originOnTextContext" time="0.030">
<properties>
<property name="step[passed]" value="step ... nested step"/>
<property name="step[passed]" value="step"/>
</properties>
</testcase>
<testcase name="parentOnTextContext" time="0.030">
<properties>
<property name="step[passed]" value="step ... nested step"/>
<property name="step[passed]" value="step"/>
</properties>
</testcase>
<testcase name="explicit undefined for boolean options" time="0.009">
</testcase>
</testsuite>
</testsuites>
```
Closes https://github.com/denoland/deno/issues/15277
This commit adds a single "warmup" run of an empty function when running
`deno bench`.
This change will break so-called "JIT bias", where V8 optimizes the
first function and then bails out of optimization on the second
function. In essence the "warmup" function gets optimized and then all
user benches are bailed out of optimization.
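A sketch of the problem being addressed: before this change, two
otherwise identical benches could report noticeably different numbers
purely because of registration order.

```ts
// Identical bodies, yet without the hidden warmup the first registered
// bench could get the "optimized" treatment while the second got the
// bail-out. The warmup run absorbs that bias so both are measured on
// equal footing.
Deno.bench("parse first", { group: "parse" }, () => {
  JSON.parse('{"a": 1}');
});
Deno.bench("parse second", { group: "parse" }, () => {
  JSON.parse('{"a": 1}');
});
```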
This PR breaks out the addition of the `TestReporter` trait and the
refactoring of `PrettyTestReporter` from #19747. The goal is to enable
the addition of test reporters, including machine-readable output.
…nclusion" (#19519)"
This reverts commit 28a4f3d0f5.
This change causes failures when used outside the Deno repo:
```
============================================================
Deno has panicked. This is a bug in Deno. Please report this
at https://github.com/denoland/deno/issues/new.
If you can reliably reproduce this panic, include the
reproduction steps and re-run with the RUST_BACKTRACE=1 env
var set and include the backtrace in your report.
Platform: linux x86_64
Version: 1.34.3+b37b286
Args: ["/opt/hostedtoolcache/deno/0.0.0-b37b286f7fa68d5656f7c180f6127bdc38cf2cf5/x64/deno", "test", "--doc", "--unstable", "--allow-all", "--coverage=./cov"]
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: Failed to read "/home/runner/work/deno/deno/core/00_primordials.js"
Caused by:
No such file or directory (os error 2)', core/runtime/jsruntime.rs:699:8
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
Relands #19463. This time the `ExtensionFileSourceCode` enum is
preserved, so this effectively just splits feature
`include_js_for_snapshotting` into `exclude_js_sources` and
`runtime_js_sources`, adds a `force_include_js_sources` option on
`extension!()`, and unifies `ext::init_ops_and_esm()` and
`ext::init_ops()` into `ext::init()`.
… (#19463)"
This reverts commit ceb03cfb03.
This is being reverted because it causes a 3.5 MB increase in the binary
size, due to runtime JS code being included in the binary even though
it's already snapshotted.
CC @nayeemrmn
Remove `ExtensionFileSourceCode::LoadedFromFsDuringSnapshot` and feature
`include_js_for_snapshotting` since they leak paths that are only
applicable in this repo to embedders. Replace with feature
`exclude_js_sources`. Additionally the feature
`force_include_js_sources` allows negating it, if both features are set.
We need both of these because features are additive and there must be a
way of force including sources for snapshot creation while still having
the `exclude_js_sources` feature. `force_include_js_sources` is only set
for build deps, so sources are still excluded from the final binary.
You can also specify `force_include_js_sources` on any extension to
override the above features for that extension. Towards #19398.
But there was still the snapshot-from-snapshot situation where code
could be executed twice; I addressed that by making `mod_evaluate()` and
scripts like `core/01_core.js` behave idempotently. This allowed
unifying `ext::init_ops()` and `ext::init_ops_and_esm()` into
`ext::init()`.
This commit migrates "deno_core" from using "FuturesUnordered" to
"tokio::task::JoinSet". This makes every op a separate Tokio task and
should unlock better utilization of kqueue/epoll.
There were two quirks added in this PR:
- because "JoinSet" immediately polls spawned tasks, op sanitizers can
give false positives in some cases; this was alleviated by polling the
event loop once before running a test with "deno test", which gives
canceled ops an opportunity to settle
- "JsRuntimeState::waker" was moved to "OpState::waker" so that the FFI
API can still use threadsafe functions - without this change the
registered wakers were wrong, as they would not wake up the whole
"JsRuntime" but only the task associated with an op
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
When someone explicitly opts into using the node_modules dir via
`--node-modules-dir` or setting `"nodeModulesDir": true` in the
deno.json file, we should eagerly ensure a top level package.json
install is done on startup. This was initially always done when we added
package.json support and a package.json was auto-discovered, but we
scaled it back to be lazily done when a bare specifier matched an entry
in the package.json, because of how disruptive it was for people using
Deno scripts in Node projects. That said, it does not make sense for
someone to opt into having Deno control and use their node_modules
directory and
not want a package.json install to occur. If such a rare scenario
exists, the `DENO_NO_PACKAGE_JSON=1` environment variable can be set.
Ideally, we would only ever use a node_modules directory with this
explicit opt-in so everything is very clear, but we still have this
automatic scenario when there's a package.json in order to make more
node projects work out of the box.
We never properly added support for this. This fixes vendoring when npm
or node specifiers are present. Vendoring occurs by adding a
`"nodeModulesDir": true` property to deno.json, which then uses a local
node_modules directory. This can be opted out of by setting
`"nodeModulesDir": false` or running with `--node-modules-dir=false`.
Closes #18090
Closes #17210
Closes #17619
Closes #16778
Note: If the package information has already been cached, then this
requires running with `--reload` or for the registry information to be
fetched some other way (e.g. cache busting).
Closes #15544
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Partially supersedes #19016.
This migrates `spawn` and `spawn_blocking` to `deno_core`, and removes
the requirement for `spawn` tasks to be `Send` given our single-threaded
executor.
While we don't technically need to do anything with `spawn_blocking`,
this allows us to have a single `JoinHandle` type that works for both
cases, and allows us to more easily experiment with alternative
`spawn_blocking` implementations that do not require tokio (ie: rayon).
Async ops (+~35%):
Before:
```
time 1310 ms rate 763358
time 1267 ms rate 789265
time 1259 ms rate 794281
time 1266 ms rate 789889
```
After:
```
time 956 ms rate 1046025
time 954 ms rate 1048218
time 924 ms rate 1082251
time 920 ms rate 1086956
```
HTTP serve (+~4.4%):
Before:
```
Running 10s test @ http://localhost:4500
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 68.78us 19.77us 1.43ms 86.84%
Req/Sec 68.78k 5.00k 73.84k 91.58%
1381833 requests in 10.10s, 167.36MB read
Requests/sec: 136823.29
Transfer/sec: 16.57MB
```
After:
```
Running 10s test @ http://localhost:4500
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 63.12us 17.43us 1.11ms 85.13%
Req/Sec 71.82k 3.71k 77.02k 79.21%
1443195 requests in 10.10s, 174.79MB read
Requests/sec: 142921.99
Transfer/sec: 17.31MB
```
Suggested-By: alice@ryhl.io
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This is the initial support for npm and node specifiers in `deno
compile`. The npm packages are included in the binary and read from it via
a virtual file system. This also supports the `--node-modules-dir` flag,
dependencies specified in a package.json, and npm binary commands (ex.
`deno compile --unstable npm:cowsay`)
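For illustration, a small program of the kind this enables (the `cowsay`
usage below is just an example, not from this PR):

```ts
// main.ts — after `deno compile --unstable main.ts`, the npm package is
// embedded in the binary and read back through the virtual file system.
import cowsay from "npm:cowsay";

console.log(cowsay.say({ text: "compiled with npm deps" }));
```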
Closes #16632
This removes `ProcState` and replaces it with a new `CliFactory` which
initializes our "service structs" on demand. This isn't a performance
improvement at the moment for `deno run`, but might unlock performance
improvements in the future.
Fixes regression from #18878 where `Promise.reject()`,
`Promise.reject(undefined)` and `reportError(undefined)` panic in the
REPL.
Fixes `throw undefined` printing `Uncaught Unknown exception` instead of
`Uncaught undefined`.
Fixes #8858.
Fixes #8869.
```
$ target/debug/deno
Deno 1.32.5
exit using ctrl+d, ctrl+c, or close()
REPL is running with all permissions allowed.
To specify permissions, run `deno repl` with allow flags.
> Promise.reject(new Error("bar"));
Promise { <rejected> Error: bar
at <anonymous>:2:16 }
Uncaught (in promise) Error: bar
at <anonymous>:2:16
> reportError(new Error("baz"));
undefined
Uncaught Error: baz
at <anonymous>:2:13
>
```
This is just a straight refactor and I didn't do any cleanup in
ext/node. After this PR we can start to clean it up and make things
private that don't need to be public anymore.
1. Adds cli/standalone folder
2. Writes the bytes directly to the output file. When adding npm
packages this might get quite large, so let's not keep the final output
in memory just in case.
1. Breaks up functionality within `ProcState` into several other structs
to break out the responsibilities (`ProcState` is only a data struct
now).
2. Moves towards being able to inject dependencies more easily and have
functionality only require what it needs.
3. Exposes `Arc<T>` around the "service structs" instead of it being
embedded within them. The idea behind embedding them was to reduce the
verbosity of needing to pass around `Arc<...>`, but I don't think it was
exactly working and as we move more of these structs to be more
injectable I don't think the extra verbosity will be a big deal.
Removes the functions in the `emit` module and replaces them with an
`Emitter` struct that can have "ctor dependencies" injected rather than
using functions to pass along the dependencies.
This is part of a long term refactor to move more functionality out of
proc state.
Stores the test/bench functions in rust op state during registration.
The functions are wrapped in JS first so that they return a directly
convertible `TestResult`/`BenchResult`. Test steps are still mostly
handled in JS since they are pretty much invoked by the user. Allows
removing a bunch of infrastructure for communicating between JS and
rust. Allows using rust utilities for things like shuffling tests
(`Vec::shuffle`). We can progressively move op and resource sanitization
to rust as well.
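A hedged TypeScript sketch of the wrapping idea (the function name and
the exact shape of `TestResult` here are assumptions):

```ts
// Illustrative only: wrap the user's test function so the value handed
// back to the op is a plain object that converts directly into a Rust
// TestResult, rather than the op interpreting thrown exceptions itself.
type TestResult = { status: "ok" } | { status: "failed"; error: string };

function wrapTestFn(fn: () => void | Promise<void>) {
  return async (): Promise<TestResult> => {
    try {
      await fn();
      return { status: "ok" };
    } catch (err) {
      return {
        status: "failed",
        error: err instanceof Error
          ? (err.stack ?? err.message)
          : String(err),
      };
    }
  };
}
```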
Fixes #17122.
Fixes #17312.
- bump deps: the newest `lazy-regex` needs newer `once_cell` and
`regex`
- reduce `unwrap`s
- remove dep `lazy_static`
- cache more regexes
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This is a follow-on to the earlier work in reducing string copies,
mainly focused on ensuring that ASCII strings are easy to provide to the
JS runtime.
While we are replacing a 16-byte reference in a number of places with a
24-byte structure (measured via `std::mem::size_of`), the reduction in
copies wins out over the additional size of the arguments passed into
functions.
Benchmarking shows approximately the same if not slightly less wallclock
time/instructions retired, but I believe this continues to open up
further refactoring opportunities.
1. Fixes a cosmetic issue in the REPL where it would display LSP warning
messages.
2. Lazily loads dependencies from the package.json on use.
3. Supports using bare specifiers from package.json in the REPL (see the
sketch below).
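For item 3, a hypothetical session, assuming a package.json in the
working directory that lists `chalk` as a dependency:

```ts
// Inside `deno repl`: the bare specifier resolves through package.json
// and the package is installed lazily on first use.
const { default: chalk } = await import("chalk");
console.log(chalk.green("resolved via package.json"));
```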
Closes #17929
Closes #18494
1. Rewrites the tests to be more back and forth rather than getting the
output all at once (which I believe was causing the hangs on linux and
maybe mac)
2. Runs the pty tests on the linux ci.
3. Fixes a bunch of tests that were just wrong.
4. Adds timeouts on the pty tests.
This gets SQLite off the flamegraph and reduces initialization time by
somewhere between 0.2ms and 0.5ms. In addition, I took the opportunity
to move all the cache management code to a single place and reduce
duplication. While the PR has a net gain of lines, much of that is just
being a bit more deliberate with how we're recovering from errors.
The existing caches had various policies for dealing with cache
corruption, so I've unified them and tried to isolate the decisions we
make for recovery in a single place (see `open_connection` in
`CacheDB`). The policy I chose was:
1. Retry twice to open on-disk caches
2. If that fails, try to delete the file and recreate it on-disk
3. If we fail to delete the file or re-create a new cache, use a
fallback strategy that can be chosen per-cache: InMemory (temporary
cache for the process run), BlackHole (ignore writes, return empty
reads), or Error (fail on every operation).
The caches all use the same general code now, and share the cache
failure recovery policy.
In addition, it cleans up a TODO in the `NodeAnalysisCache`.
Reduce the number of copies and allocations of script code by carrying
around ownership/reference information from creation time.
As an advantage, this allows us to maintain the identity of `&'static
str`-based scripts and use v8's external 1-byte strings (to avoid
incorrectly passing non-ASCII strings, debug `assert!`s gate all string
reference paths).
Benchmark results:
Perf improvements -- ~0.1 - 0.2ms faster, but should reduce garbage
w/external strings and reduces data copies overall. May also unlock some
more interesting optimizations in the future.
This requires adding some generics to functions, but manual
monomorphization has been applied (outer/inner function) to avoid code
bloat.
Moves some code around in `ext/node` so it's a bit better defined and
makes it possible for others to embed it.
I expect to see no difference in startup perf with this change.