Improves #19100
Fixes #20356
Replaces #20428
Changes made in deno_core to support this:
- [x] Errors must be handled in setTimeout callbacks
- [x] Microtask ordering is not quite right
- [x] Timer cancellation must be checked right before dispatch
- [x] Timer sanitizer
- [x] Move high-res timer to deno_core
- [x] Timers need opcall tracing
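As a quick illustration of the first checklist item, an error thrown from a timer callback should now surface through the regular uncaught-error path instead of being swallowed by the timer machinery (a minimal sketch):
```ts
// Minimal sketch: this error should reach the unhandled-error path
// (and fail the process/test) rather than being silently dropped.
setTimeout(() => {
  throw new Error("boom");
}, 0);
```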
Supply chain security for JSR.
```
$ deno publish --provenance
Successfully published @divy/test_provenance@0.0.3
Provenance transparency log available at https://search.sigstore.dev/?logIndex=73657418
```
0. The package has been published.
1. Fetches the version manifest and verifies that it matches the
uploaded files and exports.
2. Builds the SLSA attestation payload using the GitHub Actions
environment.
3. Creates an ephemeral key pair for signing the GitHub token
(aud=sigstore) and the DSSE pre-authentication tag.
4. Requests an X.509 signing certificate from Fulcio using the challenge
and the ephemeral public key PEM.
5. Prepares a DSSE envelope for Rekor to witness (sketched below). Posts
an in-toto entry to Rekor and gets back the transparency log index.
6. Builds the provenance bundle and posts it to JSR.
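For reference, the statement witnessed in step 5 is roughly an in-toto/SLSA provenance document. The sketch below only shows the general shape; every value is an illustrative placeholder, not the exact payload `deno publish` produces:
```ts
// Rough shape of the in-toto statement wrapped in the DSSE envelope (step 5).
// All values are placeholders; the real payload is built from the version
// manifest and the GitHub Actions environment.
const statement = {
  _type: "https://in-toto.io/Statement/v1",
  subject: [{
    name: "@divy/test_provenance@0.0.3",
    digest: { sha256: "<hash of the published version manifest>" },
  }],
  predicateType: "https://slsa.dev/provenance/v1",
  predicate: {
    buildDefinition: {
      buildType: "<GitHub Actions build type URI>",
      externalParameters: { workflow: { repository: "<repo>", ref: "<ref>" } },
    },
    runDetails: { builder: { id: "<runner id>" } },
  },
};
```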
Updates dependent crates, including a fix investigated by @irbull for an
issue in Deno's LSP when linting code. Huge thanks to Ian for tracking
down this issue.
Also includes Divy's deno_graph executor change, which reduces memory
usage when loading jsr specifiers and makes loading them faster.
Co-authored-by: irbull <irbull@users.noreply.github.com>
Co-authored-by: littledivy <littledivy@users.noreply.github.com>
When using a prefix or suffix containing an invalid filename character,
it's not entirely clear where the errors come from. We make these errors
more consistent across platforms.
In addition, all permission prompts for tempfile and tempdir were
printing the same API name.
We also take the opportunity to double the number of random bits in
tempfile names (using a base32-encoded u64 rather than a hex-encoded
u32).
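A minimal sketch of what that encoding change means in practice (not the actual implementation; the alphabet and name format are illustrative):
```ts
// Old scheme: 32 random bits as hex (8 chars). New scheme: 64 random bits
// as base32 (13 chars), so collisions become far less likely.
const ALPHABET = "0123456789abcdefghijklmnopqrstuv"; // 32 symbols

function randomSuffix(): string {
  const bytes = new Uint8Array(8); // 64 bits of randomness
  crypto.getRandomValues(bytes);
  let value = 0n;
  for (const b of bytes) value = (value << 8n) | BigInt(b);
  let out = "";
  for (let i = 0; i < 13; i++) { // ceil(64 / 5) base32 digits
    out = ALPHABET[Number(value & 31n)] + out;
    value >>= 5n;
  }
  return out;
}

console.log(`tmp${randomSuffix()}`);
```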
1. Renames zap/fast-check to a `no-slow-types` lint rule (example below).
1. This lint rule automatically runs when doing `deno lint` for
packages (deno.json files with a name, version, and exports field).
1. This lint rule still runs on publish. It can be skipped by running
with `--no-slow-types`.
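A hedged sketch of the kind of export this rule flags (function names are illustrative): a public API whose type must be inferred versus one with an explicit annotation.
```ts
// Flagged by no-slow-types: the return type of a public export has to be
// inferred, which is slow for downstream type-checking and doc generation.
export function getRandom() {
  return Math.random();
}

// Fine: the explicit return type keeps the public API "fast".
export function getRandomTyped(): number {
  return Math.random();
}
```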
This changes the lockfile to not store JSR specifiers in the "remote"
section. Instead, a single integrity is stored per JSR package in the
lockfile: a hash of the version's `x.x.x_meta.json` file, which in turn
contains hashes for every file in the package. Files are compared
against these hashes when loading.
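To make that concrete, here is an illustrative sketch of the resulting structure; the field names and versions are approximate placeholders, not the exact lockfile schema:
```ts
// Illustrative only: one integrity hash per JSR package (the hash of that
// version's `_meta.json`), instead of one entry per remote file.
const lockfileSketch = {
  packages: {
    specifiers: {
      "jsr:@scope/pkg@^1": "jsr:@scope/pkg@1.2.3",
    },
    jsr: {
      "@scope/pkg@1.2.3": {
        integrity: "<hash of 1.2.3_meta.json>",
      },
    },
  },
};
```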
Additionally, when using `{ "vendor": true }` in a deno.json, the files
can be modified without causing lockfile errors—the checksum is only
checked when copying into the vendor folder and not afterwards
(eventually we should add this behaviour for non-jsr specifiers as
well). As part of this change, the `vendor` folder creation is not
always automatic in the LSP and running an explicit cache command is
necessary. The code required to track checksums in the LSP would have
been too complex for this PR, so that all goes through deno_graph now.
The vendoring is still automatic when running from the CLI.
This implementation heavily depends on there being a lockfile, meaning
JSR specifiers will always diagnose as uncached unless a lockfile is
present. In practice this affects cases where a `deno.json` isn't being
used. Our NPM specifier support isn't subject to this.
The reason for this is that the version constraint solving code is
currently buried in `deno_graph` and not usable from the LSP, so the
only way to reuse that logic is the solved-version map in the lockfile's
`packages.specifiers`.
Removes the `FileFetcher`'s internal cache because I don't believe it's
necessary (we already cache this kind of thing in places like deno_graph
and the config file handling). Removing it fixes this bug because the
functionality was already implemented in deno_graph, and it lowers the
CLI's memory usage a little.
This PR separates integration tests from CLI tests into a new project
named `cli_tests`. This is a prerequisite for an integration test runner
that can work with either the CLI binary in the current project, or one
that is built ahead of time.
## Background
Rust does not have the concept of artifact dependencies yet
(https://github.com/rust-lang/cargo/issues/9096). Because of this, the
only way we can ensure a binary is built before running associated tests
is by hanging tests off the crate with the binary itself.
Unfortunately this means that to run those tests, you _must_ build the
binary and in the case of the deno executable that might be a 10 minute
wait in release mode.
## Implementation
To allow for tests to run with and without the requirement that the
binary is up-to-date, we split the integration tests into a project of
their own. As these tests, run as-is, would not require the binary to be
built first, we add a stub integration `[[test]]` target in the `cli`
project that invokes them via `cargo test`.
The stub test runner we add has `harness = false` so that we can get
access to a `main` function. This `main` function's sole job is to
`execvp` the command `cargo test -p deno_cli`, effectively "calling"
another cargo target.
This ensures that the deno executable is always correctly rebuilt before
running the stub test runner from `cli`, and gets us closer to being able
to run the entire integration test suite on arbitrary deno executables
(and therefore split the build into multiple phases).
The new `cli_tests` project lives within `cli` to avoid a large PR. In
later PRs, the test data will be split from the `cli` project. As there
are a few thousand files, it'll be better to do this as a completely
separate PR to avoid noise.
This moves the op sanitizer descriptions into Rust code and prepares for
eventual op import from `ext:core/ops`. We cannot import these ops from
`ext:core/ops` as the testing infrastructure ops are not always present.
Changes:
- Op descriptions live in `cli` code and are currently accessible via an
op for the older sanitizer code
- `phf` dep moved to workspace root so we can use it here
- `ops.op_XXX` changed to `op_XXX` to prepare for op imports later
on.
Migrations:
- Error registration no longer required for Interrupted or BadResource
(these are core exceptions)
- `include_js_files!`/`ExtensionFileSource` changes
This commit adds support for [TC39 Decorator
Proposal](https://github.com/tc39/proposal-decorators).
These decorators are only available in transpiled sources, i.e.
non-JavaScript files (because of the lack of support in V8).
This entails that "experimental TypeScript decorators" are not available
by default and must be configured explicitly, with a configuration like
this:
```jsonc
{
  "compilerOptions": {
    "experimentalDecorators": true
  }
}
```
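For reference, a minimal TC39-style (stage 3) decorator that should now work in a transpiled (`.ts`) module; the decorator and class names are illustrative:
```ts
// A stage 3 method decorator: wraps the method and logs each call.
function logged<This, Args extends unknown[], Return>(
  value: (this: This, ...args: Args) => Return,
  context: ClassMethodDecoratorContext<This, (this: This, ...args: Args) => Return>,
) {
  return function (this: This, ...args: Args): Return {
    console.log(`calling ${String(context.name)}`);
    return value.call(this, ...args);
  };
}

class Greeter {
  @logged
  greet(name: string): string {
    return `Hello, ${name}!`;
  }
}

console.log(new Greeter().greet("Deno")); // logs "calling greet" first
```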
Closes https://github.com/denoland/deno/issues/19160
---------
Signed-off-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Co-authored-by: crowlkats <crowlkats@toaxl.com>
Co-authored-by: Divy Srivastava <dj.srivastava23@gmail.com>
This initially uses the new diagnostic printer in `deno lint`,
`deno doc` and `deno publish`. Eventually we should also update
`deno check` to use this printer.
Introduces the first cppgc-backed Resource into Deno.
This fixes the memory leak when using `X509Certificate`.
**Comparison**:
```js
import { X509Certificate } from 'node:crypto';
const r = Deno.readFileSync('cli/tests/node_compat/test/fixtures/keys/agent1-cert.pem');
setInterval(() => {
  for (let i = 0; i < 10000; i++) {
    const cert = new X509Certificate(r);
  }
}, 1000);
```
Memory usage after 5 secs
`main`: 1692MB
`cppgc`: peaks at 400MB
Part 1 of a potential 3-part series. Ref #13449
The current implementation passes key material back and forth between
the RustCrypto group of crates and ring; ring does not implement P-521
yet.
This PR adds support for the P-521 named curve in `generateKey` and
`importKey`, where we use RustCrypto. Other parts should be moved over
to the RustCrypto group of crates for consistency.
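A minimal usage sketch of what this enables through WebCrypto (ECDSA here is just one of the algorithms the named curve applies to):
```ts
// Generate an ECDSA key pair on the newly supported P-521 curve.
// (`importKey` with `namedCurve: "P-521"` is covered by this change as well.)
const keyPair = await crypto.subtle.generateKey(
  { name: "ECDSA", namedCurve: "P-521" },
  true,
  ["sign", "verify"],
);
console.log(keyPair.publicKey.algorithm); // { name: "ECDSA", namedCurve: "P-521" }
```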
The main changes are:
- "hyper" has been renamed to "hyper_v014" to signal that it's legacy
- "hyper1" has been renamed to "hyper" and should be the default
This PR implements the child_process IPC pipe between parent and child.
The implementation uses Windows named pipes created by the parent and
passes the inheritable file handle to the child.
I've also replaced the parts of the initial implementation that passed
the raw parent fd to JS to use resource IDs instead. This way no file
handle is exposed to JS land (in either the parent or the child).
`IpcJsonStreamResource` can stream up to 800MB/s of JSON data on Win 11
AMD Ryzen 7 16GB (without `memchr` vectorization).
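A hedged sketch of the channel this enables from the JS side, using the Node compat API (file names are illustrative):
```ts
// parent.ts: fork a child with an IPC channel and exchange JSON messages.
import { fork } from "node:child_process";

const child = fork("./child.ts");
child.on("message", (msg) => console.log("from child:", msg));
child.send({ hello: "child" });

// child.ts would receive and reply over the same channel, e.g.:
//   import process from "node:process";
//   process.on("message", (msg) => process.send?.({ got: msg }));
```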
Bumped versions for 1.39.0
Please ensure:
- [x] Target branch is correct (`vX.XX` if a patch release, `main` if
minor)
- [x] Crate versions are bumped correctly
- [x] deno_std version is incremented in the code (see
`cli/deno_std.rs`)
- [x] Releases.md is updated correctly (think relevancy and remove
reverts)
To make edits to this PR:
```shell
git fetch upstream release_1_39.0 && git checkout -b release_1_39.0 upstream/release_1_39.0
```
cc @mmastrac
---------
Co-authored-by: mmastrac <mmastrac@users.noreply.github.com>
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
This PR implements the Node child_process IPC functionality in Deno on
Unix systems.
For `fd > 2`, a duplex Unix pipe is set up between the parent and child
processes. Data passing over the channel currently uses the JSON
serialization format.
This commit brings back usage of primordials in "40_testing.js" by
turning it back into an ES module and using the new "lazy loading"
functionality for ES modules coming from "deno_core".
The same approach was applied to "40_jupyter.js".
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Adds an `--unstable-sloppy-imports` flag which supports the
following for `file:` specifiers:
* Allows writing `./mod` in a specifier to do extension probing (see the
  sketch after this list).
  - ex. `import { Example } from "./example"` instead of
    `import { Example } from "./example.ts"`
* Allows writing `./routes` to do directory extension probing for files
  like `./routes/index.ts`
* Allows writing `./mod.js` for *mod.ts* files.
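Put together, each of these imports resolves under the flag (paths and names here are illustrative):
```ts
// With --unstable-sloppy-imports these specifiers resolve to .ts sources:
import { add } from "./math";        // extension probing -> ./math.ts
import { handler } from "./routes";  // directory probing -> ./routes/index.ts
import { join } from "./helpers.js"; // .js specifier     -> ./helpers.ts
```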
This functionality is **NOT RECOMMENDED** for general use with Deno:
1. It's not optimal for performance:
https://marvinh.dev/blog/speeding-up-javascript-ecosystem-part-2/
1. It makes tooling in the ecosystem more complex because it has to
understand this behaviour.
1. The "Deno way" is to be explicit about what you're doing. It's better
in the long run.
1. It doesn't work when published to the Deno registry because extension
probing with remote specifiers would be incredibly slow.
This is instead only recommended to help with migrating existing
projects to Deno. For example, it's very useful for getting CJS projects
written with import/export declarations working in Deno without
modifying module specifiers, and for supporting TS ESM projects written
with `./mod.js` specifiers.
This feature will output warnings to guide the user towards correcting
their specifiers. Additionally, quick fixes are provided in the LSP to
update these specifiers.
Landing changes required for
https://github.com/denoland/deno_core/pull/359
We needed to update 99_main.js and a whole load of tests.
API changes:
- setPromiseRejectCallback becomes setUnhandledPromiseRejectionHandler.
The function is now called from eventLoopTick.
- The promiseRejectMacrotaskCallback no longer exists, as this is
automatically handled in eventLoopTick.
- ops.op_dispatch_exception now takes a second parameter: in_promise.
The preferred way to call this op is now reportUnhandledException or
reportUnhandledPromiseRejection.
This commit adds support for a new `kv.watch()` method that allows
watching for changes to a key-value pair. This is useful for cases
where you want to be notified when a key-value pair changes, but
don't want to have to poll for changes.
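A minimal usage sketch (the key name is illustrative):
```ts
// Watch a single key and react whenever its value changes.
const kv = await Deno.openKv();

for await (const [entry] of kv.watch([["counter"]])) {
  console.log("counter is now", entry.value);
}
```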
---------
Co-authored-by: losfair <zhy20000919@hotmail.com>
This PR causes Deno to include more files in the module graph based on
the shape of a template literal provided to a dynamic import:
```ts
const file = await import(`./dir/${expr}`);
```
In this case, it will search the `dir` directory and its descendant
directories for any .js/.jsx/etc. modules and include them in the graph.
To opt out of this behaviour, move the template literal to a separate
line:
```ts
const specifier = `./dir/${expr}`
const file = await import(specifier);
```
Switch `ext/fetch` over to `resourceForReadableStream` to simplify and
unify implementation with `ext/serve`. This allows us to work in Rust
with resources only.
Two additional changes made to `resourceForReadableStream` were
required:
- Add an optional length to `resourceForReadableStream` which translates
to `size_hint`
- Fix a bug where writing to a closed stream that was full would panic
Rust 1.74 may have made this code temporarily valid in [#113126 Replace
old private-in-public diagnostic with type privacy
lints](https://github.com/rust-lang/rust/pull/113126), so we didn't
catch it at build time.
It fails in 1.73 and +nightly, however.
This commit adds unstable workspace support. This is an extremely
bare-bones and minimal first pass at this.
With this change `deno.json` supports specifying a `workspaces` key that
accepts a list of subdirectories. Each workspace can have its own import
map. The `"name"` and `"version"` properties are required in the
configuration file of each workspace:
```jsonc
// deno.json
{
  "workspaces": [
    "a",
    "b"
  ],
  "imports": {
    "express": "npm:express@5"
  }
}
```
```jsonc
// a/deno.json
{
  "name": "a",
  "version": "1.0.2",
  "imports": {
    "kleur": "npm:kleur"
  }
}
```
```jsonc
// b/deno.json
{
  "name": "b",
  "version": "0.51.0",
  "imports": {
    "chalk": "npm:chalk"
  }
}
```
The `--unstable-workspaces` flag is required to use this feature:
```
$ deno run --unstable-workspaces mod.ts
```
---------
Co-authored-by: David Sherret <dsherret@gmail.com>
Fixes #21121 and #19498
Migrates fully to rustls_tokio_stream. We no longer need to maintain our
own TlsStream implementation to properly support duplex.
This should fix a number of errors with TLS in WebSockets, HTTP, and
other places where it has been failing.
Makes the JavaScript Request use a v8::External opaque pointer to
directly refer to the Rust HttpRecord.
The HttpRecord is now reference counted. To avoid leaks the strong count
is checked at request completion.
Performance seems unchanged on the minimal benchmark: 118614 req/s on
this branch vs 118564 req/s on main, but variance between runs on my
laptop is pretty high.
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>