Previously:
```rust
pub struct TestDefinition {
  pub id: String,
  pub name: String,
  pub range: SourceRange,
  pub steps: Vec<TestDefinition>,
}

pub struct TestDefinitions {
  pub discovered: Vec<TestDefinition>,
  pub injected: Vec<lsp_custom::TestData>,
  pub script_version: String,
}
```
Now:
```rust
pub struct TestDefinition {
  pub id: String,
  pub name: String,
  pub range: Option<Range>,
  // True for 'injected' tests: not statically discovered, but added at runtime.
  pub is_dynamic: bool,
  pub parent_id: Option<String>,
  pub step_ids: HashSet<String>,
}

pub struct TestModule {
  pub specifier: ModuleSpecifier,
  pub script_version: String,
  pub defs: HashMap<String, TestDefinition>,
}
```
Storing the test tree as a literal tree diminishes the value of IDs, even
though vscode stores tests that way. Flattening the definitions into a map
keyed by ID makes all data easily accessible from `TestModule` and unifies the
interface between 'discovered' and 'injected' tests. This unblocks some
enhancements with respect to syncing tests between the LSP and the extension,
such as this TODO:
61f08d5a71/client/src/testing.ts (L251-L259)
and https://github.com/denoland/vscode_deno/issues/900. It should also give us
more flexibility overall.
`TestCollector` is cleaned up: it now stores a `&mut TestModule` directly and
registers tests as it comes across them via `TestModule::register()`. That
method keeps the redundant data in `TestDefinition::{parent_id, step_ids}`
consistent.
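Roughly, the invariant `register()` maintains looks like the sketch below (a trimmed-down illustration, not the actual implementation; fields such as `range` are omitted for brevity):
```rust
use std::collections::{HashMap, HashSet};

#[derive(Default)]
pub struct TestDefinition {
  pub id: String,
  pub name: String,
  pub is_dynamic: bool,
  pub parent_id: Option<String>,
  pub step_ids: HashSet<String>,
}

#[derive(Default)]
pub struct TestModule {
  pub defs: HashMap<String, TestDefinition>,
}

impl TestModule {
  /// Insert a definition and keep `parent_id` / `step_ids` in sync.
  pub fn register(
    &mut self,
    mut def: TestDefinition,
    parent_id: Option<String>,
  ) -> String {
    let id = def.id.clone();
    def.parent_id = parent_id.clone();
    if let Some(parent_id) = parent_id {
      // The parent's `step_ids` must mirror the child's `parent_id`.
      if let Some(parent) = self.defs.get_mut(&parent_id) {
        parent.step_ids.insert(id.clone());
      }
    }
    self.defs.insert(id.clone(), def);
    id
  }
}
```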
All of the messy conversions between `TestDescription`,
`LspTestDescription`, `TestDefinition`, `TestData`, and `TestIdentifier`
are cleaned up. They shouldn't have been using `impl From`, and their
implementations now have access to the full list of tests.
We never want tests to hit the real npm registry because this causes
test flakes. In addition, we set a sentinel "unset" value for
`NPM_CONFIG_REGISTRY` to ensure that all tests requiring npm go through
the test server.
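A minimal sketch of the idea (the sentinel URL, port, and helper name here are assumptions, not the actual test-util code): spawned test processes default to a registry URL that cannot resolve, so anything that forgets to opt into the local test server fails loudly instead of silently hitting npmjs.org.
```rust
use std::process::Command;

// Hypothetical sentinel: any request against it fails immediately.
const NPM_REGISTRY_SENTINEL: &str = "http://127.0.0.1:0/npm-registry-unset";

fn deno_test_command(use_npm_test_server: bool) -> Command {
  let registry = if use_npm_test_server {
    // Assumed address of the local npm test server.
    "http://localhost:4545/npm/registry/"
  } else {
    NPM_REGISTRY_SENTINEL
  };
  let mut cmd = Command::new("deno");
  cmd.env("NPM_CONFIG_REGISTRY", registry);
  cmd
}
```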
Adds `runtime/shared.rs` which is imported by both `build.rs` and the
rest of the crate, containing utilities used by both.
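One common way to reuse a single source file from both `build.rs` and the crate is a path-based module declaration; a sketch of that pattern (the actual file layout and names may differ):
```rust
// build.rs (sketch): reuse runtime/shared.rs from the build script.
#[path = "shared.rs"]
mod shared;

fn main() {
  // ... call helpers from `shared` while producing the snapshot ...
}

// The crate root would declare the same file as an ordinary module:
// mod shared;
```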
Renames the `snapshot_from_snapshot` feature to
`exclude_runtime_main_js` since that's what it does and it's relevant
outside of snapshotting when `__runtime_js_sources` is specified.
The fix for #20188 was not entirely correct -- we were unlocking the
global buffer incorrectly. This PR introduces a lock state that ensures
we only unlock a lock we have taken out.
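A sketch of the lock-state idea (names are illustrative, not the actual op code): the state machine records whether it took the lock, and unlocking is a no-op otherwise.
```rust
enum LockState {
  NotLocked,
  Locked,
}

struct GlobalBufferLock {
  state: LockState,
}

impl GlobalBufferLock {
  fn lock(&mut self) {
    if matches!(self.state, LockState::NotLocked) {
      // ... actually take out the global buffer lock here ...
      self.state = LockState::Locked;
    }
  }

  fn unlock(&mut self) {
    // Only release a lock that this state machine has taken out.
    if matches!(self.state, LockState::Locked) {
      // ... actually release the global buffer lock here ...
      self.state = LockState::NotLocked;
    }
  }
}
```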
When a TCP connection is force-closed (i.e. a browser refresh), the
underlying future we pass to Hyper is dropped which may cause us to try
to drop the body resource while the OpState lock is still held.
Preconditions for this bug to trigger:
- The body resource must have been taken
- The response must return a resource (which requires us to take the
OpState lock)
- The TCP connection must have been dropped before this
Fixes #20315 and #20298
As the title.
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
Disables `BenchContext::start()` and `BenchContext::end()` for low
precision benchmarks (less than 0.01s per iteration). Prints a warning
when they are used in such benchmarks, suggesting that they be removed.
```ts
Deno.bench("noop", { group: "noops" }, () => {});
Deno.bench("noop with start/end", { group: "noops" }, (b) => {
b.start();
b.end();
});
```
Before:
```
cpu: 12th Gen Intel(R) Core(TM) i9-12900K
runtime: deno 1.36.2 (x86_64-unknown-linux-gnu)
file:///home/nayeem/projects/deno/temp3.ts
benchmark time (avg) iter/s (min … max) p75 p99 p995
----------------------------------------------------------------------------- -----------------------------
noop 2.63 ns/iter 380,674,131.4 (2.45 ns … 27.78 ns) 2.55 ns 4.03 ns 5.33 ns
noop with start and end 302.47 ns/iter 3,306,146.0 (200 ns … 151.2 µs) 300 ns 400 ns 400 ns
summary
noop
115.14x faster than noop with start and end
```
After:
```
cpu: 12th Gen Intel(R) Core(TM) i9-12900K
runtime: deno 1.36.1 (x86_64-unknown-linux-gnu)
file:///home/nayeem/projects/deno/temp3.ts
benchmark time (avg) iter/s (min … max) p75 p99 p995
----------------------------------------------------------------------------- -----------------------------
noop 3.01 ns/iter 332,565,561.7 (2.73 ns … 29.54 ns) 2.93 ns 5.29 ns 7.45 ns
noop with start and end 7.73 ns/iter 129,291,091.5 (6.61 ns … 46.76 ns) 7.87 ns 13.12 ns 15.32 ns
Warning start() and end() calls in "noop with start and end" are ignored because it averages less than 0.01s per iteration. Remove them for better results.
summary
noop
2.57x faster than noop with start and end
```
Fixes https://github.com/denoland/vscode_deno/issues/743.
```ts
const items: string[] = ['foo', 'bar', 'baz'];
items.map
// ->
items.map(callbackfn) // auto-completes with argument placeholders.
```
---
We have our own setting for `suggest.completeFunctionCalls`, which must
be enabled:
```js
{
  "deno.suggest.completeFunctionCalls": true,
  // Re-implementation of:
  // "javascript.suggest.completeFunctionCalls": true,
  // "typescript.suggest.completeFunctionCalls": true,
}
```
But before this commit the actual implementation had been left as a TODO.
Fixes https://github.com/denoland/vscode_deno/issues/843.
Prevents step results from being reported twice. Refactors
`LspTestReporter` to use a complete `(test_id, descriptor)` map instead
of a brittle `LspTestReporter::stack`.
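Roughly, the shape of that change (types and names here are illustrative, not the real reporter): instead of pushing and popping an in-flight stack, the reporter keeps every descriptor addressable by id and tracks which ids have already been reported.
```rust
use std::collections::{HashMap, HashSet};

struct TestOrStepDescriptor {
  name: String,
  parent_id: Option<usize>,
}

#[derive(Default)]
struct ReporterState {
  descriptors: HashMap<usize, TestOrStepDescriptor>,
  reported: HashSet<usize>,
}

impl ReporterState {
  fn report_result(&mut self, id: usize) {
    // A (test_id, descriptor) map makes deduplication trivial: the first
    // report for an id wins, later ones are ignored.
    if !self.reported.insert(id) {
      return;
    }
    if let Some(desc) = self.descriptors.get(&id) {
      // ... forward `desc` and the result to the client ...
      let _ = desc;
    }
  }
}
```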
This patch adds a `remote` backend for `ext/kv`. This supports
connecting to Deno Deploy and potentially other services compatible with
the KV Connect protocol.
Bumps [trust-dns-server](https://github.com/bluejekyll/trust-dns) from
0.22.0 to 0.22.1.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/bluejekyll/trust-dns/releases">trust-dns-server's
releases</a>.</em></p>
<blockquote>
<h2>v0.22.1 - Deny response requests on the server</h2>
<h2>0.22.1</h2>
<h3>Fixed</h3>
<ul>
<li>(server) drop response messages <a
href="https://redirect.github.com/bluejekyll/trust-dns/issues/1952">#1952</a>
by <a href="https://github.com/djc"><code>@djc</code></a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="9f344b54cd"><code>9f344b5</code></a>
bump server and bin crates to 0.22.1</li>
<li><a
href="9e6e77293b"><code>9e6e772</code></a>
update CHANGELOG for 0.22.1</li>
<li><a
href="5f6278154d"><code>5f62781</code></a>
Regenerate the test SSL certificates</li>
<li><a
href="2fd2603171"><code>2fd2603</code></a>
server: drop response messages</li>
<li>See full diff in <a
href="https://github.com/bluejekyll/trust-dns/compare/v0.22.0...v0.22.1">compare
view</a></li>
</ul>
</details>
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
A few improvements to FFI types:
1. Export `PointerObject` for convenience. It's fairly commonly used in
library code and thus should be exported.
2. Fix various comments around `PointerValue` and `UnsafePointer` and
expand upon them to better reflect reality.
3. Instead of using a `Record<"value", type>[T]` lookup to determine the
type of an FFI symbol parameter, use a direct `T extends "value" ? type :
never` conditional.
The last part enables smuggling extra information into the parameter and
return value string declarations at the type level, e.g. instead of just
`"u8"` the parameter can be `"u8" & { [brand]: T }` for some `T extends
number`. That `T` can then be extracted from the parameter to form the
TypeScript function's parameter or return value type. Essentially, this
enables type-safe FFI!
The foremost use cases for this are enums and pointer safety. These are
implemented in the second commit, which should allow, in a
backwards-compatible way, pointer parameters to declare what sort of pointer
they mean, functions to declare what the API definition of the native
function is, and numbers to declare what enum they stand for (if any).
Deno.serve's fast streaming implementation was not keeping the request
body resource ID alive. We were taking the `Rc<Resource>` from the
resource table during the response, so a hairpin duplex response that
fed back the request body would work.
However, if any JS code attempted to read from the request body (which
requires the resource ID to be valid), the response would fail with a
difficult-to-diagnose "EOF" error.
This was affecting more complex duplex uses of `Deno.fetch` (though as
far as I can tell it went unreported).
Simple test:
```ts
const reader = request.body.getReader();
return new Response(
  new ReadableStream({
    async pull(controller) {
      const { done, value } = await reader.read();
      if (done) {
        controller.close();
      } else {
        controller.enqueue(value);
      }
    },
  }),
);
```
And then attempt to use the stream in duplex mode:
```ts
async function testDuplex(
  reader: ReadableStreamDefaultReader<Uint8Array>,
  writable: WritableStreamDefaultWriter<Uint8Array>,
) {
  await writable.write(new Uint8Array([1]));
  const chunk1 = await reader.read();
  assert(!chunk1.done);
  assertEquals(chunk1.value, new Uint8Array([1]));
  await writable.write(new Uint8Array([2]));
  const chunk2 = await reader.read();
  assert(!chunk2.done);
  assertEquals(chunk2.value, new Uint8Array([2]));
  await writable.close();
  const chunk3 = await reader.read();
  assert(chunk3.done);
}
```
In older versions of Deno, this would just lock up. I believe after
23ff0e722e, it started throwing a more
explicit error:
```
httpServerStreamDuplexJavascript => ./cli/tests/unit/serve_test.ts:1339:6
error: TypeError: request or response body error: error reading a body from connection: Connection reset by peer (os error 54)
at async Object.pull (ext:deno_web/06_streams.js:810:27)
```
Some people might think they need to import from this directory,
which could cause confusion and duplicate dependencies. Additionally,
the `vendor` directory has special behaviour in the language server, so
importing from the folder will definitely cause confusion and issues
there.
Fixes this error message:
```
error: missing field `now` at line 32 column 1
```
This would occur if someone used an old version of the deno_cache library to
populate the cache and then tried to load that data with the latest CLI. This
was a regression introduced in the last patch, when migrating to the
deno_cache_dir crate.
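For context, this class of error comes from deserializing cache metadata written before a field existed. One generic way to tolerate that is a serde default; this is purely illustrative (field names and types are assumptions, and not necessarily how this patch addresses it):
```rust
use std::collections::HashMap;

#[derive(serde::Deserialize)]
struct CachedMetadataSketch {
  headers: HashMap<String, String>,
  url: String,
  // Added in a newer version; older cache entries won't have it, so
  // deserialization must not fail when it is missing.
  #[serde(default)]
  now: Option<u64>,
}
```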
Properly handle the `SQLITE_BUSY` error code by retrying the
transaction.
Also wraps database initialization logic in a transaction to protect
against incomplete/concurrent initializations.
Fixes https://github.com/denoland/deno/issues/20116.
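A sketch of the retry pattern using rusqlite (illustrative only; the actual backend code is structured differently): when the database is locked by another writer, back off and rerun the whole transaction instead of surfacing the error.
```rust
use std::thread::sleep;
use std::time::Duration;

fn with_retried_transaction<T>(
  conn: &mut rusqlite::Connection,
  mut op: impl FnMut(&rusqlite::Transaction) -> rusqlite::Result<T>,
) -> rusqlite::Result<T> {
  let mut attempts = 0;
  loop {
    let tx = conn.transaction()?;
    match op(&tx).and_then(|value| tx.commit().map(|_| value)) {
      Err(rusqlite::Error::SqliteFailure(err, _))
        if err.code == rusqlite::ErrorCode::DatabaseBusy && attempts < 10 =>
      {
        // Another connection holds the write lock; back off and retry the
        // whole transaction (dropping `tx` rolled it back).
        attempts += 1;
        sleep(Duration::from_millis(20));
      }
      result => return result,
    }
  }
}
```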