Fixes https://github.com/denoland/deno/issues/19816
In that issue, I suggest switching over the other brotli functionality
to the Rust API provided by the `brotli` crate. Here, I only do that
with the `brotli_decompress` function to fix the bug with buffers longer
than 4096 bytes.
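A minimal round-trip sketch of the affected case, assuming the bug surfaces through the `node:zlib` brotli APIs (payload size chosen to exceed the 4096-byte threshold):
```ts
import { brotliCompressSync, brotliDecompressSync } from "node:zlib";

// A payload well over 4096 bytes; before this change, decompressing
// buffers this large could hit the bug.
const input = new TextEncoder().encode("a".repeat(10_000));
const compressed = brotliCompressSync(input);
const output = brotliDecompressSync(compressed);
console.log(output.byteLength); // expected: 10000
```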
Keys are expensive metadata. We track them for various purposes, e.g.
transaction conflict checks and key expiration.
This patch limits the total key size in an atomic operation to 80 KiB
(81920 bytes). This helps ensure efficiency in implementations.
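For illustration, a minimal sketch of an atomic operation whose keys count toward that budget (key names are made up):
```ts
const kv = await Deno.openKv();

// Every key referenced by the checks and mutations below contributes to the
// operation's total key size, which may not exceed 81920 bytes.
const res = await kv.atomic()
  .check({ key: ["users", "alice"], versionstamp: null })
  .set(["users", "alice"], { name: "Alice" })
  .commit();

console.log(res.ok);
```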
Fixes #19802.
Properly respect when clients do not have the `workspace/configuration`
capability, i.e. when an editor cannot provide scoped settings on
request from the LSP.
- Fix one spot where we weren't checking for the capability before
sending this request.
- For `enablePaths`, fall back to the settings passed in the
initialization options in more cases.
- Respect the `workspace/configuration` capability in the test harness
client.
See:
https://microsoft.github.io/language-server-protocol/specifications/lsp/3.17/specification/#workspace_configuration.
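For reference, the capability in question is advertised by the client in its `initialize` request, roughly like:
```json
{
  "capabilities": {
    "workspace": {
      "configuration": true
    }
  }
}
```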
Closes #14122.
Adds two extensions to `--allow-run` behaviour:
- When `--allow-run=foo` is specified and `foo` is found in the `PATH`
at startup, `RunDescriptor::Path(which("foo"))` is added to the
allowlist alongside `RunDescriptor::Name("foo")`. Currently only the
latter is.
- When run permission for `foo` is queried and `foo` is found in the
`PATH` at runtime, either `RunDescriptor::Path(which("foo"))` or
`RunDescriptor::Name("foo")` would qualify in the allowlist. Currently
only the latter does.
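A sketch of the intended behaviour (binary name and path are illustrative):
```ts
// Run with: deno run --allow-run=git main.ts
const byName = await Deno.permissions.query({ name: "run", command: "git" });
const byPath = await Deno.permissions.query({
  name: "run",
  command: "/usr/bin/git", // wherever `which git` resolves on this machine
});
// With this change, both queries should report "granted".
console.log(byName.state, byPath.state);
```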
We never want tests to hit the real npm registry because this causes
test flakes. In addition, we set a sentinel "unset" value for
`NPM_CONFIG_REGISTRY` to ensure that all tests requiring npm go through
the test server.
The fix for #20188 was not entirely correct -- we were unlocking the
global buffer incorrectly. This PR introduces a lock state that ensures
we only unlock a lock we have taken out.
When a TCP connection is force-closed (e.g. on a browser refresh), the
underlying future we pass to Hyper is dropped which may cause us to try
to drop the body resource while the OpState lock is still held.
Preconditions for this bug to trigger:
- The body resource must have been taken
- The response must return a resource (which requires us to take the
OpState lock)
- The TCP connection must have been dropped before this
Fixes #20315 and #20298
As described in the title.
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
Disables `BenchContext::start()` and `BenchContext::end()` for low-precision
benchmarks (less than 0.01s per iteration). Prints a warning when they are
used in such benchmarks, suggesting that they be removed.
```ts
Deno.bench("noop", { group: "noops" }, () => {});
Deno.bench("noop with start/end", { group: "noops" }, (b) => {
b.start();
b.end();
});
```
Before:
```
cpu: 12th Gen Intel(R) Core(TM) i9-12900K
runtime: deno 1.36.2 (x86_64-unknown-linux-gnu)
file:///home/nayeem/projects/deno/temp3.ts
benchmark                      time (avg)             iter/s        (min … max)           p75       p99      p995
----------------------------------------------------------------------------- -----------------------------
noop                            2.63 ns/iter  380,674,131.4   (2.45 ns … 27.78 ns)    2.55 ns   4.03 ns   5.33 ns
noop with start and end       302.47 ns/iter    3,306,146.0    (200 ns … 151.2 µs)     300 ns    400 ns    400 ns

summary
  noop
   115.14x faster than noop with start and end
```
After:
```
cpu: 12th Gen Intel(R) Core(TM) i9-12900K
runtime: deno 1.36.1 (x86_64-unknown-linux-gnu)
file:///home/nayeem/projects/deno/temp3.ts
benchmark                      time (avg)             iter/s        (min … max)           p75       p99      p995
----------------------------------------------------------------------------- -----------------------------
noop                            3.01 ns/iter  332,565,561.7   (2.73 ns … 29.54 ns)    2.93 ns   5.29 ns   7.45 ns
noop with start and end         7.73 ns/iter  129,291,091.5   (6.61 ns … 46.76 ns)    7.87 ns  13.12 ns  15.32 ns
Warning start() and end() calls in "noop with start and end" are ignored because it averages less than 0.01s per iteration. Remove them for better results.

summary
  noop
   2.57x faster than noop with start and end
```
Fixes https://github.com/denoland/vscode_deno/issues/743.
```ts
const items: string[] = ['foo', 'bar', 'baz'];
items.map
// ->
items.map(callbackfn) // auto-completes with argument placeholders.
```
---
We have our own setting for `suggest.completeFunctionCalls`, which must
be enabled:
```js
{
  "deno.suggest.completeFunctionCalls": true,
  // Re-implementation of:
  // "javascript.suggest.completeFunctionCalls": true,
  // "typescript.suggest.completeFunctionCalls": true,
}
```
But before this commit the actual implementation had been left as a TODO.
This PR adds a test reporter for the [Test Anything
Protocol](https://testanything.org).
It makes the following implementation decisions:
- No TODO pragma, as there is no such marker in `Deno.test`
- SKIP pragma for `ignore`d tests
- Test steps are treated as TAP14 subtests
  - Support for this in consumers seems spotty
  - Some consumers will incorrectly interpret these markers, resulting in
    unexpected output
  - Considering the lack of support, and to avoid implementation complexity,
    subtests are at most one level deep (all test steps are in the same
    subtest)
- To accommodate consumers that use comments to indicate test-suites
  (unspecced)
  - The test module path is output as a comment
  - This is disabled for `--parallel` testing
- Failure diagnostics are output as JSON, which is also valid YAML
  - The structure is not specified, so the format roughly follows the spec
    example:
```
---
message: "Failed with error 'hostname peebles.example.com not found'"
severity: fail
found:
hostname: 'peebles.example.com'
address: ~
wanted:
hostname: 'peebles.example.com'
address: '85.193.201.85'
at:
file: test/dns-resolve.c
line: 142
...
```
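As a rough illustration of the resulting output (the flag name and diagnostics are illustrative, assuming the reporter is selected with something like `--reporter=tap`):
```
$ deno test --reporter=tap
TAP version 14
# ./example_test.ts
ok 1 - passes
not ok 2 - fails
  ---
  {"message": "AssertionError: values are not equal", "severity": "fail"}
  ...
1..2
```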
Fixes https://github.com/denoland/vscode_deno/issues/843.
Prevents step results from being reported twice. Refactors
`LspTestReporter` to use a complete `(test_id, descriptor)` map instead
of a brittle `LspTestReporter::stack`.
This patch adds a `remote` backend for `ext/kv`. It supports connecting to
Deno Deploy and potentially other services compatible with the KV Connect
protocol.
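A hedged sketch of what connecting looks like (the URL shape and the `DENO_KV_ACCESS_TOKEN` environment variable follow Deno Deploy's KV Connect setup; the database ID is a placeholder):
```ts
// DENO_KV_ACCESS_TOKEN must be set in the environment for authentication.
const kv = await Deno.openKv(
  "https://api.deno.com/databases/<database-id>/connect",
);
await kv.set(["greeting"], "hello from a remote backend");
console.log(await kv.get(["greeting"]));
```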
Deno.serve's fast streaming implementation was not keeping the request
body resource ID alive. We were taking the `Rc<Resource>` from the
resource table during the response, so a hairpin duplex response that
fed back the request body would work.
However, if any JS code attempted to read from the request body (which
requires the resource ID to be valid), the response would fail with a
difficult-to-diagnose "EOF" error.
This was affecting more complex duplex uses of `Deno.fetch` (though as
far as I can tell was unreported).
Simple test:
```ts
const reader = request.body.getReader();
return new Response(
  new ReadableStream({
    async pull(controller) {
      const { done, value } = await reader.read();
      if (done) {
        controller.close();
      } else {
        controller.enqueue(value);
      }
    },
  }),
);
```
And then attempt to use the stream in duplex mode:
```ts
async function testDuplex(
  reader: ReadableStreamDefaultReader<Uint8Array>,
  writable: WritableStreamDefaultWriter<Uint8Array>,
) {
  await writable.write(new Uint8Array([1]));
  const chunk1 = await reader.read();
  assert(!chunk1.done);
  assertEquals(chunk1.value, new Uint8Array([1]));
  await writable.write(new Uint8Array([2]));
  const chunk2 = await reader.read();
  assert(!chunk2.done);
  assertEquals(chunk2.value, new Uint8Array([2]));
  await writable.close();
  const chunk3 = await reader.read();
  assert(chunk3.done);
}
```
In older versions of Deno, this would just lock up. I believe after
23ff0e722e, it started throwing a more
explicit error:
```
httpServerStreamDuplexJavascript => ./cli/tests/unit/serve_test.ts:1339:6
error: TypeError: request or response body error: error reading a body from connection: Connection reset by peer (os error 54)
at async Object.pull (ext:deno_web/06_streams.js:810:27)
```
Some people might think they need to import from this directory,
which could cause confusion and duplicate dependencies. Additionally,
the `vendor` directory has special behaviour in the language server, so
importing from the folder will definitely cause confusion and issues
there.
Properly handles the `SQLITE_BUSY` error code by retrying the
transaction.
Also wraps database initialization logic in a transaction to protect
against incomplete/concurrent initializations.
Fixes https://github.com/denoland/deno/issues/20116.
The goal of this PR is to address issue #20106 where a `TypeError`
occurs when the variables `uid` and `gid` from `userInfo()` in `node:os`
are reassigned if the user is on Windows. Both `uid` and `gid` are
marked as `const`, therefore producing a `TypeError` when the two are
reassigned.
This PR achieves that goal by marking `uid` and `gid` as `let`.
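A minimal repro sketch of the user-visible failure (on Windows, where the polyfill reassigns both values to -1):
```ts
import { userInfo } from "node:os";

// Before this change, the reassignment of the `const` uid/gid inside the
// polyfill threw a TypeError here on Windows.
console.log(userInfo());
```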
The goal of this PR is to address issue #19520 where Deno panics when
encountering an invalid SSL certificate.
This PR achieves that goal by removing an `.expect()` statement and
implementing a match statement on `tls_config` (found in
[/ext/net/ops_tls.rs](e071382768/ext/net/ops_tls.rs (L1058)))
to check whether the desired configuration is valid.
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
Handles ASCII escape chars in test and bench names, making test and bench
reporting more reliable. This is also tested in the "node:test" module
fixture.
This commit moves `snapshot_from_lockfile` function to [deno_npm
crate](https://github.com/denoland/deno_npm). This allows this function
to be called outside Deno CLI (in particular, Deno Deploy).
Renames the unstable `deno_modules` directory and corresponding settings
to `vendor` after feedback. Also causes the vendoring of the
`node_modules` directory, which can be disabled via
`--node-modules-dir=false` or `"nodeModulesDir": false`.
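A sketch of the resulting configuration in `deno.json`, assuming the renamed setting keeps the boolean shape of the old `deno_modules` option:
```js
// deno.json
{
  "vendor": true,
  // Opt out of vendoring the node_modules directory:
  "nodeModulesDir": false
}
```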
This changes the design of the manifest.json file to have a separate
"folders" map for mapping hashed directories. This allows, for example,
to add files in a folder like `http_localhost_8000/#testing_5de71/` and
have them be resolved automatically as long as their remaining
components are identity-mappable to the file system (not hashed). It
also saves space in the manifest.json file by only including the hashed
directory instead of each descendant file.
```
// manifest.json
{
  "folders": {
    "https://localhost/NOT_MAPPABLE/": "localhost/#not_mappable_5cefgh"
  },
  "modules": {
    "https://localhost/folder/file": {
      "headers": {
        "content-type": "application/javascript"
      }
    },
  }
}

// folder structure
localhost
  - folder
    - #file_2defn (note: I've made up the hashes in these examples)
  - #not_mappable_5cefgh
    - mod.ts
    - etc.ts
    - more_files.ts
```
This commit adds new "--deny-*" permission flags. These are complementary to
"--allow-*" flags.
These flags can be used to restrict access to certain resources, even if they
were granted using "--allow-*" flags or the "--allow-all" ("-A") flag.
E.g. specifying "--allow-read --deny-read" will result in a permission error,
while "--allow-read --deny-read=/etc" will allow read access to the whole file
system except the "/etc" directory.
Runtime permissions APIs ("Deno.permissions") were adjusted as well, mainly
by adding a new "PermissionStatus.partial" field. This field denotes that
while permission might be granted to the requested resource, it's only partial
(i.e. a "--deny-*" flag was specified that excludes some of the requested
resources).
E.g. specifying "--allow-read=foo/ --deny-read=foo/bar" and then querying for
permissions like "Deno.permissions.query({ name: "read", path: "foo/" })"
will return "PermissionStatus { state: "granted", onchange: null, partial: true }",
denoting that some of the subpaths don't have read access.
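That last example as a runnable sketch:
```ts
// Run with: deno run --allow-read=foo/ --deny-read=foo/bar main.ts
const status = await Deno.permissions.query({ name: "read", path: "foo/" });
console.log(status.state); // "granted"
console.log(status.partial); // true: foo/bar is excluded by --deny-read
```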
Closes #18804.
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Co-authored-by: Nayeem Rahman <nayeemrmn99@gmail.com>
This commit adds a "dot" reporter to "deno test" subcommand,
that can be activated using "--dot" flag.
It provides a concise output using:
- "." for passing test
- "," for ignored test
- "!" for failing test
User output is silenced and not printed to the console.
In non-TTY environments each result is printed on a separate line.
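An invented example run, for illustration only (six passing tests, one ignored, one failing):
```
$ deno test --dot
...,...!
```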
This commit provides a basic polyfill for the "node:test" module. Currently
only the top-level "test" function is polyfilled; all remaining functions from
that module throw "not implemented" errors.
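A minimal sketch of what the polyfill currently covers:
```ts
import test from "node:test";

// Only the top-level `test` function works; other exports from this module
// currently throw "not implemented" errors.
test("basic addition", () => {
  if (1 + 1 !== 2) throw new Error("math is broken");
});
```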
Closes https://github.com/denoland/deno/issues/17251
Closes #19970
This commit adds logic to retry failed module downloads once.
Both request and server errors are handled, and the retry is done after a
50 ms wait.
I'm not sure why, but sending SIGABRT to Deno on my machine as part of
this test causes it to lock up very badly, leaving it in an unkillable
`UE+` state.
This showed up after #19333, but was not caused by it.
Closes #17589.
```ts
Deno.bench("foo", async (t) => {
const resource = setup(); // not included in measurement
t.start();
measuredOperation(resource);
t.end();
resource.close(); // not included in measurement
});
```
This PR fixes #19818. The problem was that the new `InnerRequest` class did
not initialize the `urlList` and `urlListProcessed` fields that are used
during a request clone. The solution aims to be straightforward by simply
initializing the missing properties during the clone process. I also
implemented a cache for the `url` getter of the new `InnerRequest`, avoiding
the cost of calling `op_http_get_request_method_and_url`.
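A minimal repro sketch (the handler body is illustrative):
```ts
Deno.serve((request) => {
  // Cloning previously failed because urlList/urlListProcessed were left
  // uninitialized on the new InnerRequest.
  const cloned = request.clone();
  return new Response(`cloned url: ${cloned.url}`);
});
```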