fixes #20454
The current KV queues implementation assumes that `enqueue` and
`listenQueue` are called on the same instance of `Deno.Kv`. It's
possible for the same Deno process to open multiple KV instances pointing
to the same fs path, and in that case `listenQueue` should still get
notified of messages enqueued through a different KV instance.
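For illustration, a minimal sketch of the expected behavior (the file
path and payload are made up):
```ts
// Two `Deno.Kv` handles opened on the same path: a message enqueued
// through one handle should reach a `listenQueue` callback registered
// on the other.
const kvA = await Deno.openKv("./queues.db");
const kvB = await Deno.openKv("./queues.db");

kvB.listenQueue((msg) => {
  console.log("received:", msg);
});

await kvA.enqueue({ hello: "world" });
```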
This makes `CliNpmResolver` a trait. The terminology used is:
- **managed** - Deno manages the node_modules folder and does an
auto-install (ex. `ManagedCliNpmResolver`)
- **byonm** - "Bring your own node_modules" (ex. `ByonmCliNpmResolver`,
which is in this PR, but unimplemented at the moment)
Part of #18967
For the following example, if I set the encoding to `base64url`, it'll
throw an unexpected TypeError:
```ts
import { Buffer } from "node:buffer";
Buffer.from("IntcImhlbGxvXCI6XCJoZGQvZStpXCJ9Ig", "base64url").toString();
// error: Uncaught TypeError: src.subarray is not a function
// const buf = Buffer.from(
// ^
// at blitBuffer (ext:deno_node/internal/buffer.mjs:1779:15)
// at Uint8Array.base64urlWrite (ext:deno_node/internal/buffer.mjs:691:10)
// at Object.write (ext:deno_node/internal/buffer.mjs:2195:11)
// at Uint8Array.write (ext:deno_node/internal/buffer.mjs:794:14)
// at fromString (ext:deno_node/internal/buffer.mjs:214:22)
// at _from (ext:deno_node/internal/buffer.mjs:119:12)
// at Function.from (ext:deno_node/internal/buffer.mjs:157:10)
// at file:///Users/foodieats/temp/buffer1.ts:3:20
```
The error is caused by the `base64urlWrite` function: it should call
`forgivingBase64UrlDecode`, not `forgivingBase64UrlEncode`.
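With that change, the repro above decodes instead of throwing. The
expected output below is my own hand-decoding of the payload, shown only
for illustration:
```ts
import { Buffer } from "node:buffer";

// Same call as in the repro; with the decode fix it no longer throws.
const decoded = Buffer.from(
  "IntcImhlbGxvXCI6XCJoZGQvZStpXCJ9Ig",
  "base64url",
).toString();
console.log(decoded); // "{\"hello\":\"hdd/e+i\"}"
```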
Also fixed #20563.
This API creates hoops to jump through for the ongoing migration to the
`#[op2]` macro.
The overhead of supporting this API is non-trivial, and besides its
internal use in the test sanitizers it is very rarely used in the wild.
This helps reduce flakes where a test starts an HTTP server, makes a
request using fetch, and then shuts down the server. A new test then
starts a new server, but the connection pool still has a "not quite
closed yet" connection to the old server, so a new request to the new
server gets sent on the closed connection, which obviously errors out.
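A rough sketch of the flaky pattern (test names and port are made up):
```ts
Deno.test("first test", async () => {
  const server = Deno.serve({ port: 8000 }, () => new Response("ok"));
  const res = await fetch("http://localhost:8000/");
  await res.text();
  await server.shutdown();
});

Deno.test("second test", async () => {
  // A different server on the same address; if the fetch connection pool
  // still holds a half-closed connection to the previous server, this
  // request can be sent on it and error out.
  const server = Deno.serve({ port: 8000 }, () => new Response("ok"));
  const res = await fetch("http://localhost:8000/");
  await res.text();
  await server.shutdown();
});
```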
Right now, if one of the linters fails, all the other ones continue
running in the background. This fixes that by waiting until all linters
are done before settling.
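The actual change is in the CLI's Rust code; the idea, sketched in
TypeScript with hypothetical names:
```ts
// Hypothetical `runLinters`: collect every linter's result instead of
// bailing on the first rejection, so no linter keeps running in the
// background after a failure is reported.
async function runLinters(linters: Array<() => Promise<void>>): Promise<void> {
  const results = await Promise.allSettled(linters.map((lint) => lint()));
  const failures = results.filter(
    (r): r is PromiseRejectedResult => r.status === "rejected",
  );
  if (failures.length > 0) {
    throw new AggregateError(failures.map((f) => f.reason), "linting failed");
  }
}
```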
Previously this could flake on the op sanitizer because the awaited
`makeTempFile()` promise could leak out of the test. Now we ensure
the request is fully handled before returning.
We can go one level down in abstraction and avoid using the public
`ReadableStream` APIs.
This patch gives a ~5% perf boost on small `ReadableStream` responses
(a sketch of the benchmark setup follows the numbers):
```
Running 10s test @ http://localhost:8080/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   148.32us  108.95us   3.88ms   95.71%
    Req/Sec    33.24k     2.68k    37.94k    73.76%
  668188 requests in 10.10s, 77.74MB read
Requests/sec:  66162.91
Transfer/sec:      7.70MB
```
main:
```
Running 10s test @ http://localhost:8080/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   150.23us   67.61us   4.39ms   94.80%
    Req/Sec    31.81k     1.55k    35.56k    83.17%
  639078 requests in 10.10s, 74.36MB read
Requests/sec:  63273.72
Transfer/sec:      7.36MB
```
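The benchmark script isn't included here; this is a guess at the kind of
server the numbers above exercise (a tiny `ReadableStream` body behind
`Deno.serve`), shown only for context:
```ts
// Hypothetical benchmark target: a small ReadableStream response body.
Deno.serve({ port: 8080 }, () =>
  new Response(
    new ReadableStream({
      start(controller) {
        controller.enqueue(new TextEncoder().encode("Hello, World!"));
        controller.close();
      },
    }),
  ));
```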