When a TCP connection is force-closed (e.g. by a browser refresh), the
underlying future we pass to Hyper is dropped, which may cause us to try
to drop the body resource while the OpState lock is still held.
Preconditions for this bug to trigger (a rough repro sketch follows the list):
- The body resource must have been taken
- The response must return a resource (which requires us to take the
OpState lock)
- The TCP connection must have been dropped before this
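A rough repro sketch of these preconditions (my reconstruction; the file path and exact trigger are assumptions, not the original test):
```ts
// Sketch: the handler takes the request body, then responds with a
// resource-backed body; force-closing the TCP connection mid-response
// (e.g. a browser refresh) drops the response future Hyper is polling.
Deno.serve(async (req) => {
  await req.text(); // the body resource gets taken
  const file = await Deno.open("./large-file.bin", { read: true }); // hypothetical file
  return new Response(file.readable); // response backed by a resource
});
```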
Fixes #20315 and #20298
This patch adds a `remote` backend for `ext/kv`. It supports connecting
to Deno Deploy and, potentially, other services compatible with the
KV Connect protocol.
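A minimal usage sketch, assuming a placeholder endpoint URL and an access token supplied via `DENO_KV_ACCESS_TOKEN` (the exact URL shape depends on the service):
```ts
// Sketch only: the URL is a placeholder for a KV Connect endpoint.
const kv = await Deno.openKv("https://example.com/kv/connect");
await kv.set(["greeting"], "hello");
const entry = await kv.get<string>(["greeting"]);
console.log(entry.value);
kv.close();
```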
Deno.serve's fast streaming implementation was not keeping the request
body resource ID alive. We were taking the `Rc<Resource>` from the
resource table during the response, so a hairpin duplex response that
fed back the request body would work.
However, if any JS code attempted to read from the request body (which
requires the resource ID to be valid), the response would fail with a
difficult-to-diagnose "EOF" error.
This was affecting more complex duplex uses of `Deno.fetch` (though, as
far as I can tell, it was unreported).
Simple test:
```ts
const reader = request.body.getReader();
return new Response(
  new ReadableStream({
    async pull(controller) {
      const { done, value } = await reader.read();
      if (done) {
        controller.close();
      } else {
        controller.enqueue(value);
      }
    },
  }),
);
```
And then attempt to use the stream in duplex mode:
```ts
async function testDuplex(
  reader: ReadableStreamDefaultReader<Uint8Array>,
  writable: WritableStreamDefaultWriter<Uint8Array>,
) {
  await writable.write(new Uint8Array([1]));
  const chunk1 = await reader.read();
  assert(!chunk1.done);
  assertEquals(chunk1.value, new Uint8Array([1]));
  await writable.write(new Uint8Array([2]));
  const chunk2 = await reader.read();
  assert(!chunk2.done);
  assertEquals(chunk2.value, new Uint8Array([2]));
  await writable.close();
  const chunk3 = await reader.read();
  assert(chunk3.done);
}
```
In older versions of Deno, this would just lock up. I believe after
23ff0e722e, it started throwing a more
explicit error:
```
httpServerStreamDuplexJavascript => ./cli/tests/unit/serve_test.ts:1339:6
error: TypeError: request or response body error: error reading a body from connection: Connection reset by peer (os error 54)
at async Object.pull (ext:deno_web/06_streams.js:810:27)
```
Properly handle the `SQLITE_BUSY` error code by retrying the
transaction.
Also wrap the database initialization logic in a transaction to protect
against incomplete or concurrent initializations.
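A user-level sketch of the kind of contention this guards against (the file path is illustrative): two handles writing to the same SQLite-backed database at once, which could previously surface `SQLITE_BUSY`:
```ts
// Sketch: concurrent writers against one SQLite-backed KV file.
const a = await Deno.openKv("./kv-test.sqlite");
const b = await Deno.openKv("./kv-test.sqlite");
await Promise.all([
  a.set(["key", 1], "from a"),
  b.set(["key", 2], "from b"),
]);
a.close();
b.close();
```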
Fixes https://github.com/denoland/deno/issues/20116.
The goal of this PR is to address issue #19520, where Deno panics when
encountering an invalid SSL certificate.
This PR achieves that goal by removing an `.expect()` statement and
implementing a match statement on `tls_config` (found in
[/ext/net/ops_tls.rs](e071382768/ext/net/ops_tls.rs (L1058)))
to check whether the desired configuration is valid.
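A hedged user-level sketch: I'm assuming the invalid certificate reaches the TLS setup via `caCerts` on `Deno.connectTls` (the exact trigger in #19520 may differ); the point is that the failure now surfaces as a catchable error rather than a panic:
```ts
// Assumption: invalid PEM data passed as a CA certificate. Previously this
// path could panic; now it should reject with a catchable error.
try {
  await Deno.connectTls({
    hostname: "example.com",
    port: 443,
    caCerts: ["not a valid PEM certificate"],
  });
} catch (err) {
  console.error("TLS configuration rejected:", err);
}
```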
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
This PR fixes #19818. The problem was that the new `InnerRequest` class does not initialize the `urlList` and `urlListProcessed` fields that are used during a request clone. The solution aims to be straightforward: simply initialize the missing properties during the clone process. I also implemented a "cache" for the `url` getter of the new `InnerRequest`, avoiding the cost of calling `op_http_get_request_method_and_url` more than once.
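A rough illustration of the user-visible failure mode (my reconstruction, not code from the original report):
```ts
Deno.serve((req) => {
  // Cloning previously failed because urlList/urlListProcessed were not
  // initialized on the new InnerRequest; the url getter is now also cached.
  const copy = req.clone();
  return new Response(`cloned url: ${copy.url}`);
});
```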
This commit stabilizes "Deno.serve()", which becomes the
preferred way to create HTTP servers in Deno.
Documentation was adjusted for each overload of the "Deno.serve()"
API, and the API always binds to "127.0.0.1:8000" by default.
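For reference, two common overloads of the now-stable API:
```ts
// Shorthand: handler only, using the default bind address.
Deno.serve((_req) => new Response("Hello, world!"));

// With explicit options.
Deno.serve({ hostname: "127.0.0.1", port: 8080 }, (_req) => new Response("ok"));
```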
This PR changes Web IDL interfaces to be declared with `var` instead of
`class`, so that accessing them via `globalThis` does not raise type
errors.
Closes #13390.
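An illustration of the difference as seen from user code:
```ts
// With `declare class Event { ... }`, this access raised
//   TS2339: Property 'Event' does not exist on type 'typeof globalThis'.
// With `declare var Event: ...`, it type-checks.
const EventConstructor = globalThis.Event;
const ev = new EventConstructor("ready");
console.log(ev.type); // "ready"
```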
This is a fix for issue #19644, concerning the `parseCssColor` function
in `ext/console/01_console.js`. The changes are on lines 2756-2758. To
sum it up:
> The internal `parseCssColor` function currently parses 3/4-digit hex
> colors incorrectly. For example, it parses the string `#FFFFFF` as
> `[255, 255, 255]` (as expected), but returns `[240, 240, 240]` for
> `#FFF`, when it should return the same triplet as the former.
While it's not going to cause a fatal runtime error, it did bug me
enough to fix it real quick.
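A minimal sketch of the intended expansion (not the actual `ext/console` implementation): each short-hex digit is duplicated, so `#FFF` becomes `#FFFFFF`:
```ts
// Sketch: "#FFF" -> [255, 255, 255]. The buggy output 240 corresponds to
// treating `F` as 0xF0 instead of duplicating the digit to get 0xFF.
function parseShortHex(hex: string): [number, number, number] {
  const digits = hex.replace("#", "").slice(0, 3);
  const [r, g, b] = [...digits].map((d) => parseInt(d + d, 16));
  return [r, g, b];
}
console.log(parseShortHex("#FFF")); // [255, 255, 255]
```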
Reduce the GC pressure from the websocket event method by splitting it
into an event getter and a buffer getter.
Before:
165.9k msg/sec
After:
169.9k msg/sec
Related issue: https://github.com/denoland/deno/issues/19358.
This is a regression that seems to have been introduced in
https://github.com/denoland/deno/pull/18905. It looks to have been a
performance optimization.
The issue is probably easiest described with some code:
```ts
const target = new EventTarget();
const event = new Event("foo");
target.addEventListener("foo", () => {
  console.log("base");
  target.addEventListener("foo", () => {
    console.log("nested");
  });
});
target.dispatchEvent(event);
```
Essentially, the second event listener is attached while the `foo`
event is still being dispatched. That listener should not fire during
the same dispatch, but Deno currently fires it.
For the first implementation of node:http2, we'll use the internal
version of `Deno.serve`, which allows us to serve on a raw TCP
connection rather than a listener.
This is mostly a refactoring, plus the hook-up of `op_http_serve_on`,
which was never previously exposed (but was designed for this purpose).
`isFile`, `isDirectory`, and `isSymlink` are defined in `Deno.FileInfo`,
but `isBlockDevice`, `isCharacterDevice`, `isFIFO`, and `isSocket` are
not.
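A quick check of the newly declared fields (platform-dependent; on systems that don't report a given field the value may be null):
```ts
// Sketch: inspect the additional fields on a stat result (Unix path used
// for illustration).
const info = await Deno.stat("/dev/null");
console.log(info.isCharacterDevice, info.isBlockDevice, info.isFIFO, info.isSocket);
```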
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This runs our `js_unit_tests` and `node_unit_tests` in parallel, one
Rust test per JS unit test file. Some of our JS tests don't like running
in parallel due to port requirements, so this also makes those tests use
a specific port per file. This does not attempt to make the node-compat
tests work.
This commit changes the implementation of `ext/web` timers to use
"op_void_async_deferred" for timeouts of 0ms.
A 0ms timeout is meant to run at the end of the event loop tick, but the
Tokio timers we currently use to back timeouts have at least 1ms
resolution, so a 0ms timeout actually takes >1ms. This commit changes
that and runs 0ms timeouts at the end of the event loop tick.
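A rough illustration of the user-visible effect: with the deferred op, a 0ms timeout should fire at the end of the current tick instead of waiting on a >=1ms Tokio timer:
```ts
// Sketch: measure how long a 0ms timeout actually takes to fire.
const start = performance.now();
setTimeout(() => {
  console.log(`0ms timeout fired after ~${(performance.now() - start).toFixed(3)} ms`);
}, 0);
```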
One consequence is that "unrefing" a 0ms timer will actually keep
the event loop alive (which I believe makes sense; the test we had
only worked because the timeout took more than 1ms).
Ref https://github.com/denoland/deno/issues/19034