When a TCP connection is force-closed (e.g. by a browser refresh), the
underlying future we pass to Hyper is dropped, which may cause us to try
to drop the body resource while the OpState lock is still held.
Preconditions for this bug to trigger:
- The body resource must have been taken
- The response must return a resource (which requires us to take the
OpState lock)
- The TCP connection must have been dropped before this
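A minimal sketch of a handler that satisfies these preconditions (the file path is hypothetical, and this is not a guaranteed reproduction):
```ts
Deno.serve(async (req) => {
  // Take the request body resource.
  await req.text();
  // Respond with a resource-backed body; refreshing the browser mid-stream
  // force-closes the TCP connection while this response is in flight.
  const file = await Deno.open("./large-file.bin");
  return new Response(file.readable);
});
```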
Fixes #20315 and #20298
This patch adds a `remote` backend for `ext/kv`. This supports
connecting to Deno Deploy and potentially other services compatible with
the KV Connect protocol.
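As a sketch, using the new backend from JS could look like the following; the endpoint URL is a placeholder and authentication details are not covered here:
```ts
// Hypothetical KV Connect endpoint; a local path (or no argument) still uses
// the SQLite backend.
const kv = await Deno.openKv("https://example.com/databases/my-db/connect");
await kv.set(["greeting"], "hello");
const entry = await kv.get(["greeting"]);
console.log(entry.value); // "hello"
```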
Deno.serve's fast streaming implementation was not keeping the request
body resource ID alive. We were taking the `Rc<Resource>` from the
resource table during the response, so a hairpin duplex response that
fed back the request body would work.
However, if any JS code attempted to read from the request body (which
requires the resource ID to be valid), the response would fail with a
difficult-to-diagnose "EOF" error.
This was affecting more complex duplex uses of `Deno.fetch` (though as
far as I can tell it was unreported).
Simple test:
```ts
// Echo the request body back as the response body.
const reader = request.body.getReader();
return new Response(
  new ReadableStream({
    async pull(controller) {
      const { done, value } = await reader.read();
      if (done) {
        controller.close();
      } else {
        controller.enqueue(value);
      }
    },
  }),
);
```
And then attempt to use the stream in duplex mode:
```ts
async function testDuplex(
  reader: ReadableStreamDefaultReader<Uint8Array>,
  writable: WritableStreamDefaultWriter<Uint8Array>,
) {
  await writable.write(new Uint8Array([1]));
  const chunk1 = await reader.read();
  assert(!chunk1.done);
  assertEquals(chunk1.value, new Uint8Array([1]));
  await writable.write(new Uint8Array([2]));
  const chunk2 = await reader.read();
  assert(!chunk2.done);
  assertEquals(chunk2.value, new Uint8Array([2]));
  await writable.close();
  const chunk3 = await reader.read();
  assert(chunk3.done);
}
```
In older versions of Deno, this would just lock up. I believe after
23ff0e722e, it started throwing a more
explicit error:
```
httpServerStreamDuplexJavascript => ./cli/tests/unit/serve_test.ts:1339:6
error: TypeError: request or response body error: error reading a body from connection: Connection reset by peer (os error 54)
at async Object.pull (ext:deno_web/06_streams.js:810:27)
```
Properly handle the `SQLITE_BUSY` error code by retrying the
transaction.
Also wraps database initialization logic in a transaction to protect
against incomplete/concurrent initializations.
Fixes https://github.com/denoland/deno/issues/20116.
The goal of this PR is to address issue #19520 where Deno panics when
encountering an invalid SSL certificate.
This PR achieves that goal by removing an `.expect()` statement and
implementing a match statement on `tls_config` (found in
[/ext/net/ops_tls.rs](e071382768/ext/net/ops_tls.rs (L1058)))
to check whether the desired configuration is valid.
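As a sketch, the kind of call that can hit this path is starting a TLS listener with a malformed certificate; with this change it should surface an error instead of panicking (the PEM contents below are placeholders):
```ts
try {
  Deno.listenTls({
    port: 4443,
    cert: "-----BEGIN CERTIFICATE-----\nnot a real certificate\n-----END CERTIFICATE-----",
    key: "-----BEGIN PRIVATE KEY-----\nnot a real key\n-----END PRIVATE KEY-----",
  });
} catch (err) {
  console.error("invalid TLS configuration:", err);
}
```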
---------
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
This PR fixes #19818. The problem was that the new `InnerRequest` class did not initialize the `urlList` and `urlListProcessed` fields that are used when cloning a request. The solution aims to be straightforward: simply initialize the missing properties during the clone process. I also implemented a "cache" for the `url` getter of the new `InnerRequest`, avoiding the cost of calling `op_http_get_request_method_and_url`.
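A short sketch of the code path this fixes (a hypothetical handler, not code from the PR):
```ts
Deno.serve(async (req) => {
  // Cloning relies on urlList/urlListProcessed being initialized.
  const copy = req.clone();
  console.log(copy.url, await copy.text());
  return new Response("ok");
});
```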
This commit stabilizes "Deno.serve()", which becomes the
preferred way to create HTTP servers in Deno.
Documentation was adjusted for each overload of the "Deno.serve()"
API, and the API now binds to "127.0.0.1:8000" by default.
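A minimal usage example of the stabilized API:
```ts
// Binds to the default address and port described above.
Deno.serve((_req) => new Response("Hello, world!"));
```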
This PR changes Web IDL interfaces to be declared with `var` instead of
`class`, so that accessing them via `globalThis` does not raise type
errors.
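For example, with the interfaces declared via `var`, code like the following type-checks instead of reporting that the property does not exist on `typeof globalThis`:
```ts
const ResponseCtor = globalThis.Response;
const res = new ResponseCtor("hello");
console.log(res instanceof globalThis.Response); // true
```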
Closes #13390.
This is a fix for issue #19644, concerning the `parseCssColor` function
in the file `ext/console/01_console.js`. Changes made on lines
2756-2758. To sum it up:
> The internal `parseCssColor` function currently parses 3/4-digit hex
colors incorrectly. For example, it parses the string `#FFFFFF` as
`[255, 255, 255]` (as expected), but returns `[240, 240, 240]` for
`#FFF`, when it should return the same triplet as the former.
While it's not going to cause a fatal runtime error, it did bug me
enough to fix it real quick.
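As a hedged sketch of the underlying arithmetic (not the actual patch): a shorthand hex digit has to be duplicated, not just shifted into the high nibble.
```ts
// Wrong: treats "F" as 0xF0 = 240.
const wrong = parseInt("F", 16) * 16;
// Right: "F" expands to "FF" = 255 (equivalently, digit * 17).
const right = parseInt("F" + "F", 16);
console.log(wrong, right); // 240 255
```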
Reduce the GC pressure from the websocket event method by splitting it
into an event getter and a buffer getter.
Before: 165.9k msg/sec
After: 169.9k msg/sec
Related issue: https://github.com/denoland/deno/issues/19358.
This is a regression that seems to have been introduced in
https://github.com/denoland/deno/pull/18905. It looks to have been a
performance optimization.
The issue is probably easiest described with some code:
```ts
const target = new EventTarget();
const event = new Event("foo");
target.addEventListener("foo", () => {
  console.log("base");
  target.addEventListener("foo", () => {
    console.log("nested");
  });
});
target.dispatchEvent(event);
```
Essentially, the second event listener is attached while the `foo`
event is still being dispatched. That second listener should not fire
during this dispatch, but in Deno it currently does.
For the first implementation of node:http2, we'll use the internal
version of `Deno.serve` which allows us to listen on a raw TCP
connection rather than a listener.
This is mostly a refactoring, plus the hooking up of `op_http_serve_on`,
which was never previously exposed (but was designed for this purpose).
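A sketch of the kind of `node:http2` server this implementation is meant to back (not code from this change):
```ts
import http2 from "node:http2";

// Plaintext (h2c) HTTP/2 server using the node:http2 API surface.
const server = http2.createServer((_req, res) => {
  res.end("hello over HTTP/2");
});
server.listen(8000);
```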
`isFile`, `isDirectory`, `isSymlink` are defined in `Deno.FileInfo`, but
`isBlockDevice`, `isCharacterDevice`, `isFIFO`, `isSocket` are not
defined.
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This runs our `js_unit_tests` and `node_unit_tests` in parallel, one
Rust test per JS unit test file. Some of our JS tests don't like running
in parallel due to port requirements, so this also makes those use a
specific port per file. This does not attempt to make the node-compat
tests work.
This commit changes the implementation of `ext/web` timers by using
"op_void_async_deferred" for timeouts of 0ms.
A 0ms timeout is meant to run at the end of the event loop tick, but
the Tokio timers we currently use to back timeouts have at least 1ms
resolution, so a 0ms timeout actually takes >1ms. This commit changes
that and runs 0ms timeouts at the end of the event loop tick.
One consequence is that "unrefing" a 0ms timer will actually keep
the event loop alive (which I believe actually makes sense; the test
we had only worked because the timeout took more than 1ms).
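A small illustration of the unref behavior described above (behavior as described in this commit message, not a test from the PR):
```ts
// After this change, a 0ms timeout completes via a deferred op at the end of
// the current tick, so unref'ing it no longer lets the event loop exit before
// it fires.
const id = setTimeout(() => console.log("fires at end of tick"), 0);
Deno.unrefTimer(id);
```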
Ref https://github.com/denoland/deno/issues/19034
This commit changes the unstable `Deno.serve()` API to return a
`Deno.Server` object that has a `finished` field.
This change is done in preparation to be able to ref/unref the HTTP
server.
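A sketch of the new shape:
```ts
const server = Deno.serve((_req) => new Response("ok"));
// `finished` resolves once the server has shut down.
server.finished.then(() => console.log("server closed"));
```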
Fixes for various `Attempted to access invalid request` bugs (#19058,
#15427, #17213).
We did not wait for both a drop event and a completion event before
removing items from the slab table. This change ensures that we do.
In addition, the slab methods are refactored out into `slab.rs` for
maintainability.
Currently the `multipart/form-data` parser in
`Request.prototype.formData` and `Response.prototype.formData` decodes
non-ASCII filenames incorrectly, as if they were encoded in Latin-1
rather than UTF-8. This happens because the header section of each
`multipart/form-data` entry is decoded as Latin-1 in order to be parsed
with `Headers`, which only allows `ByteString`s, but the names and
filenames are never decoded correctly. This PR fixes this as a
post-processing step.
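A quick way to see the effect is round-tripping a form through `Response.prototype.formData` (the filename below is just an example):
```ts
const form = new FormData();
form.append("file", new File(["hi"], "日本語.txt"));
const parsed = await new Response(form).formData();
// Previously this came back mangled, as if decoded from Latin-1.
console.log((parsed.get("file") as File).name); // "日本語.txt"
```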
Note that the `multipart/form-data` parsing for these APIs in the Fetch
spec is very much underspecified, and it does not specify that names and
filenames must be decoded as UTF-8. However, it does require that the
bodies of non-`File` entries are decoded as UTF-8, and in browsers,
names and filenames always use the same encoding as the body.
Closes #19142.
`Content-Encoding: gzip` support for `Deno.serve`. This doesn't support
Brotli (`br`) yet; however, it should not be difficult to add. Heuristics
for compression are modelled after those in `Deno.serveHttp`.
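For illustration, a handler like the following should now have its response gzip-compressed when the client sends `Accept-Encoding: gzip` and the heuristics allow it (a sketch, not one of the tests):
```ts
Deno.serve((_req) =>
  new Response("a".repeat(10_000), {
    headers: { "content-type": "text/plain" },
  })
);
```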
Tests are provided to ensure that the gzip compression is correct. We
chunk a number of different streams (zeros, hard-to-compress data,
already-gzipped data) in a number of different ways (regular, random,
large/small, small/large).
Fixes #16699 and #18960 by ensuring that we release our HTTP
`spawn_local` tasks when the HTTP resource is dropped.
Because our cancel handle was being projected from the resource via
`RcMap`, the resource was never `Drop`ped. By splitting the handle out
into its own `Rc`, we can avoid keeping the resource alive and let it
drop to cancel everything.
`new Deno.KvU64(1n) + 2n == 3n` is now true.
`new Deno.KvU64(1n)` is now inspected as `[Deno.KvU64: 1n]`
(`Object(1n)` is inspected as `[BigInt: 1n]`).
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This implements HTTP/2 prior-knowledge connections, allowing clients to
request HTTP/2 over plaintext or TLS-without-ALPN connections. If a
client requests a specific protocol via ALPN (`h2` or `http/1.1`),
however, the protocol is forced and must be used.
This is a rewrite of the `Deno.serve` API to live on top of hyper
1.0-rc3. The code should be more maintainable long-term, and avoids some
of the slower mpsc patterns that made the older code less efficient than
it could have been.
Missing features:
- `upgradeHttp` and `upgradeHttpRaw` (`upgradeWebSocket` is available,
however).
- Automatic compression is unavailable on responses.
Fixes https://github.com/denoland/deno/issues/18700
Timeline of the events that led to the bug:
1. WebSocket handshake completes.
2. Server sits in `read_frame`, holding an AsyncRefCell borrow of the
WebSocket stream.
3. Client sends a TXT frame after some time.
4. Server receives the frame and goes back to `read_frame`.
5. After some time, the server starts a `write_frame`, but `read_frame` is
still holding a borrow!
^--- Locked. `read_frame` needs to complete so we can resume the write.
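A rough sketch of a server that exercises this pattern (writes issued while the read loop is active); illustrative only, not a guaranteed reproduction:
```ts
Deno.serve((req) => {
  const { socket, response } = Deno.upgradeWebSocket(req);
  socket.onmessage = (e) => {
    // The write happens while the internal read_frame loop holds the stream.
    setTimeout(() => socket.send(`echo: ${e.data}`), 50);
  };
  return response;
});
```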
This commit changes all writes to directly borrow the
`fastwebsocket::WebSocket` resource under the assumption that it won't
affect ongoing reads.
This commit updates the `Deno.Kv` API to return the new committed
versionstamp for the mutated data from `db.set` and `ao.commit`. This is
returned in the form of a `Deno.KvCommitResult` object that has a
`versionstamp` property.
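Sketch of the new return value (unstable `Deno.Kv` API):
```ts
const kv = await Deno.openKv();
const result = await kv.set(["users", "alice"], { name: "Alice" });
// `result` is a Deno.KvCommitResult carrying the committed versionstamp.
console.log(result.versionstamp);
```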
Previously the mapping between `AnyValue::Bool` and `KeyPart::Bool` was
inverted.
This patch also allows using the empty key `[]` as range start/end to
`snapshot_read`.