This PR updates `deno run` to fall back to executing tasks when there is
no script with the specified name. If there is both a script and a task
with the same name, `deno run` will prioritise executing the script.
This moves YAML formatting behind an unstable flag for Deno 1.46. This
makes it opt-in to start; we can then remove the flag to enable it by
default in a later version of Deno.
This can be enabled with `deno fmt --unstable-yaml` or by specifying the
following in a deno.json file:
```json
{
"unstable": ["fmt-yaml"]
}
```
I noticed
[`set_response_body`](ce42f82b5a/ext/http/service.rs (L439-L443))
was unexpectedly hot in profiles, with most of the time being spent in
`memmove`.
It turns out that `ResponseBytesInner` was _massive_ (5624 bytes), so
every time we moved a `ResponseBytesInner` (for instance in
`set_response_body`) we were doing a >5 KB memmove, which adds up pretty
quickly.
This PR boxes the two larger variants (the compression streams),
shrinking `ResponseBytesInner` to a reasonable 48 bytes.
---
Benchmarked with a simple hello world server:
```ts
// hello-server.ts
Deno.serve((_req) => {
  return new Response("Hello world");
});

// run with `deno run -A hello-server.ts`
// in a separate terminal: `wrk -d 10s http://127.0.0.1:8000`
```
Main:
```
Running 10s test @ http://127.0.0.1:8000/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    53.39us    9.53us   0.98ms   92.78%
    Req/Sec    86.57k     3.56k   91.58k    91.09%
  1739319 requests in 10.10s, 248.81MB read
Requests/sec: 172220.92
Transfer/sec:     24.64MB
```
This PR:
```
Running 10s test @ http://127.0.0.1:8000/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    45.44us    8.49us   0.91ms   90.04%
    Req/Sec   100.65k     2.26k  102.65k    96.53%
  2022296 requests in 10.10s, 289.29MB read
Requests/sec: 200226.20
Transfer/sec:     28.64MB
```
So a nice ~15% bump. (With response body compression, the gain is ~10%
for gzip and neutral for brotli)
Uses [sui](https://github.com/littledivy/sui) to inject metadata as a
custom section in the denort binary.
Metadata is stored as a Mach-O segment on macOS and as a PE `RT_RCDATA`
resource on Windows.
Fixes #11154
Fixes https://github.com/denoland/deno/issues/17753
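With this change, the output of `deno compile` can be code-signed as usual, for example: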
```sh
deno compile app.tsx
# on macOS
codesign --sign - ./app
# on Windows
signtool sign /fd SHA256 .\app.exe
```
---------
Signed-off-by: Divy Srivastava <dj.srivastava23@gmail.com>
- upgrades to V8 12.8
- optimizes DataView BigInt methods (see the snippet below)
- fixes global interceptors
- includes CPED methods for ALS
- fixes global resolution
- makes global resolution consistent using host_defined_options. This
was originally a separate patch, but due to the global interceptor bug
it needs to be included in this PR for all tests to pass.
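For reference, the DataView methods in question are the 64-bit BigInt accessors; a minimal snippet (illustrative only, not from this PR) exercising them:

```ts
// DataView's 64-bit BigInt accessors, the methods covered by the
// optimization mentioned above.
const view = new DataView(new ArrayBuffer(16));

view.setBigUint64(0, 123456789012345678n);
view.setBigInt64(8, -42n);

console.log(view.getBigUint64(0)); // 123456789012345678n
console.log(view.getBigInt64(8)); // -42n
```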
Fixes https://github.com/denoland/deno/issues/24756. Fixes
https://github.com/denoland/deno/issues/24796.
This also gets vitest working when using
[`--pool=forks`](https://vitest.dev/guide/improving-performance#pool)
(which is the default as of vitest 2.0). Ref
https://github.com/denoland/deno/issues/23882.
---
This PR resolves a handful of issues with child_process IPC. In
particular:
- We didn't support sending typed array views over IPC
- Opening an IPC channel resulted in the event loop never exiting
- Sending a `null` over IPC would terminate the channel
- There was some UB in the read implementation (transmuting an `&[u8]`
to `&mut [u8]`)
- The `send` method wasn't returning anything, so there was no way to
signal backpressure. (This also made the `child_process_ipc.mjs`
benchmark misleading, as it tried to respect backpressure; that gave
Node much worse results at larger message sizes, and gave us much worse
results at smaller message sizes.)
- We weren't setting up the `channel` property on the `process` global
(or on the `ChildProcess` object), and also didn't have a way to
ref/unref the channel
- Calling `kill` multiple times (or disconnecting the channel, then
calling kill) would throw an error
- Node couldn't spawn a deno subprocess and communicate with it over IPC
(a minimal example is sketched below)
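To illustrate the last point, here is a rough sketch of a Node parent talking to a Deno child over an IPC channel (file names and messages are hypothetical, not taken from this PR):

```ts
// parent.mjs -- run with `node parent.mjs`
import { spawn } from "node:child_process";

// The extra "ipc" entry in stdio opens an IPC channel to the child.
const child = spawn("deno", ["run", "-A", "child.mjs"], {
  stdio: ["inherit", "inherit", "inherit", "ipc"],
});

child.on("message", (msg) => {
  console.log("parent received:", msg);
  child.disconnect();
});

// `send` returns false when the channel is backed up (backpressure).
child.send({ hello: "from node" });
```

```ts
// child.mjs -- the Deno side uses the Node-compat `process` IPC API
import process from "node:process";

process.on("message", (msg) => {
  process.send({ echo: msg });
});
```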