mirror of https://github.com/denoland/deno.git synced 2024-11-25 15:29:32 -05:00
Commit graph

1448 commits

Author SHA1 Message Date
denobot
72c276d61a
1.37.1 (#20703)
Bumped versions for 1.37.1

Co-authored-by: littledivy <littledivy@users.noreply.github.com>
2023-09-27 11:54:44 +05:30
Igor Zinkovsky
f0a022bed4
fix(kv_queues): graceful shutdown (#20627)
This fixes the `TypeError: Database closed` error during shutdown.
2023-09-26 20:06:57 -07:00
David Sherret
dcb00bb9b8
chore: slight cleanup in npm resolvers (#20692) 2023-09-26 16:42:39 -04:00
Bartek Iwańczuk
3c0d6de155
refactor: rewrite ext/node/crypto to op2 macro (#20675) 2023-09-26 12:07:04 +00:00
Laurence Rowe
8fcea5966c
refactor(ext/http): use scopeguard defer to handle async drop (#20652)
Use the [scopeguard](https://docs.rs/scopeguard/) defer macro to run
cleanup code for `new_slab_future`.
This means it can be a single async function, avoiding the need to
create a struct and implement `PinnedDrop`.

Async cleanup in Rust is awkward because async functions may be
cancelled at any await point when their Future is dropped.
The scopeguard approach comes from the following articles:
* [How to think about `async`/`await` in
Rust](http://cliffle.com/blog/async-inversion/)
* [Async Cancellation
I](https://blog.yoshuawuyts.com/async-cancellation-1/) (Reddit
[discussion](https://www.reddit.com/r/rust/comments/qrhg39/blog_post_async_cancellation/))
2023-09-26 05:42:48 -06:00
Matt Mastracci
a27ee8f368
fix(ext/http): ensure that resources are closed when request is cancelled (#20641)
Builds on top of #20622 to fix #10854
2023-09-25 17:23:55 +02:00
Bartek Iwańczuk
b2abae4771
refactor: rewrite more ops to op2 (#20666) 2023-09-24 22:07:22 +00:00
Aapo Alasuutari
cb9ab9c3ac
fix(ext/node): Fix invalid length variable reference in blitBuffer (#20648) 2023-09-24 13:48:23 +03:00
Mikhail
0e2637f851
fix(ext/node): simplified array.from + map (#20653)
`Array.from` has an optional second argument (a map function), so a separate
`map` call is not required in this case.
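For illustration, the simplified pattern looks roughly like this (a sketch; the exact call site in ext/node may differ):

```js
// Building an array and then mapping over it in a second pass.
const codes = Array.from("abc").map((c) => c.charCodeAt(0));

// Array.from accepts a map function as its second argument, so the
// intermediate array and the separate .map() call are unnecessary.
const codesDirect = Array.from("abc", (c) => c.charCodeAt(0));

console.log(codes, codesDirect); // [ 97, 98, 99 ] [ 97, 98, 99 ]
```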
2023-09-24 11:23:25 +02:00
Bartek Iwańczuk
68851d6f37
refactor: rewrite ops to op2 macro (#20628)
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
2023-09-23 19:33:31 +00:00
Bartek Iwańczuk
99dd8097c3
refactor: rewrite ext/node/http2 to op2 macro (#20629) 2023-09-23 10:25:36 -06:00
Matt Mastracci
06297d952d
feat(ext/web): use readableStreamDefaultReaderRead in resourceForReadableStream (#20622)
We can go one level down in abstraction and avoid using the public
`ReadableStream` APIs.

This patch gives a ~5% perf boost on small ReadableStreams:

```
Running 10s test @ http://localhost:8080/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   148.32us  108.95us   3.88ms   95.71%
    Req/Sec    33.24k     2.68k   37.94k    73.76%
  668188 requests in 10.10s, 77.74MB read
Requests/sec:  66162.91
Transfer/sec:      7.70MB
```

main:

```
Running 10s test @ http://localhost:8080/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   150.23us   67.61us   4.39ms   94.80%
    Req/Sec    31.81k     1.55k   35.56k    83.17%
  639078 requests in 10.10s, 74.36MB read
Requests/sec:  63273.72
Transfer/sec:      7.36MB
```
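Conceptually, the resource wrapper now pulls chunks through the stream's default reader rather than the higher-level public helpers. In user-land terms the loop looks something like this (an illustrative sketch using the public reader API, not the internal `readableStreamDefaultReaderRead` call):

```js
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("hello"));
    controller.close();
  },
});

// Pull chunks by calling read() on the default reader directly, rather than
// going through for-await or other higher-level convenience APIs.
const reader = stream.getReader();
for (;;) {
  const { value, done } = await reader.read();
  if (done) break;
  console.log(value.length, "bytes");
}
```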
2023-09-23 14:55:28 +00:00
Bartek Iwańczuk
1ad097c4bf
refactor: rewrite ops using i64/usize to op2 (#20647) 2023-09-23 14:04:47 +02:00
Divy Srivastava
75a724890d
fix(node): supported arguments to randomFillSync (#20637)
Fixes https://github.com/denoland/deno/issues/20634
2023-09-23 10:04:55 +02:00
Igor Zinkovsky
035df85732
feat(kv_queues): increase max queue delay to 30 days (#20626) 2023-09-22 09:40:35 -07:00
Alessandro Scandone
15cfb67551
fix(node/package_json): Avoid panic when "exports" field is null (#20588)
Fixes #20558

Implementation: when the package.json `exports` field is `null`, treat it as
if it were not set.
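A minimal sketch of that behaviour (the `resolvePackageExports` helper is hypothetical; the real resolver lives in the Rust npm-resolution code):

```js
// Hypothetical helper illustrating the fix: a package.json `exports` of null
// is treated exactly like a package.json without an "exports" field.
function resolvePackageExports(packageJson) {
  const { exports } = packageJson;
  return exports === null ? undefined : exports;
}

console.log(resolvePackageExports({ name: "pkg", exports: null })); // undefined
console.log(resolvePackageExports({ name: "pkg", exports: "./mod.js" })); // "./mod.js"
```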
2023-09-22 11:21:38 +02:00
Marcos Casagrande
65a94a6176
perf(ext/fetch): use new instead of createBranded (#20624)
This PR optimizes `fromInner*` methods of `Request` / `Header` /
`Response` used by `Deno.serve` and `fetch` by using `new` instead of
`ObjectCreate` from `createBranded`.

The "brand" is created by passing `webidl.brand` to the constructor
instead.


142449ecab/ext/webidl/00_webidl.js (L1001-L1005)

### Benchmark
```js
const createBranded = Symbol("create branded");
const brand = Symbol("brand");
class B {
  constructor(init) {
    if (init === createBranded) {
      this[brand] = brand;
    }
  }
}

Deno.bench("Object.create(protoype)", () => {
  Object.create(B.prototype);
});

Deno.bench("new Class", () => {
  new B(createBranded);
});
```

```
cpu: 13th Gen Intel(R) Core(TM) i9-13900H
runtime: deno 1.37.0 (x86_64-unknown-linux-gnu)

benchmark                    time (avg)        iter/s             (min … max)       p75       p99      p995
----------------------------------------------------------------------------- -----------------------------
Object.create(prototype)      8.74 ns/iter 114,363,610.3    (7.32 ns … 26.02 ns)   8.65 ns  13.39 ns  14.47 ns
new Class                     3.05 ns/iter 328,271,012.2      (2.78 ns … 9.1 ns)   3.06 ns   3.46 ns    3.5 ns
```
2023-09-21 20:06:42 -06:00
Bartek Iwańczuk
142449ecab
refactor: rewrite some ops to op2 macro (#20603) 2023-09-21 08:08:23 -06:00
Divy Srivastava
cf6f649829
fix(node): point process.version to Node 18.18.0 LTS (#20597)
Fixes https://github.com/denoland/deno/issues/20590
2023-09-21 06:44:37 +00:00
Matt Mastracci
0981aefbdc
fix(ext/web): Aggregate small packets for Resource implementation of ReadableStream (#20570)
Fixes: #20569 by introducing a custom replacement for the tokio mpsc
channel that is byte-size backpressure-aware.

Using the testcase in the linked bug, we see all the small writes
aggregated into a single packet and HTTP frame.

```
10:39 $ nc localhost 8000
GET / HTTP/1.1

HTTP/1.1 200 OK
content-type: text/plain
vary: Accept-Encoding
transfer-encoding: chunked
date: Tue, 19 Sep 2023 16:39:13 GMT

A
0
1
2
3
4
```

This patch:

```
Running 10s test @ http://localhost:8080/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   157.47us  194.89us   9.53ms   98.97%
    Req/Sec    31.37k     1.56k   34.73k    85.15%
  630407 requests in 10.10s, 73.35MB read
Requests/sec:  62428.12
Transfer/sec:      7.26MB
```

main:

```
Running 10s test @ http://localhost:8080/
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   343.75us  200.48us  10.41ms   98.25%
    Req/Sec    14.64k   806.52    16.98k    84.65%
  294018 requests in 10.10s, 39.82MB read
Requests/sec:  29109.91
Transfer/sec:      3.94MB
```
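A handler in the spirit of the linked testcase (a sketch, not the testcase itself) enqueues many tiny chunks, which previously each became their own packet and HTTP frame:

```js
// Each enqueue is only a couple of bytes; with byte-size-aware backpressure
// the runtime can coalesce them into a single packet and HTTP frame instead
// of flushing one frame per chunk.
Deno.serve({ port: 8000 }, () => {
  const body = new ReadableStream({
    start(controller) {
      const enc = new TextEncoder();
      for (let i = 0; i < 5; i++) {
        controller.enqueue(enc.encode(`${i}\n`));
      }
      controller.close();
    },
  });
  return new Response(body, { headers: { "content-type": "text/plain" } });
});
```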

---------

Co-authored-by: Bert Belder <bertbelder@gmail.com>
2023-09-20 11:23:58 -06:00
Bartek Iwańczuk
d77f3fba03
refactor: rewrite BC, cache exts to op2 (#20486)
Co-authored-by: Matt Mastracci <matthew@mastracci.com>
2023-09-19 20:39:27 -06:00
denobot
997aa604df
1.37.0 (#20574)
Co-authored-by: David Sherret <dsherret@gmail.com>
2023-09-19 20:29:17 +00:00
Matt Mastracci
612818d043
fix(cli): ensure that an exception in getOwnPropertyDescriptor('constructor') doesn't break Deno.inspect (#20568)
Fixes #20561
2023-09-19 18:24:19 +00:00
Marcos Casagrande
4960b6659c
perf(ext/streams): optimize async iterator (#20541)
This PR optimizes the `ReadableStream` async iterator.

### Benchmarks

```js
Deno.bench("Stream - iterator", async () => {
  const stream = new ReadableStream({
    start(controller) {
      controller.enqueue(new Uint8Array([97]));
      controller.enqueue(new Uint8Array([97]));
      controller.close();
    },
  });

  for await (const chunk of stream) {}
});
```

**main**

`2 chunks`
```
cpu: 13th Gen Intel(R) Core(TM) i9-13900H
runtime: deno 1.36.4 (x86_64-unknown-linux-gnu)

benchmark              time (avg)        iter/s             (min … max)       p75       p99      p995
----------------------------------------------------------------------- -----------------------------
Stream - iterator      12.45 µs/iter      80,295.5   (10.5 µs … 281.12 µs)  12.13 µs  26.71 µs  33.63 µs
```
`20 chunks`

```
benchmark              time (avg)        iter/s             (min … max)       p75       p99      p995
----------------------------------------------------------------------- -----------------------------
Stream - iterator      32.99 µs/iter      30,312.2    (28.13 µs … 1.21 ms)   31.8 µs  81.82 µs 179.93 µs
```
---

**this PR**

`2 chunks`
```
cpu: 13th Gen Intel(R) Core(TM) i9-13900H
runtime: deno 1.36.4 (x86_64-unknown-linux-gnu)

benchmark              time (avg)        iter/s             (min … max)       p75       p99      p995
----------------------------------------------------------------------- -----------------------------
Stream - iterator       9.37 µs/iter     106,700.8   (8.35 µs … 730.71 µs)   9.15 µs  13.12 µs  18.17 µs
```
`20 chunks`
```
benchmark              time (avg)        iter/s             (min … max)       p75       p99      p995
----------------------------------------------------------------------- -----------------------------
Stream - iterator      16.59 µs/iter      60,270.0    (12.08 µs … 1.37 ms)  15.06 µs  83.03 µs 123.52 µs
```
2023-09-17 15:54:40 +00:00
Marcos Casagrande
16b7c9cd8d
perf(ext/http): optimize set_response for small responses (#20527)
This PR introduces an optimization to `set_response` to reduce the
overhead for responses with a payload size less than 64 bytes.
Performance gains are more noticeable when `is_request_compressible`
enters the slow path, i.e. `-H 'Accept-Encoding: unknown'`.

### Benchmarks
```js
Deno.serve({ port: 3000 }, () => new Response("hello"));
```
```
wrk -d 10s --latency -H 'Accept-Encoding: slow' http://127.0.0.1:3000
```
---
**main**
```
Running 10s test @ http://127.0.0.1:3000
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    44.72us   28.12us   3.10ms   97.95%
    Req/Sec   112.73k     8.25k  123.66k    91.09%
  2264092 requests in 10.10s, 308.77MB read
Requests/sec: 224187.08
Transfer/sec:     30.57MB
```
**this PR**
```
Running 10s test @ http://127.0.0.1:3000
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    42.91us   20.57us   2.04ms   97.36%
    Req/Sec   116.61k     7.95k  204.81k    88.56%
  2330970 requests in 10.10s, 317.89MB read
Requests/sec: 230806.32
Transfer/sec:     31.48MB
```
2023-09-16 15:15:15 -06:00
Luca Casonato
430b63c2c4
perf: improve async op sanitizer speed and accuracy (#20501)
This commit improves async op sanitizer speed by only delaying metrics
collection if there are pending ops. This results in a speedup of around
30% for small CPU-bound unit tests.

It now performs this check (and possible delay) on every collection,
fixing an issue where parent test leaks showed up in steps.
2023-09-16 07:48:31 +02:00
Bartek Iwańczuk
bf07604113
feat: Add "deno jupyter" subcommand (#20337)
This commit adds "deno jupyter" subcommand which
provides a Deno kernel for Jupyter notebooks.

The implementation is mostly based on Deno's REPL and
reuses large parts of it (though some cleanup still needs
to happen in follow-up PRs). Not all functionality of a
Jupyter kernel is implemented and some message types
are still not implemented (e.g. "inspect_request"), but
the kernel is fully working and provides all the capabilities
that the Deno REPL has, including TypeScript transpilation
and npm package support.

Closes https://github.com/denoland/deno/issues/13016

---------

Co-authored-by: Adam Powers <apowers@ato.ms>
Co-authored-by: Kyle Kelley <rgbkrk@gmail.com>
2023-09-16 02:42:09 +02:00
Bartek Iwańczuk
5a1505db67
feat(ext/node): http2.connect() API (#19671)
This commit improves compatibility of the "node:http2" module by polyfilling
the "connect" method and the "ClientHttp2Session" class. Basic operations like
streaming and header and trailer handling work correctly.
Refing/unrefing is still a TODO and "npm:grpc-js/grpc" does not work
correctly yet.
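A minimal client round trip using the polyfilled API (standard `node:http2` usage, shown here only to illustrate what now works):

```js
import http2 from "node:http2";

const client = http2.connect("https://example.com");
client.on("error", (err) => console.error(err));

// Open a stream for GET / and log the response status and body size.
const req = client.request({ ":path": "/" });
req.on("response", (headers) => {
  console.log("status:", headers[":status"]);
});

let body = "";
req.setEncoding("utf8");
req.on("data", (chunk) => (body += chunk));
req.on("end", () => {
  console.log(`${body.length} characters received`);
  client.close();
});
req.end();
```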

---------

Co-authored-by: Matt Mastracci <matthew@mastracci.com>
2023-09-15 21:51:25 +02:00
Matt Mastracci
71af3c375c
fix(ext/http): ensure aborted bodies throw (#20503)
Fixes #20502 -- ensure that Hyper errors make it through to JS.
2023-09-15 08:08:21 -06:00
Bartek Iwańczuk
5e7435fb80
refactor: rewrite more ops to op2 macro (#20478) 2023-09-14 23:05:18 +02:00
Bartek Iwańczuk
bbb348aa33
refactor: rewrite ext/node to op2 (#20489) 2023-09-14 08:29:44 +02:00
lionel-rowe
2046aeed70
feat(ext/web): Add name to Deno.customInspect of File objects (#20415)
Fixes https://github.com/denoland/deno/issues/20414
2023-09-14 07:06:58 +02:00
David Sherret
e60cbfadc0
refactor: use TaskQueue from deno_unsync (#20485) 2023-09-13 23:36:24 -04:00
Matt Mastracci
81d50e1b66
chore: bump deno_core and cargo update (#20480)
Bump deno_core, pulling in new rusty_v8. Requires some op2/deprecation
fixes.
2023-09-13 22:01:31 +00:00
Bartek Iwańczuk
109a42ab07
refactor: rewrite ext/crypto to op2 (#20477) 2023-09-13 17:54:19 +02:00
Bartek Iwańczuk
08d2a32060
refactor: rewrite ext/net/ ops to op2 (#20471) 2023-09-12 15:39:21 +02:00
Bartek Iwańczuk
f32acb945e
refactor: rewrite ext/io, ext/webstorage ops to op2 (#20461) 2023-09-12 12:42:05 +02:00
Matt Mastracci
950e0e9cd6
fix(ext/http): create a graceful shutdown API (#20387)
This PR implements a graceful shutdown API for Deno.serve, allowing all
current connections to drain from the server before shutting down, while
preventing new connections from being started or new transactions on
existing connections from being created.

We split the cancellation handle into two parts: a listener handle, and
a connection handle. A graceful shutdown cancels the listener only,
while allowing the connections to drain. The connection handle aborts
all futures. If the listener handle is cancelled, we put the connections
into graceful shutdown mode, which disables keep-alive on http/1.1 and
uses http/2 mechanisms for http/2 connections.

In addition, we now guarantee that all connections are complete or
cancelled, and all resources are cleaned up when the server `finished`
promise resolves -- we use a Rust-side server refcount for this.

Performance impact: does not appear to affect basic serving performance
by more than 1% (~126k -> ~125k)
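From JavaScript the new behaviour surfaces roughly as sketched below (assuming the server handle's `shutdown()` method and the `finished` promise described above):

```js
const server = Deno.serve({ port: 8080 }, () => new Response("hello"));

// On SIGTERM: stop accepting new connections but let in-flight requests
// finish. HTTP/1.1 connections drop keep-alive and HTTP/2 connections are
// drained via the protocol's graceful-shutdown mechanisms.
Deno.addSignalListener("SIGTERM", async () => {
  await server.shutdown();
  // `finished` resolves once every connection is complete or cancelled and
  // all Rust-side resources have been released.
  await server.finished;
  console.log("server shut down cleanly");
});
```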

---------

Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
2023-09-12 00:06:38 +00:00
Matt Mastracci
bfd230fd78
chore: update inner #![allow] to #[allow] (#20463)
Functions should generally be annotated with `#[allow]` blocks rather
than using inner `#![allow]` annotations.
2023-09-11 17:12:33 -06:00
Bartek Iwańczuk
aaff69db3f
perf(node/net): optimize socket reads for 'npm:ws' package (#20449)
Fixes performance regression introduced by
https://github.com/denoland/deno/pull/20223 and
https://github.com/denoland/deno/pull/20314. It's enough to have one
"shared" buffer per socket; no locking mechanism is required.
2023-09-11 20:38:57 +02:00
Heyang Zhou
375d8a5bd5
fix(ext/kv): same expireIn should generate same expireAt (#20396)
Co-authored-by: Luca Casonato <hello@lcas.dev>
2023-09-08 23:20:28 +08:00
Curran McConnell
a9cc4631ca
fix(ext/node/ops/zlib/brotli): Allow decompressing more than 4096 bytes (#20301)
Fixes https://github.com/denoland/deno/issues/19816

In that issue, I suggest switching over the other brotli functionality
to the Rust API provided by the `brotli` crate. Here, I only do that
with the `brotli_decompress` function to fix the bug with buffers longer
than 4096 bytes.
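The previously failing case can be exercised through the `node:zlib` polyfill with a payload past the old 4096-byte ceiling (an illustrative sketch; the issue's original reproduction may differ):

```js
import { brotliCompressSync, brotliDecompressSync } from "node:zlib";
import { Buffer } from "node:buffer";

// A payload well past the old 4096-byte limit.
const input = Buffer.alloc(64 * 1024, "a");
const compressed = brotliCompressSync(input);
const roundTripped = brotliDecompressSync(compressed);

console.log(roundTripped.length === input.length); // true
```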
2023-09-08 09:11:33 +05:30
Aapo Alasuutari
9d6584c16f
perf(ext/node): Optimise Buffer string operations (#20158)
Extracted from https://github.com/denoland/deno/pull/17815

Optimise Buffer's string operations, most significantly when dealing
with ASCII and UTF-16. Base64 and HEX encodings are affected to much
lesser degrees.
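The operations being measured are the plain string paths of `node:buffer`, roughly:

```js
import { Buffer } from "node:buffer";

// Encoding a string into a Buffer with the benchmarked encodings...
const ascii = Buffer.from("hello world", "ascii");
const utf16 = Buffer.from("hello world", "utf16le");
const hex = Buffer.from("68656c6c6f", "hex");
const b64 = Buffer.from("aGVsbG8=", "base64");

// ...and decoding back out again.
console.log(ascii.toString("ascii"), utf16.toString("utf16le"));
console.log(hex.toString("hex"), b64.toString("base64"));
```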

## Performance

### String length 15
With very small strings we're at break-even, or sometimes even lose a
bit of performance from creating a `DataView` that ends up not paying
for itself.

**This PR:**
```
benchmark                                                     time (avg)        iter/s             (min … max)       p75       p99      p995
-------------------------------------------------------------------------------------------------------------- -----------------------------
Buffer.from ascii string                                       1.15 µs/iter     871,388.6   (728.78 ns … 1.56 µs)   1.23 µs   1.56 µs   1.56 µs
Buffer.from base64 string                                      1.63 µs/iter     612,790.9     (1.31 µs … 1.96 µs)   1.77 µs   1.96 µs   1.96 µs
Buffer.from utf16 string                                       1.41 µs/iter     707,396.3   (915.24 ns … 1.93 µs)   1.61 µs   1.93 µs   1.93 µs
Buffer.from hex string                                         1.87 µs/iter     535,357.9     (1.56 µs … 2.19 µs)      2 µs   2.19 µs   2.19 µs
Buffer.toString ascii string                                 154.58 ns/iter   6,469,162.8    (149.69 ns … 198 ns) 154.51 ns 182.89 ns 191.91 ns
Buffer.toString base64 string                                161.65 ns/iter   6,186,189.6 (150.91 ns … 181.15 ns) 165.18 ns 171.87 ns 174.94 ns
Buffer.toString utf16 string                                 292.74 ns/iter   3,415,959.8 (285.43 ns … 312.47 ns) 295.25 ns 310.47 ns 312.47 ns
Buffer.toString hex string                                    89.61 ns/iter  11,159,315.6  (81.09 ns … 123.77 ns)  91.09 ns 113.62 ns 119.28 ns
```

**Main:**
```
benchmark                                                     time (avg)        iter/s             (min … max)       p75       p99      p995
-------------------------------------------------------------------------------------------------------------- -----------------------------
Buffer.from ascii string                                       1.26 µs/iter     794,875.8     (1.07 µs … 1.46 µs)   1.31 µs   1.46 µs   1.46 µs
Buffer.from base64 string                                      1.65 µs/iter     607,853.3     (1.38 µs … 2.01 µs)   1.69 µs   2.01 µs   2.01 µs
Buffer.from utf16 string                                       1.34 µs/iter     744,894.6     (1.09 µs … 1.55 µs)   1.45 µs   1.55 µs   1.55 µs
Buffer.from hex string                                         2.01 µs/iter     496,345.8      (1.54 µs … 2.6 µs)   2.26 µs    2.6 µs    2.6 µs
Buffer.toString ascii string                                 150.16 ns/iter   6,659,630.5 (144.99 ns … 166.68 ns)  152.4 ns 157.26 ns 159.14 ns
Buffer.toString base64 string                                164.73 ns/iter   6,070,692.0 (158.77 ns … 185.63 ns) 168.48 ns 175.74 ns 176.68 ns
Buffer.toString utf16 string                                 150.61 ns/iter   6,639,864.0  (148.2 ns … 168.29 ns) 150.93 ns 157.21 ns 168.15 ns
Buffer.toString hex string                                    94.21 ns/iter  10,614,972.9   (86.21 ns … 98.75 ns)  95.43 ns  97.99 ns  98.21 ns
```

### String length 1500
With moderate lengths we already see great upsides for `Buffer.from()`
with ASCII and UTF-16.

**This PR:**
```
benchmark                                                     time (avg)        iter/s             (min … max)       p75       p99      p995
-------------------------------------------------------------------------------------------------------------- -----------------------------
Buffer.from ascii string                                       5.79 µs/iter     172,562.6     (4.72 µs … 4.71 ms)   5.04 µs   10.3 µs  11.67 µs
Buffer.from base64 string                                      5.08 µs/iter     196,678.9     (4.97 µs … 5.76 µs)   5.08 µs   5.76 µs   5.76 µs
Buffer.from utf16 string                                       9.68 µs/iter     103,316.5     (7.14 µs … 3.44 ms)  10.32 µs  13.42 µs  15.21 µs
Buffer.from hex string                                         53.7 µs/iter      18,620.2     (49.37 µs … 2.2 ms)  54.74 µs   72.2 µs  81.07 µs
Buffer.toString ascii string                                   6.63 µs/iter     150,761.3     (5.59 µs … 1.11 ms)   6.08 µs  15.68 µs  24.77 µs
Buffer.toString base64 string                                460.57 ns/iter   2,171,224.4 (448.33 ns … 511.73 ns) 465.05 ns 495.54 ns 511.73 ns
Buffer.toString utf16 string                                   6.52 µs/iter     153,287.0     (6.47 µs … 6.66 µs)   6.53 µs   6.66 µs   6.66 µs
Buffer.toString hex string                                     3.68 µs/iter     271,965.4     (3.64 µs … 3.82 µs)   3.68 µs   3.82 µs   3.82 µs
```

**Main:**
```
benchmark                                                     time (avg)        iter/s             (min … max)       p75       p99      p995
-------------------------------------------------------------------------------------------------------------- -----------------------------
Buffer.from ascii string                                      11.46 µs/iter      87,298.1    (8.53 µs … 834.1 µs)   9.61 µs  83.31 µs   87.3 µs
Buffer.from base64 string                                       5.4 µs/iter     185,027.8     (5.07 µs … 7.49 µs)   5.44 µs   7.49 µs   7.49 µs
Buffer.from utf16 string                                       20.3 µs/iter      49,270.8  (13.55 µs … 649.11 µs)   18.8 µs 113.93 µs 125.17 µs
Buffer.from hex string                                        52.03 µs/iter      19,218.9    (48.74 µs … 2.59 ms)  52.84 µs  67.05 µs  73.56 µs
Buffer.toString ascii string                                   6.46 µs/iter     154,822.5     (6.32 µs … 6.69 µs)   6.52 µs   6.69 µs   6.69 µs
Buffer.toString base64 string                                440.19 ns/iter   2,271,764.6    (427 ns … 490.77 ns) 444.74 ns 484.64 ns 490.77 ns
Buffer.toString utf16 string                                   6.89 µs/iter     145,106.7     (6.81 µs … 7.24 µs)   6.91 µs   7.24 µs   7.24 µs
Buffer.toString hex string                                     3.66 µs/iter     273,456.5      (3.6 µs … 4.02 µs)   3.64 µs   4.02 µs   4.02 µs
```

### String length 2^20
With massive lengths, the difference in ASCII and UTF-16 parsing
performance is enormous.

**This PR:**
```
benchmark                                                           time (avg)        iter/s             (min … max)       p75       p99      p995
-------------------------------------------------------------------------------------------------------------------- -----------------------------
Buffer.from ascii string                                              4.1 ms/iter         243.7     (2.64 ms … 6.74 ms)   4.43 ms   6.26 ms   6.74 ms
Buffer.from base64 string                                            3.74 ms/iter         267.6     (2.91 ms … 4.92 ms)   3.96 ms   4.31 ms   4.92 ms
Buffer.from utf16 string                                             7.72 ms/iter         129.5    (5.91 ms … 11.03 ms)   7.97 ms  11.03 ms  11.03 ms
Buffer.from hex string                                              35.72 ms/iter          28.0   (34.71 ms … 38.42 ms)  35.93 ms  38.42 ms  38.42 ms
Buffer.toString ascii string                                        78.92 ms/iter          12.7   (42.72 ms … 94.13 ms)  91.64 ms  94.13 ms  94.13 ms
Buffer.toString base64 string                                      833.62 µs/iter       1,199.6   (638.05 µs … 5.97 ms) 826.86 µs   2.45 ms   2.48 ms
Buffer.toString utf16 string                                        79.35 ms/iter          12.6    (69.72 ms … 88.9 ms)  86.66 ms   88.9 ms   88.9 ms
Buffer.toString hex string                                          31.04 ms/iter          32.2      (4.3 ms … 46.9 ms)  37.21 ms   46.9 ms   46.9 ms
```

**Main:**
```
benchmark                                                           time (avg)        iter/s             (min … max)       p75       p99      p995
-------------------------------------------------------------------------------------------------------------------- -----------------------------
Buffer.from ascii string                                            18.66 ms/iter          53.6   (15.61 ms … 23.26 ms)  20.62 ms  23.26 ms  23.26 ms
Buffer.from base64 string                                             4.7 ms/iter         212.9     (2.94 ms … 9.07 ms)   4.65 ms   9.06 ms   9.07 ms
Buffer.from utf16 string                                            33.49 ms/iter          29.9   (31.24 ms … 35.67 ms)  34.08 ms  35.67 ms  35.67 ms
Buffer.from hex string                                              39.38 ms/iter          25.4   (38.66 ms … 42.36 ms)  39.58 ms  42.36 ms  42.36 ms
Buffer.toString ascii string                                        77.68 ms/iter          12.9   (67.46 ms … 95.68 ms)  84.71 ms  95.68 ms  95.68 ms
Buffer.toString base64 string                                      825.53 µs/iter       1,211.3   (655.38 µs … 6.69 ms) 816.62 µs   3.07 ms   3.13 ms
Buffer.toString utf16 string                                        76.54 ms/iter          13.1    (66.9 ms … 85.26 ms)  83.63 ms  85.26 ms  85.26 ms
Buffer.toString hex string                                          38.56 ms/iter          25.9   (33.83 ms … 46.56 ms)  45.33 ms  46.56 ms  46.56 ms
```
2023-09-07 14:41:16 -06:00
Matt Mastracci
29784df24e
chore(ext/fs): port some ops to op2 (#20402)
Port as many of these ops as we can to `op2`. Waiting on a few
`deno_core` updates to complete this file.
2023-09-07 13:19:20 -06:00
Matt Mastracci
9226207c01
chore(ext/node): port some ops to op2 (#20400) 2023-09-07 10:56:02 -06:00
David Sherret
3fc19dab47
feat: support import attributes (#20342) 2023-09-07 09:09:16 -04:00
Heyang Zhou
01a761f1d4
chore(ext/kv): limit total key size in an atomic op to 80 KiB (#20395)
Keys are expensive metadata: we track them for various purposes, e.g.
transaction conflict checks and key expiration.

This patch limits the total key size in an atomic operation to 80 KiB
(81920 bytes). This helps ensure efficiency in implementations.
2023-09-07 15:07:04 +08:00
Divy Srivastava
9befa566ec
fix(ext/node): implement AES GCM cipher (#20368)
Adds support for 128/256-bit AES-GCM keys in `node:crypto`, along with
`setAAD()`, `setAuthTag()` and `getAuthTag()`.

Uses https://github.com/littledivy/aead-gcm-stream

Fixes https://github.com/denoland/deno/issues/19836
https://github.com/denoland/deno/issues/20353
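A round trip exercising the newly supported pieces (standard `node:crypto` usage, shown as an example of what the fix enables):

```js
import crypto from "node:crypto";
import { Buffer } from "node:buffer";

const key = crypto.randomBytes(32); // 256-bit key
const iv = crypto.randomBytes(12);
const aad = Buffer.from("header");

// Encrypt with AES-256-GCM, binding additional authenticated data.
const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
cipher.setAAD(aad);
const ciphertext = Buffer.concat([cipher.update("secret message"), cipher.final()]);
const tag = cipher.getAuthTag();

// Decrypt; the same AAD and auth tag must be supplied or final() throws.
const decipher = crypto.createDecipheriv("aes-256-gcm", key, iv);
decipher.setAAD(aad);
decipher.setAuthTag(tag);
const plaintext = Buffer.concat([decipher.update(ciphertext), decipher.final()]);

console.log(plaintext.toString()); // "secret message"
```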
2023-09-06 11:01:50 +05:30
zuisong
4a561f12db
fix(node/child_process): don't crash on undefined/null value of an env var (#20378)
Fixes #20373
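The crashing shape looked something like this (a minimal sketch; the issue's exact reproduction may differ):

```js
import { spawnSync } from "node:child_process";

// Before the fix, an env entry whose value was undefined or null made the
// polyfill crash while serializing the environment; now such entries are
// handled gracefully.
const result = spawnSync("deno", ["--version"], {
  env: { ...Deno.env.toObject(), OPTIONAL_FLAG: undefined },
});

console.log(result.status);
```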

---------

Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
2023-09-05 12:42:35 +02:00
Bartek Iwańczuk
9e243d22f4
Revert "refactor: rewrite ops that use 'deferred' to use 'op2(async(lazy))' (#20303) (#20370)
This reverts commit
83426be6ee.

Includes a regression test.
2023-09-04 17:05:06 -04:00