mirror of https://github.com/denoland/deno.git synced 2024-12-22 15:24:46 -05:00
Commit graph

3 commits

Bartek Iwańczuk
5504acea67
feat: add --allow-import flag (#25469)
This replaces `--allow-net` for import permissions and makes the
security sandbox stricter by also checking permissions for statically
analyzable imports.

By default, this has a value of
`--allow-import=deno.land:443,jsr.io:443,esm.sh:443,raw.githubusercontent.com:443,gist.githubusercontent.com:443`,
but that can be overridden by providing a different set of hosts.
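
For example, a minimal sketch of overriding the default list (`main.ts` is a placeholder; only the `--allow-import=<host>:<port>` form and the hosts come from the default value above):

```bash
# Only allow remote imports from deno.land and esm.sh; statically analyzable
# imports from any other host will fail the permission check.
deno run --allow-import=deno.land:443,esm.sh:443 main.ts
```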

Additionally, when no value is provided, import permissions are inferred
from the CLI arguments, so the following works because `fresh.deno.dev:443`
is added to the list of allowed imports:

```bash
deno run -A -r https://fresh.deno.dev
```

---------

Co-authored-by: David Sherret <dsherret@gmail.com>
2024-09-26 01:50:54 +00:00
David Sherret
62e952559f
refactor(permissions): split up Descriptor into Allow, Deny, and Query (#25508)
This makes the permission system more versatile.
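
The actual types live in Deno's Rust permission code; purely as an illustration of the split (all names below are hypothetical, not the real Deno types), the idea is that "what a flag allows", "what a flag denies", and "what a runtime check asks about" become separate descriptor kinds:

```ts
// Hypothetical sketch of the Allow/Deny/Query split, using the net permission.
// Not the actual Deno types; the real ones are Rust and cover every permission.
type AllowNetDescriptor = { host: string; port?: number }; // from --allow-net
type DenyNetDescriptor = { host: string; port?: number };  // from --deny-net
type QueryNetDescriptor = { host: string; port: number };  // built at check time

function checkNet(
  query: QueryNetDescriptor,
  allow: AllowNetDescriptor[],
  deny: DenyNetDescriptor[],
): boolean {
  const matches = (d: { host: string; port?: number }) =>
    d.host === query.host && (d.port === undefined || d.port === query.port);
  if (deny.some(matches)) return false; // an explicit deny wins over allow
  return allow.some(matches);
}
```
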
2024-09-16 21:39:37 +01:00
Nathan Whitaker
e92a05b551
feat(serve): Opt-in parallelism for deno serve (#24920)
Adds a `parallel` flag to `deno serve`. When present, we spawn multiple
workers to parallelize serving requests.


```bash
deno serve --parallel main.ts
```
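
The `main.ts` above isn't part of the change; as a rough sketch (assuming the standard `deno serve` shape of a default export with a `fetch` handler), a module served this way looks like:

```ts
// main.ts -- deno serve looks for a default export with a fetch() handler.
// With --parallel, each spawned worker runs its own instance of this module.
export default {
  fetch(_req: Request): Response {
    return new Response(`hello from worker (pid ${Deno.pid})`);
  },
};
```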

Currently, on Linux we use `SO_REUSEPORT` and rely on the fact that the
kernel will distribute connections in a round-robin manner.

On macOS and Windows, we approximate this by cloning the underlying
file descriptor and passing a handle to each worker. The connections
are not guaranteed to be fairly distributed (and in practice almost
certainly won't be), but the distribution is still even enough to
provide a significant performance increase.
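
For reference, the Linux mechanism is also reachable from user code: `Deno.serve` accepts a `reusePort` option (my understanding is that it maps to `SO_REUSEPORT` and is honored on Linux), so a rough sketch of two independent processes sharing a port looks like:

```ts
// Start two copies of this script on Linux; both bind port 8000 and the
// kernel round-robins incoming connections between them.
Deno.serve({ port: 8000, reusePort: true }, (_req) =>
  new Response(`handled by pid ${Deno.pid}`)
);
```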

---
(Benchmarks run on a MacBook Pro with an M3 Max, serving `deno.com`.)

baseline:
```
❯ wrk -d 30s -c 125 --latency http://127.0.0.1:8000
Running 30s test @ http://127.0.0.1:8000
  2 threads and 125 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   239.78ms   13.56ms 330.54ms   79.12%
    Req/Sec   258.58     35.56   360.00     70.64%
  Latency Distribution
     50%  236.72ms
     75%  248.46ms
     90%  256.84ms
     99%  268.23ms
  15458 requests in 30.02s, 2.47GB read
Requests/sec:    514.89
Transfer/sec:     84.33MB
```

this PR (with the `--parallel` flag):
```
❯ wrk -d 30s -c 125 --latency http://127.0.0.1:8000
Running 30s test @ http://127.0.0.1:8000
  2 threads and 125 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   117.40ms  142.84ms 590.45ms   79.07%
    Req/Sec     1.33k   175.19     1.77k    69.00%
  Latency Distribution
     50%   22.34ms
     75%  223.67ms
     90%  357.32ms
     99%  460.50ms
  79636 requests in 30.07s, 12.74GB read
Requests/sec:   2647.96
Transfer/sec:    433.71MB
```
2024-08-14 22:26:21 +00:00