mirror of https://github.com/denoland/deno.git synced 2024-11-24 15:19:26 -05:00
Commit graph

12 commits

Author SHA1 Message Date
Andreas Deininger
ea121c9a0e
docs: fix typos (#24820)
This PR fixes various typos I spotted in the project.
2024-08-02 13:26:54 +02:00
David Sherret
94f040ac28
fix: bump cache sqlite dbs to v2 for WAL journal mode change (#24030)
In https://github.com/denoland/deno/pull/23955 we changed the sqlite db
journal mode to WAL. This causes issues when someone runs an old version
of Deno (which uses TRUNCATE) alongside a new version, because the two
fight over the journal mode.
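
The fix is essentially a filename bump. A minimal sketch of the idea (the constant and helper below are illustrative, not the actual code; Deno's real cache file names only resemble this):

```rust
// Illustrative sketch: suffix cache database filenames with a version so old
// and new Deno builds never open the same file and fight over the journal mode.
use std::path::{Path, PathBuf};

const CACHE_DB_VERSION: u32 = 2; // hypothetical constant for this sketch

fn cache_db_path(deno_dir: &Path, name: &str) -> PathBuf {
    deno_dir.join(format!("{name}_v{CACHE_DB_VERSION}"))
}
```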
2024-05-29 18:38:18 +00:00
Bert Belder
de5b47b13c
perf(startup): use WAL journal for sqlite databases in DENO_DIR (#23955)
While investigating poor cold start performance on my GCP VM (32 cores,
130GB SSD), I found that writing to the various sqlite databases in
DENO_DIR was quite slow. The slowness seems to primarily be caused by
excessive latency from a number of `fsync()` calls.
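
For reference, a minimal sketch of enabling WAL on a rusqlite connection (the pragmas are standard SQLite; the function itself is illustrative, not the actual Deno code):

```rust
// Sketch: open a cache database with WAL journaling so most writes append to
// the write-ahead log instead of rewriting pages through a rollback journal,
// which sharply reduces the number of fsync() calls.
use rusqlite::Connection;

fn open_cache_db(path: &str) -> rusqlite::Result<Connection> {
    let conn = Connection::open(path)?;
    conn.pragma_update(None, "journal_mode", "wal")?;
    // NORMAL is considered safe with WAL and skips an fsync per transaction.
    conn.pragma_update(None, "synchronous", "normal")?;
    Ok(conn)
}
```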

The performance difference is best demonstrated by deleting the sqlite
databases from DENO_DIR while leaving the downloaded sources in place.

The benchmark (see notes below):

```
piscisaureus@bert-us:~/erofs/source$ export DENO_DIR=./.deno
piscisaureus@bert-us:~/erofs/source$ hyperfine --warmup 3   \
  --prepare "rm -rf .deno/*_v1*"                            \
  "deno run -A --cached-only demo.ts"                       \
  "eatmydata deno run -A --cached-only demo.ts"             \
  "~/deno/target/release/deno run -A --cached-only demo.ts"
Benchmark 1: deno run -A --cached-only demo.ts
  Time (mean ± σ):      1.174 s ±  0.037 s    [User: 0.153 s, System: 0.184 s]
  Range (min … max):    1.104 s …  1.212 s    10 runs
 
Benchmark 2: eatmydata deno run -A --cached-only demo.ts
  Time (mean ± σ):     265.5 ms ±   3.6 ms    [User: 138.5 ms, System: 135.1 ms]
  Range (min … max):   260.6 ms … 271.2 ms    11 runs
 
Benchmark 3: ~/deno/target/release/deno run -A --cached-only demo.ts
  Time (mean ± σ):     226.2 ms ±   9.2 ms    [User: 136.7 ms, System: 93.3 ms]
  Range (min … max):   218.8 ms … 247.1 ms    13 runs
 
Summary
  ~/deno/target/release/deno run -A --cached-only demo.ts ran
    1.17 ± 0.05 times faster than eatmydata deno run -A --cached-only demo.ts
    5.19 ± 0.27 times faster than deno run -A --cached-only demo.ts
```

Notes:
* Benchmark 1: unmodified Deno 1.43.6
* Benchmark 2: unmodified Deno 1.43.6 wrapped with `eatmydata` (which is
a tool to neuter `fsync()` calls)
* Benchmark 3: this PR applied on top of Deno 1.43.6


The script that got benchmarked:

```typescript
// demo.ts
import * as express from "npm:express@4.16.3";
import * as postgres from "https://deno.land/x/postgres/mod.ts";

let _dummy = [express, postgres]; // Force use of imports.
console.log("hello world");
```
2024-05-23 00:33:47 -04:00
David Sherret
b0c1bd82a8
fix: prevent cache db errors when deno_dir not exists (#23168)
Closes #20202
2024-04-01 18:58:52 -04:00
David Sherret
7e72f3af61
chore: update copyright to 2024 (#21753)
2024-01-01 19:58:21 +00:00
Matt Mastracci
c272d26ae8
chore(cli): remove atty crate (#20275)
Removes a crate with an outstanding vulnerability.
2023-08-25 07:43:07 -06:00
Matt Mastracci
b1ce2e4167
fix(ext/web): add stream tests to detect v8slice split bug (#20253)
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
2023-08-23 17:03:05 -06:00
David Sherret
42a3f52e98
fix: do not show cache initialization errors if stderr is piped (#18920)
Closes #18918
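
A hedged sketch of what that check implies, assuming the standard library's `IsTerminal` trait (an assumption; the actual implementation may use a different TTY check):

```rust
use std::io::IsTerminal;

// Only surface non-fatal cache initialization warnings when a human is likely
// to see them, i.e. when stderr is a terminal rather than a pipe.
fn report_cache_init_error(err: &str) {
    if std::io::stderr().is_terminal() {
        eprintln!("Failed to initialize cache: {err}");
    }
}
```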
2023-05-30 13:35:02 -04:00
David Sherret
2ebd61ee1b
fix(compile): handle when DENO_DIR is readonly (#19257)
Closes #19253
2023-05-25 14:27:45 -04:00
Matt Mastracci
9845361153
refactor(core): bake single-thread assumptions into spawn/spawn_blocking (#19056)
Partially supersedes #19016.

This migrates `spawn` and `spawn_blocking` to `deno_core`, and removes
the requirement for `spawn` tasks to be `Send` given our single-threaded
executor.

While we technically don't need to do anything with `spawn_blocking`, this
allows us to have a single `JoinHandle` type that works for both cases,
and allows us to more easily experiment with alternative `spawn_blocking`
implementations that do not require tokio (i.e. rayon).
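
To illustrate why the `Send` bound can go away on a single-threaded executor, here is a minimal plain-tokio sketch using `LocalSet`/`spawn_local` (an analogy, not deno_core's actual wrappers):

```rust
// On a current-thread runtime, futures spawned through a LocalSet need not be
// Send, so !Send state such as Rc can live across .await points.
use std::rc::Rc;

fn main() {
    let rt = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .unwrap();
    let local = tokio::task::LocalSet::new();
    local.block_on(&rt, async {
        let shared = Rc::new(42); // Rc is !Send; fine on this executor.
        let handle = tokio::task::spawn_local(async move { *shared + 1 });
        assert_eq!(handle.await.unwrap(), 43);
    });
}
```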

Async ops (+~35%):

Before: 

```
time 1310 ms rate 763358
time 1267 ms rate 789265
time 1259 ms rate 794281
time 1266 ms rate 789889
```

After:

```
time 956 ms rate 1046025
time 954 ms rate 1048218
time 924 ms rate 1082251
time 920 ms rate 1086956
```

HTTP serve (+~4.4%):

Before:

```
Running 10s test @ http://localhost:4500
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    68.78us   19.77us   1.43ms   86.84%
    Req/Sec    68.78k     5.00k   73.84k    91.58%
  1381833 requests in 10.10s, 167.36MB read
Requests/sec: 136823.29
Transfer/sec:     16.57MB
```

After:

```
Running 10s test @ http://localhost:4500
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    63.12us   17.43us   1.11ms   85.13%
    Req/Sec    71.82k     3.71k   77.02k    79.21%
  1443195 requests in 10.10s, 174.79MB read
Requests/sec: 142921.99
Transfer/sec:     17.31MB
```

Suggested-By: alice@ryhl.io
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
2023-05-14 15:40:01 -06:00
Matt Mastracci
c65149c0a0
fix(core): restore cache journal mode to TRUNCATE and tweak tokio test in CacheDB (#18469)
Fast-follow on #18401 -- the reason some tests were panicking in the
`CacheDB` `impl Drop` was that the cache itself was being dropped during
a panic, and the runtime may or may not still exist at that point. We can
limit the actual tokio runtime testing to where it's needed.

In addition, we return the journal mode to `TRUNCATE` to avoid the risk
of data corruption.
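
A minimal sketch of the `Drop`-during-panic hazard and the kind of guard it suggests (the struct and cleanup details are hypothetical, not the actual `CacheDB` code):

```rust
// If the process is unwinding from a panic, the tokio runtime may already be
// gone, so runtime-dependent cleanup in Drop has to be skipped defensively.
struct CacheDb;

impl Drop for CacheDb {
    fn drop(&mut self) {
        if std::thread::panicking() {
            return; // don't touch the runtime while unwinding
        }
        if tokio::runtime::Handle::try_current().is_err() {
            return; // no runtime on this thread; skip async cleanup
        }
        // ... flush / close work that may need the runtime ...
    }
}
```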
2023-03-28 14:06:57 -06:00
Matt Mastracci
86c3c4f343
feat(core): initialize SQLite off-main-thread (#18401)
This gets SQLite off the flamegraph and reduces initialization time by
somewhere between 0.2ms and 0.5ms. In addition, I took the opportunity
to move all the cache management code to a single place and reduce
duplication. While the PR has a net gain of lines, much of that is just
being a bit more deliberate with how we're recovering from errors.

The existing caches had various policies for dealing with cache
corruption, so I've unified them and tried to isolate the decisions we
make for recovery in a single place (see `open_connection` in
`CacheDB`). The policy I chose was:

 1. Retry twice to open on-disk caches
 2. If that fails, try to delete the file and recreate it on-disk
 3. If we fail to delete the file or re-create a new cache, use a fallback
    strategy that can be chosen per-cache: InMemory (temporary cache for the
    process run), BlackHole (ignore writes, return empty reads), or Error
    (fail on every operation).

The caches all use the same general code now, and share the cache
failure recovery policy.

In addition, it cleans up a TODO in the `NodeAnalysisCache`.
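
A rough sketch of that recovery policy as a standalone function (assuming rusqlite; the names are illustrative and the real `CacheDB` wraps this in more structure):

```rust
use rusqlite::Connection;
use std::path::Path;

// Per-cache fallback strategies, mirroring the policy described above.
enum CacheFailure {
    InMemory,  // temporary cache for the process run
    BlackHole, // ignore writes, return empty reads
    Error,     // fail on every operation
}

fn open_cache(path: &Path, on_failure: CacheFailure) -> Option<Connection> {
    // 1. Retry twice to open the on-disk cache.
    for _ in 0..2 {
        if let Ok(conn) = Connection::open(path) {
            return Some(conn);
        }
    }
    // 2. Try to delete the (possibly corrupt) file and recreate it.
    let _ = std::fs::remove_file(path);
    if let Ok(conn) = Connection::open(path) {
        return Some(conn);
    }
    // 3. Fall back per the cache's configured policy; in this sketch the
    //    BlackHole and Error behaviors are left to the caller.
    match on_failure {
        CacheFailure::InMemory => Connection::open_in_memory().ok(),
        CacheFailure::BlackHole | CacheFailure::Error => None,
    }
}
```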
2023-03-27 22:01:52 +00:00