feat(core): initialize SQLite off-main-thread (#18401)
This gets SQLite off the flamegraph and reduces initialization time by
somewhere between 0.2ms and 0.5ms. In addition, I took the opportunity
to move all the cache management code to a single place and reduce
duplication. While the PR has a net gain of lines, much of that is just
being a bit more deliberate with how we're recovering from errors.
The existing caches had various policies for dealing with cache
corruption, so I've unified them and tried to isolate the decisions we
make for recovery in a single place (see `open_connection` in
`CacheDB`). The policy I chose (sketched in code below) was:
1. Retry twice to open on-disk caches
2. If that fails, try to delete the file and recreate it on disk
3. If we fail to delete the file or recreate the cache, use a
fallback strategy that can be chosen per-cache: InMemory (a temporary
cache for the process run), BlackHole (ignore writes, return empty
reads), or Error (fail on every operation).
The caches all use the same general code now, and share the cache
failure recovery policy.
In addition, it cleans up a TODO in the `NodeAnalysisCache`.
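
The recovery logic itself lives in `CacheDB::open_connection` rather than in
this file. A minimal sketch of the policy follows, using illustrative names
(`FailureFallback`, `OpenedCache`, `open_with_recovery`) rather than the
actual `cache_db` API:

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Per-cache fallback used when the on-disk database cannot be opened or
/// recreated (step 3 above). Names here are hypothetical.
#[derive(Clone, Copy)]
enum FailureFallback {
  /// Use a temporary in-memory cache for this process run.
  InMemory,
  /// Ignore writes and return empty reads.
  Blackhole,
  /// Fail on every cache operation.
  Error,
}

/// What the caller ends up with once the policy has been applied.
enum OpenedCache<Db> {
  OnDisk(Db),
  InMemory(Db),
  Blackhole,
  Error,
}

fn open_with_recovery<Db>(
  path: &Path,
  fallback: FailureFallback,
  open_on_disk: impl Fn(&Path) -> io::Result<Db>,
  open_in_memory: impl Fn() -> Db,
) -> OpenedCache<Db> {
  // 1. Retry twice to open the on-disk cache.
  for _ in 0..2 {
    if let Ok(db) = open_on_disk(path) {
      return OpenedCache::OnDisk(db);
    }
  }
  // 2. If that fails, try to delete the file and recreate it on disk.
  if fs::remove_file(path).is_ok() {
    if let Ok(db) = open_on_disk(path) {
      return OpenedCache::OnDisk(db);
    }
  }
  // 3. If deletion or recreation also fails, use the per-cache fallback.
  match fallback {
    FailureFallback::InMemory => OpenedCache::InMemory(open_in_memory()),
    FailureFallback::Blackhole => OpenedCache::Blackhole,
    FailureFallback::Error => OpenedCache::Error,
  }
}
```
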
// Copyright 2018-2023 the Deno authors. All rights reserved. MIT license.
use std::path::PathBuf;
use once_cell::sync::OnceCell;
use super::cache_db::CacheDB;
use super::cache_db::CacheDBConfiguration;
use super::check::TYPE_CHECK_CACHE_DB;
use super::incremental::INCREMENTAL_CACHE_DB;
use super::node::NODE_ANALYSIS_CACHE_DB;
use super::parsed_source::PARSED_SOURCE_CACHE_DB;
use super::DenoDir;
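
/// Lazily created, process-wide handles to the SQLite-backed caches used by
/// the CLI: the incremental fmt and lint caches, the dependency analysis
/// cache, the Node analysis cache, and the type checking cache. Each database
/// is opened at most once per process via a `OnceCell`, and the accessors
/// hand out clones of the shared `CacheDB` handle.
///
/// Hypothetical usage (`dir` is a `DenoDir` obtained elsewhere):
///
/// ```ignore
/// let caches = Caches::default();
/// let dep_db = caches.dep_analysis_db(&dir);
/// ```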
#[derive(Default)]
pub struct Caches {
fmt_incremental_cache_db: OnceCell<CacheDB>,
lint_incremental_cache_db: OnceCell<CacheDB>,
dep_analysis_db: OnceCell<CacheDB>,
node_analysis_db: OnceCell<CacheDB>,
type_checking_cache_db: OnceCell<CacheDB>,
}
impl Caches {
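  /// Returns the `CacheDB` stored in `cell`, opening the database on first
  /// use with the given static configuration and on-disk path. The CLI
  /// version from `crate::version::deno()` is passed along, presumably so
  /// cache contents can be tied to the current Deno version. Subsequent
  /// calls hand back a clone of the already-opened handle.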
fn make_db(
cell: &OnceCell<CacheDB>,
config: &'static CacheDBConfiguration,
path: PathBuf,
) -> CacheDB {
cell
.get_or_init(|| CacheDB::from_path(config, path, crate::version::deno()))
.clone()
}
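
  /// The fmt and lint incremental caches share the `INCREMENTAL_CACHE_DB`
  /// configuration but live in separate database files resolved from the
  /// `DenoDir`.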
pub fn fmt_incremental_cache_db(&self, dir: &DenoDir) -> CacheDB {
Self::make_db(
&self.fmt_incremental_cache_db,
&INCREMENTAL_CACHE_DB,
dir.fmt_incremental_cache_db_file_path(),
)
}
pub fn lint_incremental_cache_db(&self, dir: &DenoDir) -> CacheDB {
Self::make_db(
&self.lint_incremental_cache_db,
&INCREMENTAL_CACHE_DB,
dir.lint_incremental_cache_db_file_path(),
)
}
pub fn dep_analysis_db(&self, dir: &DenoDir) -> CacheDB {
Self::make_db(
&self.dep_analysis_db,
&PARSED_SOURCE_CACHE_DB,
dir.dep_analysis_db_file_path(),
)
}
pub fn node_analysis_db(&self, dir: &DenoDir) -> CacheDB {
Self::make_db(
&self.node_analysis_db,
&NODE_ANALYSIS_CACHE_DB,
dir.node_analysis_db_file_path(),
)
}
pub fn type_checking_cache_db(&self, dir: &DenoDir) -> CacheDB {
Self::make_db(
&self.type_checking_cache_db,
&TYPE_CHECK_CACHE_DB,
dir.type_checking_cache_db_file_path(),
)
}
}