It's unnecessary indirection and prevents isolate references from being
easily passed into the dispatch and dyn_import closures.
Note: this changes how StartupData::Script is executed. It's no longer done
during Isolate::new() but rather lazily on first poll or execution.
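A minimal sketch of the lazy execution described above, using stand-in
types rather than the real deno_core ones; the `Script` struct and the
`shared_init` method name here are illustrative assumptions:

```rust
// Stand-in types: the real Isolate wraps V8; this only shows the laziness.
struct Script {
    filename: String,
    source: String,
}

struct Isolate {
    // Kept around instead of being executed inside new().
    startup_script: Option<Script>,
}

impl Isolate {
    fn new(startup_script: Option<Script>) -> Self {
        // Note: the startup script is NOT executed here anymore.
        Isolate { startup_script }
    }

    // Called at the top of poll()/execute(): runs the startup script
    // exactly once, the first time the isolate is actually used.
    fn shared_init(&mut self) {
        if let Some(script) = self.startup_script.take() {
            self.execute(&script.filename, &script.source);
        }
    }

    fn execute(&mut self, filename: &str, source: &str) {
        // Stand-in for real V8 script execution.
        println!("executing {}: {}", filename, source);
    }
}

fn main() {
    let mut isolate = Isolate::new(Some(Script {
        filename: "startup.js".to_string(),
        source: "console.log('hello')".to_string(),
    }));
    // The first poll or execute triggers the startup script.
    isolate.shared_init();
}
```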
This patch makes it so that RecursiveLoad doesn't own the Isolate, so
Worker::execute_mod_async does not consume itself.
Previously Worker implemented Loader, but now ThreadSafeState does (see
the sketch below).
This is necessary preparation work for dynamic import (#1789) and import
maps (#1921).
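A hedged sketch of the Loader-on-ThreadSafeState arrangement; the trait
methods, return types, and fields below are simplified assumptions (the
real `load` returns a future and the state holds much more):

```rust
use std::sync::Arc;

// Simplified Loader trait; the real one resolves specifiers and loads
// module sources asynchronously.
trait Loader {
    fn resolve(&self, specifier: &str, referrer: &str) -> String;
    fn load(&self, module_url: &str) -> String;
}

#[derive(Clone)]
struct ThreadSafeState(Arc<StateInner>);

struct StateInner {
    // permissions, module cache, flags, etc. in the real code
}

// The loader lives on the shared state, not on Worker, so module loading
// no longer needs to own (or consume) the Worker and its isolate.
impl Loader for ThreadSafeState {
    fn resolve(&self, specifier: &str, referrer: &str) -> String {
        format!("{}/{}", referrer.trim_end_matches('/'), specifier)
    }

    fn load(&self, module_url: &str) -> String {
        format!("// source of {}", module_url)
    }
}

struct Worker {
    state: ThreadSafeState,
    // isolate owned separately; execute_mod_async borrows it instead of
    // consuming self
}

fn main() {
    let state = ThreadSafeState(Arc::new(StateInner {}));
    let worker = Worker { state: state.clone() };
    let url = worker.state.resolve("mod.ts", "file:///project");
    println!("{}", worker.state.load(&url));
}
```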
* Compiler no longer has its own Tokio runtime. Compiler handles one
message and then exits.
* Uses the simpler ts.CompilerHost interface instead of
ts.LanguageServiceHost.
* Avoids recompiling the same module by introducing a hacky but simple
`hashset<string>` that stores the names of modules that have already
been compiled (see the sketch after this list).
* Removes the CompilerConfig op.
* Removes a lot of the mocking stuff in compiler.ts like `this._ts`. It
is not useful as we don't even have tests.
* Turns off checkJs because it causes fmt_test to die with OOM.
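The dedup idea from the earlier bullet, sketched in Rust for brevity (the
actual set lives in the TypeScript compiler code); the `Compiler` type and
method names are illustrative:

```rust
use std::collections::HashSet;

struct Compiler {
    // Names of modules that have already been compiled.
    already_compiled: HashSet<String>,
}

impl Compiler {
    fn new() -> Self {
        Compiler {
            already_compiled: HashSet::new(),
        }
    }

    fn compile(&mut self, module_name: &str) {
        // insert() returns false if the name was already in the set.
        if !self.already_compiled.insert(module_name.to_string()) {
            println!("skipping {} (already compiled)", module_name);
            return;
        }
        println!("compiling {}", module_name);
    }
}

fn main() {
    let mut c = Compiler::new();
    c.compile("file:///main.ts");
    c.compile("file:///main.ts"); // skipped the second time
}
```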
op_fetch_module_meta_data is an op used by the TypeScript compiler.
TypeScript requires this op to be sync. However, the implementation of
the op does things on the event loop (like fetching HTTP resources).
In certain situations this can lead to deadlocks. The runtime's thread
pool can be filled with ops waiting on the result of
op_fetch_module_meta_data. The runtime has a maximum number of
threads it can use (the number of logical CPUs on the system).
This patch changes tokio_util::block_on to launch a new Tokio runtime
for evaluating the future, thus bypassing the max-thread problem.
This is only an issue in op_fetch_module_meta_data. Other synchronous
ops are truly synchronous and do not interact with the event loop. TODO
comments are added to direct future development.
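A sketch of the workaround, assuming a current tokio with the
rt-multi-thread feature enabled (the original used the tokio APIs of the
time): the helper builds a fresh runtime for each call instead of
blocking a worker thread of the shared pool.

```rust
use tokio::runtime::Runtime;

// Evaluate a future to completion on a brand new runtime with its own
// threads, so we never depend on the possibly exhausted thread pool of
// the main runtime.
fn block_on<F: std::future::Future>(future: F) -> F::Output {
    let rt = Runtime::new().expect("failed to create temporary runtime");
    rt.block_on(future)
}

fn main() {
    let n = block_on(async { 1 + 1 });
    println!("{}", n);
}
```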
Removed `extmap` and added a `.mjs` entry in `map_file_extension`.
The assert in the compiler does not need to be updated, since it resolves
from the compiled cache instead of elsewhere (notice the `.map` is
asserted next to it).
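A rough sketch of what the extension mapping looks like after the change;
the `MediaType` variants and exact match arms are assumptions, the point
being that `.mjs` now maps to JavaScript:

```rust
use std::path::Path;

#[derive(Debug, PartialEq)]
enum MediaType {
    JavaScript,
    TypeScript,
    Json,
    Unknown,
}

fn map_file_extension(path: &Path) -> MediaType {
    match path.extension().and_then(|e| e.to_str()) {
        Some("ts") => MediaType::TypeScript,
        // The newly added .mjs entry is treated like plain JavaScript.
        Some("js") | Some("mjs") => MediaType::JavaScript,
        Some("json") => MediaType::Json,
        _ => MediaType::Unknown,
    }
}

fn main() {
    assert_eq!(
        map_file_extension(Path::new("mod.mjs")),
        MediaType::JavaScript
    );
}
```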
* In order to prevent ArrayBuffers from getting garbage collected by V8,
we used to store a v8::Persistent<ArrayBuffer> in a map. This patch
introduces a custom ArrayBuffer allocator which doesn't use Persistent
handles, but instead stores a pointer to the actual ArrayBuffer data
along with a reference count. Since creating Persistent handles
has quite a bit of overhead, this change significantly increases
performance. Various HTTP server benchmarks report about 5-10% more
requests per second than before.
* Previously the Persistent handle that prevented garbage collection had
to be released manually, and this wasn't always done, which was
causing memory leaks. This has been resolved by introducing a new
`PinnedBuf` type in both Rust and C++ that automatically re-enables
garbage collection when it goes out of scope (sketched after this list).
* Zero-copy buffers are now correctly wrapped in an Option if there is a
possibility that they're not present. This clears up a correctness
issue where we were creating zero-length slices from a null pointer,
which is against the rules.
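A hedged sketch of the `PinnedBuf` idea and the Option-wrapped zero-copy
buffer, using plain Rust stand-ins (an Arc plus an atomic counter) rather
than the real allocator and V8 types:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

// Stand-in for the allocation tracked by the custom allocator.
struct Allocation {
    data: Vec<u8>,
    refcount: AtomicUsize,
}

// While a PinnedBuf is alive, the backing allocation's reference count
// stays raised; Drop lowers it again, so "unpinning" can't be forgotten.
struct PinnedBuf {
    alloc: Arc<Allocation>,
}

impl PinnedBuf {
    fn new(alloc: Arc<Allocation>) -> Self {
        alloc.refcount.fetch_add(1, Ordering::SeqCst);
        PinnedBuf { alloc }
    }

    fn as_slice(&self) -> &[u8] {
        &self.alloc.data
    }
}

impl Drop for PinnedBuf {
    fn drop(&mut self) {
        // In the real code this is what lets V8 garbage-collect the
        // ArrayBuffer again.
        self.alloc.refcount.fetch_sub(1, Ordering::SeqCst);
    }
}

// Ops receive Option<PinnedBuf>: None when no zero-copy buffer was
// passed, instead of a zero-length slice built from a null pointer.
fn op_example(zero_copy: Option<PinnedBuf>) {
    match zero_copy {
        Some(buf) => println!("got {} bytes", buf.as_slice().len()),
        None => println!("no zero-copy buffer"),
    }
}

fn main() {
    let alloc = Arc::new(Allocation {
        data: vec![1, 2, 3],
        refcount: AtomicUsize::new(1),
    });
    op_example(Some(PinnedBuf::new(alloc.clone())));
    op_example(None);
    // The pin taken above has been released automatically.
    assert_eq!(alloc.refcount.load(Ordering::SeqCst), 1);
}
```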
Op dispatch now goes through dynamic dispatch, so it is slightly less
efficient. The immeasurably small perf hit is a reasonable trade for the
API simplicity gained here.
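A sketch of what the dynamic dispatch looks like, with an illustrative
registry of boxed closures rather than the real core API; each dispatch
is one indirect call through a trait object:

```rust
use std::collections::HashMap;

type OpResult = Vec<u8>;
// Dispatchers are stored as trait objects instead of being a statically
// known type parameter on the isolate.
type DynDispatcher = Box<dyn Fn(&[u8]) -> OpResult>;

struct Isolate {
    ops: HashMap<String, DynDispatcher>,
}

impl Isolate {
    fn new() -> Self {
        Isolate {
            ops: HashMap::new(),
        }
    }

    fn register_op<F>(&mut self, name: &str, dispatcher: F)
    where
        F: Fn(&[u8]) -> OpResult + 'static,
    {
        self.ops.insert(name.to_string(), Box::new(dispatcher));
    }

    fn dispatch(&self, name: &str, control: &[u8]) -> OpResult {
        // One vtable-indirect call per op: the "immeasurable" perf hit.
        (self.ops[name])(control)
    }
}

fn main() {
    let mut isolate = Isolate::new();
    isolate.register_op("echo", |control: &[u8]| control.to_vec());
    assert_eq!(isolate.dispatch("echo", b"hi"), b"hi".to_vec());
}
```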