`performance.timeOrigin` was being set from when JS started executing,
but `op_now` measures from an `std::time::Instant` stored in `OpState`,
which is created at a completely different time. This caused
`performance.timeOrigin` to be wildly inaccurate. This PR corrects the
origin and also cleans up some of the timer code.
Compared to `Date.now()`, `performance`'s time origin is now
consistently within 5µs (0.005ms) of system time.
![image](https://github.com/user-attachments/assets/0a7be04a-4f6d-4816-bd25-38a2e6136926)
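As a quick sanity check, a minimal sketch (note that the 5µs agreement itself isn't directly observable through `Date.now()`, whose resolution is a whole millisecond):

```ts
// Compare the wall clock against the origin-based clock.
// performance.timeOrigin + performance.now() should track Date.now();
// before this fix the two could disagree substantially, afterwards the
// difference stays within Date.now()'s ~1ms granularity.
const drift = Math.abs(
  Date.now() - (performance.timeOrigin + performance.now()),
);
console.log(`drift: ${drift.toFixed(3)}ms`);
```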
* `.cts` support
* better `.cjs`/`.cts` type checking
* `deno compile` `.cjs`/`.cts` support
* More efficient CJS detection (moving towards stabilization)
* Determination of whether a `.js`, `.ts`, `.jsx`, or `.tsx` file is CJS
or ESM is now only done after loading
* Support for `import x = require(...);` (see the sketch below)
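For illustration, a minimal sketch of the newly supported syntax in a hypothetical `mod.cts` file (the file name and exported function are made up for the example):

```ts
// mod.cts — a CommonJS TypeScript module.
// `import x = require(...)` is TypeScript's CommonJS import syntax.
import path = require("node:path");

function printBase(p: string): void {
  console.log(path.basename(p));
}

// TypeScript's CommonJS export counterpart.
export = { printBase };
```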
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Fixes #26179.
The original error reported in that issue is fixed on canary, but in
local testing on my Windows machine, `next build` would just hang
forever.
After some digging, it turns out that at some point during `next build`,
`readFile` promises (from `fs/promises`) just never resolve, and so
Next.js hangs.
It turns out the issue is saturation of tokio's blocking-task thread
pool. We previously limited the number of blocking threads to 32, and at
some point those threads are all in use and there's no thread available
for the file reads.
What's taking up all of those threads? The answer turns out to be
`tokio::process`. On Windows, child process stdio uses the blocking
thread pool: https://github.com/tokio-rs/tokio/pull/4824. When you poll
the child's stdio on Windows, it spawns a blocking task per poll and
calls `std::io::Read::read` in the blocking context. That call can block
until data is available.
Putting it all together, what happens is that Next.js spawns `2 * the
number of CPU cores` Deno child subprocesses to do work. We implement
`child_process` with `tokio::process`. When the child processes' stdio
get polled, blocking tasks get spawned, and those blocking tasks can
block until data is available. So if you have 16 cores (as I do),
potentially more than 32 blocking thread pool threads are taken up just
by the child processes. That leaves no room for other tasks to make
progress.
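For illustration, a rough sketch of the failure shape (hypothetical script and file names; the real trigger was `next build`):

```ts
// repro.ts — hypothetical sketch of the Windows failure mode.
// Each polled child stdio stream can occupy a tokio blocking thread,
// so enough children exhaust the 32-thread pool and the readFile
// below may never get a thread to run on.
import { spawn } from "node:child_process";
import { readFile } from "node:fs/promises";
import { cpus } from "node:os";

for (let i = 0; i < 2 * cpus().length; i++) {
  // Long-lived children whose piped stdio is continuously polled.
  const child = spawn("deno", ["run", "worker.ts"], { stdio: "pipe" });
  child.stdout?.on("data", () => {});
}

// With the blocking pool saturated, this promise can hang forever.
const data = await readFile("some-file.txt");
console.log(data.byteLength);
```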
---
To fix this, for now, we increase the size of the blocking thread pool
on Windows. `4 * the number of CPU cores` should be enough to leave room
for other tasks to make progress.
Longer term, this can be fixed more properly when we hand-roll our own
subprocess code (needed for detached processes and additional pipes on
Windows).
This is the release commit being forwarded back to main for 2.0.2
Co-authored-by: bartlomieju <bartlomieju@users.noreply.github.com>
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
When defining a custom runtime, it might be useful to define a custom
prompter, for instance when you are not relying on the terminal and
want a GUI prompter instead.
This is the release commit being forwarded back to main for 2.0.1
Co-authored-by: bartlomieju <bartlomieju@users.noreply.github.com>
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This came up as a question on Discord, so I thought it was worth adding
a hint for it, as it might be a common pitfall.
---------
Signed-off-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Co-authored-by: David Sherret <dsherret@users.noreply.github.com>