We never properly added support for this. This fixes vendoring when the
module graph contains npm or node specifiers. Vendoring works by adding
a `"nodeModulesDir": true` property to deno.json, which causes a local
node_modules directory to be used. This can be opted out of by setting
`"nodeModulesDir": false` or running with `--node-modules-dir=false`.
Closes #18090
Closes #17210
Closes #17619
Closes #16778
This reverts commit 798c1ad0f1.
Reverting because this change caused a spike in memory usage, and we
can't fully realise the gains from lower GC pressure that the more
optimal malloc/free provided by "jemalloc" was expected to bring.
We might revisit the topic in the future.
This commit changes the unstable `Deno.serve()` API to return a
`Deno.Server` object that has a `finished` field.
This change is done in preparation for being able to ref/unref the HTTP
server.
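A minimal sketch of the new shape, assuming `finished` is a promise that
resolves once the server stops (the port and handler are illustrative):

```ts
// The unstable Deno.serve() now returns a Deno.Server object.
const server = Deno.serve({ port: 8000 }, () => new Response("ok"));

// `finished` lets callers observe shutdown instead of awaiting serve() itself.
server.finished.then(() => console.log("server shut down"));
```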
**THIS PR HAS GIT CONFLICTS THAT MUST BE RESOLVED**
This is the release commit being forwarded back to main for 1.33.4
Please ensure:
- [x] Everything looks ok in the PR
- [ ] The release has been published
To make edits to this PR:
```shell
git fetch upstream forward_v1.33.4 && git checkout -b forward_v1.33.4 upstream/forward_v1.33.4
```
Don't need this PR? Close it.
cc @levex
Co-authored-by: levex <levex@users.noreply.github.com>
Co-authored-by: Levente Kurusa <lkurusa@kernelstuff.org>
This is a bit bare-bones but gets `npm-run-all` working. For full stdio
compatibility with Node more work is needed, which is probably better
done in follow-up PRs.
Fixes #19159
This doesn't make the API bullet-proof and there are some TODOs left,
but it greatly improves the situation. Tests were ported from Node.js.
Closes https://github.com/denoland/deno/issues/18276.
Note: If the package information has already been cached, then this
requires running with `--reload` or for the registry information to be
fetched some other way (e.g. cache busting).
Closes #15544
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This commit reimplements most of the `node:http` client APIs using
`ext/fetch`.
There is some duplicated code and two removed Node compat tests that
will be fixed in follow-up PRs.
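A minimal sketch of the client surface this covers (host and port are
illustrative):

```ts
// node:http client APIs, now backed by ext/fetch under the hood.
import http from "node:http";

const req = http.request("http://localhost:8000/", (res) => {
  res.on("data", (chunk) => console.log(chunk.toString()));
  res.on("end", () => console.log("response complete"));
});
req.end();
```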
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Fixes for various `Attemped to access invalid request` bugs (#19058,
#15427, #17213).
Previously, we did not wait for both a drop event and a completion event
before removing items from the slab table; this PR ensures that we do.
In addition, the slab methods are refactored out into `slab.rs` for
maintainability.
Currently the `multipart/form-data` parser in
`Request.prototype.formData` and `Response.prototype.formData` decodes
non-ASCII filenames incorrectly, as if they were encoded in Latin-1
rather than UTF-8. This happens because the header section of each
`multipart/form-data` entry is decoded as Latin-1 in order to be parsed
with `Headers`, which only allows `ByteString`s, but the names and
filenames are never re-decoded as UTF-8 afterwards. This PR fixes this
in a post-processing step.
Note that the `multipart/form-data` parsing for these APIs in the Fetch
spec is very much underspecified, and it does not specify that names and
filenames must be decoded as UTF-8. However, it does require that the
bodies of non-`File` entries are decoded as UTF-8, and in browsers,
names and filenames always use the same encoding as the body.
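A minimal sketch of the behavior this fixes (the filename is illustrative):

```ts
// After this fix, a non-ASCII filename round-trips through formData() as UTF-8.
const fd = new FormData();
fd.append("file", new File(["hi"], "日本語.txt"));

const req = new Request("https://example.com/", { method: "POST", body: fd });
const parsed = await req.formData();

console.log((parsed.get("file") as File).name); // "日本語.txt", not mojibake
```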
Closes #19142.
Partially supersedes #19016.
This migrates `spawn` and `spawn_blocking` to `deno_core`, and removes
the requirement for `spawn` tasks to be `Send` given our single-threaded
executor.
While we technically don't need to do anything with `spawn_blocking`,
this allows us to have a single `JoinHandle` type that works for both
cases, and allows us to more easily experiment with alternative
`spawn_blocking` implementations that do not require tokio (e.g. rayon).
Async ops (+~35%):
Before:
```
time 1310 ms rate 763358
time 1267 ms rate 789265
time 1259 ms rate 794281
time 1266 ms rate 789889
```
After:
```
time 956 ms rate 1046025
time 954 ms rate 1048218
time 924 ms rate 1082251
time 920 ms rate 1086956
```
HTTP serve (+~4.4%):
Before:
```
Running 10s test @ http://localhost:4500
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 68.78us 19.77us 1.43ms 86.84%
Req/Sec 68.78k 5.00k 73.84k 91.58%
1381833 requests in 10.10s, 167.36MB read
Requests/sec: 136823.29
Transfer/sec: 16.57MB
```
After:
```
Running 10s test @ http://localhost:4500
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 63.12us 17.43us 1.11ms 85.13%
Req/Sec 71.82k 3.71k 77.02k 79.21%
1443195 requests in 10.10s, 174.79MB read
Requests/sec: 142921.99
Transfer/sec: 17.31MB
```
Suggested-By: alice@ryhl.io
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
**THIS PR HAS GIT CONFLICTS THAT MUST BE RESOLVED**
This is the release commit being forwarded back to main for 1.33.3
Please ensure:
- [x] Everything looks ok in the PR
- [x] The release has been published
To make edits to this PR:
```shell
git fetch upstream forward_v1.33.3 && git checkout -b forward_v1.33.3 upstream/forward_v1.33.3
```
Don't need this PR? Close it.
cc @levex
Co-authored-by: Levente Kurusa <lkurusa@kernelstuff.org>
Adds a `deno.preloadLimit` option (e.g. `"deno.preloadLimit": 2000`)
which specifies how many file entries to traverse on the file system
when the LSP loads or its configuration changes.
Closes #18955
This PR ensures that node's `worker_threads` module exports
`MessageChannel`, `MessagePort` and the `BroadcastChannel` API. Fixing
these won't make `esbuild` work, but brings us one step closer 🎉
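A minimal sketch of the newly exported surface, assuming Node's
EventEmitter-style `MessagePort` API (the message payload is illustrative):

```ts
// node:worker_threads now exports MessageChannel, MessagePort and BroadcastChannel.
import { MessageChannel } from "node:worker_threads";

const { port1, port2 } = new MessageChannel();
port2.on("message", (msg) => {
  console.log(msg); // "hello"
  port1.close(); // closing either port closes the channel
});
port1.postMessage("hello");
```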
Fixes #19028.
This is the initial support for npm and node specifiers in `deno
compile`. The npm packages are included in the binary and read from it via
a virtual file system. This also supports the `--node-modules-dir` flag,
dependencies specified in a package.json, and npm binary commands (e.g.
`deno compile --unstable npm:cowsay`).
Closes #16632
`Content-Encoding: gzip` support for `Deno.serve`. This doesn't support
Brotli (`br`) yet; however, it should not be difficult to add. Heuristics
for compression are modelled after those in `Deno.serveHttp`.
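A minimal sketch of the behavior (port and payload are illustrative): when a
client sends `Accept-Encoding: gzip` and the response looks compressible, the
body is gzipped transparently.

```ts
// Deno.serve can now apply Content-Encoding: gzip to compressible responses.
Deno.serve({ port: 4500 }, () =>
  new Response("hello ".repeat(1000), {
    headers: { "content-type": "text/plain" },
  }));
```

Something like `curl --compressed http://localhost:4500/` should then receive
a gzip-encoded body.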
Tests are provided to ensure that the gzip compression is correct. We
chunk a number of different streams (zeros, hard-to-compress data,
already-gzipped data) in a number of different ways (regular, random,
large/small, small/large).