By default, `deno serve` will listen on port 8000 (like `Deno.serve`).
Users may choose a different port using `--port`.
`deno serve /tmp/file.ts`
`server.ts`:
```ts
export default {
  fetch(req) {
    return new Response("hello world!\n");
  },
};
```
When the response has been successfully sent, we abort the
`Request.signal` to indicate that all resources associated with
this transaction may be torn down.
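For example (a minimal sketch; the cleanup logic is illustrative), a handler can listen for the abort to release per-request resources once the response has been delivered:

```ts
Deno.serve((req) => {
  // Fires once the response has been fully sent (or the request is otherwise
  // aborted), so per-request resources can be released here.
  req.signal.addEventListener("abort", () => {
    console.log("request finished, tearing down per-request resources");
  });
  return new Response("hello world!\n");
});
```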
The most common argument to the `env` option of `worker_threads.Worker` is
`process.env`.
In Deno, `process.env` is a `Proxy`, which can't be cloned using the
structured clone algorithm.
So to be safe, I'm creating a copy of the actual object before it's sent to
the worker thread.
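A sketch of the most common call pattern this affects (the `./worker.ts` path is hypothetical); after this change, passing `process.env` here works because the runtime copies it before cloning:

```ts
import process from "node:process";
import { Worker } from "node:worker_threads";

// process.env is a Proxy in Deno; the runtime now snapshots it into a plain
// object before it is structured-cloned to the worker.
const worker = new Worker(new URL("./worker.ts", import.meta.url), {
  env: process.env,
});
worker.on("exit", () => console.log("worker exited"));
```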
Ref #23522
This commit adds a "private npm registry" to the test server. This
registry requires an appropriate `Authorization` header to be sent.
Towards https://github.com/denoland/deno/issues/16105
This commit changes the workspace support so that all workspace
members are available as imports based on their names and versions.
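For example (a sketch; the member name `@scope/add` and its export are hypothetical), any member of the workspace can now import another member by its declared name:

```ts
// Resolves via the name/version declared in the member's deno.json
// (a hypothetical "@scope/add" workspace member).
import { add } from "@scope/add";

console.log(add(1, 2));
```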
Closes https://github.com/denoland/deno/issues/23343
This PR wires up a new `jsxPrecompileSkipElements` option in
`compilerOptions` that can be used to exempt a list of elements from
being precompiled with the `precompile` JSX transform.
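For example, a `deno.json` sketch (assuming a Preact-style setup; the element list is illustrative) that exempts a few elements from the transform:

```jsonc
{
  "compilerOptions": {
    "jsx": "precompile",
    "jsxImportSource": "preact",
    // These elements keep going through the regular JSX runtime instead of
    // being precompiled.
    "jsxPrecompileSkipElements": ["a", "img", "iframe"]
  }
}
```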
The actual handling of `$projectChanged` is quick, but JS requests are
not. The cleared caches only get repopulated on the next actual request,
so we just batch the change notification in with the next actual request.
No significant difference in benchmarks on my machine, but this speeds
up `did_change` handling and reduces our total number of JS requests (in
addition to coalescing multiple JS change notifications into one).
Embedders may have special requirements around file opening, so we add a
new `check_open` permission check that is called as part of the file
open process.
Adds an `addr` field to `HttpServer` to simplify the pattern
`Deno.serve({ onListen({ port }) { listenPort = port } })`. This becomes:
`const server = Deno.serve(handler); port = server.addr.port;`.
Changes:
- Refactors `serve` overloads to split TLS out (in preparation for
landing a place for the TLS SNI information)
- Adds an `addr` field to `HttpServer` that matches the `addr` field of
the corresponding `Deno.Listener`s.
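A minimal sketch of the resulting pattern (binding to port 0 here just to show the bound port being read back):

```ts
const server = Deno.serve({
  port: 0, // let the OS pick a free port
  handler: () => new Response("ok"),
});
// No onListen callback needed; the bound address is available directly on
// the returned HttpServer.
console.log(server.addr.port);
await server.shutdown();
```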
It's not clear to me how these tests worked correctly on CI,
but they were failing hard locally because of two problems:
- a missing env var that provides the URL of the fake npm registry
- trying to run a directory that contains native Node.js tests that
require a special harness
Landing work from #21903, plus fixing a node compat bug.
We were always sending the HTTP/2 ALPN on TLS connections, which might
confuse upstream servers.
Changes:
- Configure HTTP/2 ALPN when making the TLS connection from the HTTP/2
code
- Read the `ALPNProtocols` property from the TLS connection options
rather than the deno `alpnProtocols` field
- Add tests
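A sketch of the Node-style option that is now honored (host and port are illustrative):

```ts
import { connect } from "node:tls";

// The Node-style ALPNProtocols option on the connection options is read,
// rather than a Deno-specific alpnProtocols field.
const socket = connect({
  host: "example.com",
  port: 443,
  ALPNProtocols: ["http/1.1"],
});
socket.on("secureConnect", () => {
  console.log("negotiated:", socket.alpnProtocol);
  socket.end();
});
```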
Prerequisite for landing `Deno.serveHttp` on top of `Deno.serve`: removing the
older HTTP servers from the codebase.
This PR enables the V8 code cache for ES modules and for `require` scripts
through `op_eval_context`. Code cache artifacts are transparently stored
and fetched using a SQLite database and are passed to V8. `--no-code-cache`
can be used to disable it.
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
It's best that this only gets merged with the latest version of the
suite, so there's little difference between the `ci` and `wpt_epoch`
workflows. This should make troubleshooting easier.
This allows people to use imports like:
```ts
import "./app.css";
```
...with `deno check` in systems where there's a bundle step (ex. Vite).
This will still error when using it with `deno run` or if the referenced
file does not exist.
See test cases for behaviour.
This PR adds a benchmark intended to measure how the LSP handles larger
repos, as well as its performance on a more realistic workload.
The repo being benchmarked is
[deco-cx/apps](https://github.com/deco-cx/apps) which has been vendored
along with its dependencies. It's included as a git submodule since it's
fairly large. The LSP requests used in the benchmark are the actual
requests sent by VSCode as I opened, modified, and navigated around a
file (to simulate an actual user interaction).
The main motivation is to have a more realistic benchmark that measures
how we do with a large number of files and dependencies. The
improvements made from 1.42 to 1.42.3 mostly improved performance with
larger repos, so none of our existing benchmarks showed an improvement.
Here are the results for the changes made from 1.42 to 1.42.3 (the new
benchmark is the last one listed):
**1.42.0**
```text
Starting Deno benchmark
-> Start benchmarking lsp
- Simple Startup/Shutdown
(10 runs, mean: 379ms)
- Big Document/Several Edits
(5 runs, mean: 1142ms)
- Find/Replace
(10 runs, mean: 51ms)
- Code Lens
(10 runs, mean: 443ms)
- deco-cx/apps Multiple Edits + Navigation
(5 runs, mean: 25121ms)
<- End benchmarking lsp
```
**1.42.3**
```text
Starting Deno benchmark
-> Start benchmarking lsp
- Simple Startup/Shutdown
(10 runs, mean: 383ms)
- Big Document/Several Edits
(5 runs, mean: 1135ms)
- Find/Replace
(10 runs, mean: 55ms)
- Code Lens
(10 runs, mean: 440ms)
- deco-cx/apps Multiple Edits + Navigation
(5 runs, mean: 11675ms)
<- End benchmarking lsp
```
Closes https://github.com/denoland/deno/issues/23362
Previously we were panicking if there was a pending read on a
port and `receiveMessageOnPort` was called. This is now fixed
by cancelling the pending read, trying to read a message and
resuming reading in a loop.
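A sketch of the scenario (message contents are illustrative): attaching a listener keeps a read pending on the port, and draining it synchronously used to hit the panic:

```ts
import { MessageChannel, receiveMessageOnPort } from "node:worker_threads";

const { port1, port2 } = new MessageChannel();
// The "message" listener keeps a read pending on port1...
port1.on("message", (msg) => console.log("listener got:", msg));
port2.postMessage("hello");
// ...and calling receiveMessageOnPort while that read is pending previously
// panicked; it now cancels the read, tries to dequeue a message, and resumes.
console.log("sync read:", receiveMessageOnPort(port1));
port1.close();
port2.close();
```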