This is a proof of concept for being able to snapshot TypeScript files.
Currently only a single runtime file is authored in TypeScript -
"runtime/js/01_version.ts".
Unneeded infrastructure was removed from "core/snapshot_util.rs".
---------
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
This commit changes handling of the config file to enable
specifying "imports" and "scopes" objects, effectively making
the configuration file an import map.
"imports" and "scopes" take precedence over the "importMap" configuration,
but have lower priority than the "--import-map" CLI flag.
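For illustration, a minimal config using the new keys could look like this
(the specifiers and URLs are made up):
```jsonc
// deno.json (illustrative): "imports" and "scopes" make the config file
// itself act as an import map.
{
  "imports": {
    "std/": "https://deno.land/std@0.159.0/"
  },
  "scopes": {
    "https://deno.land/x/example/": {
      "foo": "./vendor/foo.ts"
    }
  }
}
```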
Co-authored-by: David Sherret <dsherret@users.noreply.github.com>
Co-authored-by: David Sherret <dsherret@gmail.com>
Bump the rsa crate to 0.7.0
The API for the `rsa` crate has changed significantly, but I have
verified that tests continue to pass throughout this update.
Adds support for passing and returning structs as buffers in FFI. This does not implement Fast API support for structs. Struct support is needed for certain system APIs such as AppKit on macOS.
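A minimal sketch of what calling such a symbol could look like; the library
path, the C signature, and the exact struct type syntax below are assumptions
for illustration, not the definitive API:
```ts
// Hypothetical C function: Point translate(Point p), where
// Point is struct { double x; double y; }.
const lib = Deno.dlopen("./libgeometry.so", {
  translate: {
    parameters: [{ struct: ["f64", "f64"] }],
    result: { struct: ["f64", "f64"] },
  },
} as const);

// Structs cross the FFI boundary as raw buffers.
const point = new Float64Array([1.0, 2.0]);
const moved = lib.symbols.translate(new Uint8Array(point.buffer));
console.log(new Float64Array(moved.buffer));
```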
Previously, errored streaming response bodies did not cause the HTTP
stream to be aborted. Instead, the stream was closed gracefully, which
meant the client could not tell the difference between a successful
response and an errored one.
This commit fixes the issue by aborting the stream on error.
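As a rough illustration of the scenario this affects:
```ts
// A streaming body that errors part-way through. With this fix, the
// underlying HTTP stream is aborted, so the client sees an error instead
// of what looks like a complete, successful response.
const body = new ReadableStream<Uint8Array>({
  start(controller) {
    controller.enqueue(new TextEncoder().encode("partial data"));
    controller.error(new Error("stream failed"));
  },
});
const response = new Response(body);
```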
This PR adds the concept of a global `DrawThread`, which can receive
multiple renderers to draw information on the screen (note: the
underlying thread is released back to tokio when it's not rendering). It
also separates the concept of progress bars from the existing "draw
thread". This makes it trivial for us to do stuff like show permission
prompts and progress bars at the same time in the future.
The reason this is global is that the process' tty stderr is also a
global concept.
Supports package names that aren't all lowercase.
The package directory is stored with a leading underscore (npm's registry
doesn't allow names starting with an underscore, so no existing package
can collide with it) followed by the base32-encoded (A-Z0-9) package name,
so the name can be lowercased without collisions.
Global cache dir:
```
$DENO_DIR/npm/registry.npmjs.org/_{base32_encode(package_name).to_lowercase()}/{version}
```
node_modules dir `.deno` folder:
```
node_modules/.deno/_{base32_encode(package_name).to_lowercase()}@{version}/node_modules/<package-name>
```
Within node_modules folder:
```
node_modules/<package-name>
```
So, direct children of the node_modules folder can have collisions between
packages like `JSON` vs `json`, but this is already something npm itself
doesn't handle well. Plus, Deno doesn't actually ever resolve to the
`node_modules/<package-name>` folder, but just has that for
compatibility. Additionally, packages in the `.deno` dir could have
collisions if they have multiple dependencies that only differ in
casing or a dependency that has different casing, but anyone doing that
is already going to have trouble with npm and is asking for trouble in
general.
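The sketch below illustrates the idea; it uses the RFC 4648 base32 alphabet
and made-up helper names, so the exact alphabet and output may differ from
the real implementation:
```ts
// Illustrative only: derive a lowercase, collision-free folder name for a
// possibly mixed-case npm package name.
function base32Encode(bytes: Uint8Array): string {
  const alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"; // RFC 4648 (assumed)
  let bits = 0;
  let value = 0;
  let out = "";
  for (const byte of bytes) {
    value = (value << 8) | byte;
    bits += 8;
    while (bits >= 5) {
      out += alphabet[(value >>> (bits - 5)) & 31];
      bits -= 5;
    }
  }
  if (bits > 0) out += alphabet[(value << (5 - bits)) & 31];
  return out;
}

function cacheFolderName(packageName: string, version: string): string {
  // The leading underscore cannot collide with a real package because the
  // npm registry rejects names that start with "_".
  const encoded = base32Encode(new TextEncoder().encode(packageName));
  return `_${encoded.toLowerCase()}/${version}`;
}

// "JSON" and "json" now map to different, purely lowercase directories.
console.log(cacheFolderName("JSON", "1.0.0"));
console.log(cacheFolderName("json", "1.0.0"));
```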
* Use a stack-allocated array for 16 promises and spill the rest to the
heap. The exact number can change, maybe 128? (tokio's coop budget limit)
* Avoid v8::Global::clone for the global context.
* Do not open the global opresolve when it's not needed.
This adds support for peer dependencies in npm packages.
1. If not found higher in the tree (ancestors and ancestor siblings),
peer dependencies are resolved like a regular dependency, similar to npm 7.
2. Optional peer dependencies are only resolved if found higher in the
tree.
3. This creates "copy packages", or duplicates of a package, when a
package has different resolutions due to peer dependency resolution (see
https://pnpm.io/how-peers-are-resolved). Unlike pnpm though, duplicates
of packages will have `_1`, `_2`, etc. added to the end of the package
version in the directory (see the example layout below) in order to
minimize the chance of hitting the max file path limit on Windows. This
is done for both the local "node_modules" directory and also the global
npm cache. The files are hard linked in this case to reduce disk space
usage.
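For example, three different peer-dependency resolutions of the same
hypothetical package might end up laid out like this:
```
node_modules/.deno/some-package@1.0.0/node_modules/some-package
node_modules/.deno/some-package@1.0.0_1/node_modules/some-package
node_modules/.deno/some-package@1.0.0_2/node_modules/some-package
```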
This is a first pass and the code is definitely more inefficient than it
could be.
Closes #15823
The "proposed" feature that we depend upon in tower-lsp, turns on the
"proposed" feature in lsp-types which has breaking changes in patch
releases because it's explicitly unstable. We need to pin it to prevent
it breaking cargo publish.
Tests and implementation are found here:
https://github.com/denoland/deno_task_shell/pull/59
This is a breaking change, but `deno task` is unstable.
> This changes async commands so that on non-zero exit code they will
> fail the entire task. For example:
>
> ```jsonc
> // task that asynchronously starts a server and starts a watcher for
> // the frontend
> "dev": "deno task server & deno task frontend:watch"
> ```
>
> Previously when running `deno task dev`, if `deno task server` failed,
> the entire command would not fail, which kept in line with `sh`, but
> it's not very practical. This change causes `deno task dev` to fail.
>
> To opt out, developers can add an `|| exit 0`:
>
> ```jsonc
> "dev": "deno task server || exit 0 & deno task frontend:watch"
> ```
Co-authored-by: bartlomieju <bartlomieju@users.noreply.github.com>
Co-authored-by: denobot <33910674+denobot@users.noreply.github.com>
Co-authored-by: bartlomieju <bartlomieju@users.noreply.github.com>
Follow-up to #16208.
- Refactors build.rs behaviour to use `-exported_symbols_list` /
`--export-dynamic-symbol-list`
- Since all build systems now rely on a symbols list file, I have added
`generate_exported_symbols_list`, which derives the symbol list file for
each platform and makes `tools/napi/generate_link_win.js` redundant.
- Fixes a missed instance of `i8` being used instead of `c_char`
Co-authored-by: Divy Srivastava <dj.srivastava23@gmail.com>
This commit introduces two new buffer wrapper types to `deno_core`. The
main benefit of these new wrappers is that they can wrap a number of
different underlying buffer types. This allows for a more flexible read
and write API on resources that will require less copying of data
between different buffer representations.
- `BufView` is a read-only view onto a buffer. It can be backed by
`ZeroCopyBuf`, `Vec<u8>`, and `bytes::Bytes`.
- `BufViewMut` is a read-write view onto a buffer. It can be cheaply
converted into a `BufView`. It can be backed by `ZeroCopyBuf` or
`Vec<u8>`.
Both new buffer views have a cursor. This means that the start point of
the view can be constrained to write / read from just a slice of the
view. Only the start point of the slice can be adjusted. The end point
is fixed. To adjust the end point, the underlying buffer needs to be
truncated.
Readable resources have been changed to better cater to resources that
do not support BYOB reads. The basic `read` method now returns a
`BufView` instead of taking a `ZeroCopyBuf` to fill. This allows the
operation to return buffers that the resource has already allocated,
instead of forcing the caller to allocate the buffer. BYOB reads are
still very useful for resources that support them, so a new `read_byob`
method has been added that takes a `BufViewMut` to fill. `op_read` uses
`read_byob` if the resource supports it, and falls back to `read`
(performing an additional copy) if it does not. For
Rust->JS reads this change should have no impact, but for Rust->Rust
reads, this allows the caller to avoid an additional copy in many
scenarios. This combined with the support for `BufView` to be backed by
`bytes::Bytes` allows us to avoid one data copy when piping from a
`fetch` response into an `ext/http` response.
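For example, the following user-level pattern is what benefits from the
`bytes::Bytes`-backed `BufView`; the URL and port are placeholders:
```ts
// Proxy-style handler: bytes read from the fetch body resource can now be
// handed to the HTTP response resource without an extra copy in Rust.
const server = Deno.listen({ port: 8080 });
for await (const conn of server) {
  (async () => {
    for await (const event of Deno.serveHttp(conn)) {
      const upstream = await fetch("https://example.com/large-file.bin");
      await event.respondWith(new Response(upstream.body));
    }
  })();
}
```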
Writable resources have been changed to take a `BufView` instead of a
`ZeroCopyBuf` as an argument. This allows for less copying of data in
certain scenarios, as described above. Additionally a new
`Resource::write_all` method has been added that takes a `BufView` and
continually attempts to write the resource until the entire buffer has
been written. Certain resources like files can override this method to
provide a more efficient `write_all` implementation.
This is the release commit being forwarded back to main for 1.26.1
Please ensure:
- [x] Everything looks ok in the PR
- [x] The release has been published
To make edits to this PR:
```shell
git fetch upstream forward_v1.26.1 && git checkout -b forward_v1.26.1 upstream/forward_v1.26.1
```
Don't need this PR? Close it.
cc @cjihrig
Co-authored-by: cjihrig <cjihrig@users.noreply.github.com>
Currently, we use `-rdynamic` for exporting Node API symbols to the
symbol table. `-rdynamic` exports *all* symbols, which means previously
unused functions will not be optimized away, introducing a lot of binary
bloat.
This patch uses `-exported_symbol` and `--export-dynamic-symbol` link
flags (not as universal as `-rdynamic`) to only mark Node API symbols to
be put in the dynamic symbol table.
This PR implements the Node-API (N-API) for loading native modules into Deno.
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
Co-authored-by: DjDeveloper <43033058+DjDeveloperr@users.noreply.github.com>
Co-authored-by: Ryan Dahl <ry@tinyclouds.org>
v0.1.3 contains code that will stop working with newer versions of
libstd because the layout of some std::net types changed.
Refs: https://github.com/bnoordhuis/netif/pull/10
- move errors related to Node compat from cli/node/errors.rs to "ext/node" crate
- remove dependency on "node_resolver" crate
- make some structures private to the "cli/node" module
Previously `jsxImportSource` was resolved relative to the config file
during graph building, and relative to the emitted module during
runtime.
This is now fixed so that the JSX import source is resolved relative to
the module both during graph building and at runtime.
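For context, a config along these lines (the URL is illustrative) sets the
JSX import source that this resolution applies to:
```jsonc
// deno.json (illustrative)
{
  "compilerOptions": {
    "jsx": "react-jsx",
    "jsxImportSource": "https://esm.sh/preact@10.11.0"
  }
}
```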
This commit adds "ext/node" extension that implementes CommonJS module system.
In the future this extension might be extended to actually contain implementation of
Node compatibility layer in favor of "deno_std/node".
Currently this functionality is not publicly exposed, it is available via "Deno[Deno.internal].require"
namespace and is meant to be used by other functionality to be landed soon.
This is a minimal first pass, things that still don't work:
support for dynamic imports in CJS
conditional exports
This commit fixes source maps for files that contain emojis.
This is done by updating "deno_ast" to "0.14.1" for the "--no-check"
case (i.e. using SWC emit) and by overriding TSC's default base64
encoder (which turned out to be buggy) for the type checking case.
This commit changes "deno bench" subcommand, by updating
the "Deno.bench" API as follows:
- remove "Deno.BenchDefinition.n"
- remove "Deno.BenchDefintion.warmup"
- add "Deno.BenchDefinition.group"
- add "Deno.BenchDefintion.baseline"
This is done because bench cases are no longer run fixed amount
of iterations, but instead they are run until there is difference between
subsequent runs that is statistically insiginificant.
Additionally, console reporter was rewritten completely, to looks
similar to "hyperfine" reporter.
The following transformations that "JsError" gradually went through have all
been moved up front to "JsError::from_v8_exception()":
- finding the first non-"deno:" source line;
- moving "JsError::script_resource_name" etc. into the first error stack
in case of syntax errors;
- source mapping "JsError::script_resource_name" etc. when wrapping
the error even though the frame locations are source mapped earlier;
- removing "JsError::{script_resource_name,line_number,start_column,end_column}"
entirely in favour of "js_error.frames.get(0)".
We also no longer pass a js-side callback to "core/02_error.js" from cli.
I avoided doing this on previous occasions because the source map lookups
were in an awkward place.
This commit adds "deno bench" subcommand and "Deno.bench()"
API that allows to register bench cases.
The API is modelled after "Deno.test()" and "deno test" subcommand.
Currently the output is rudimentary and bench cases and not
subject to "ops" and "resource" sanitizers.
Co-authored-by: evan <github@evan.lol>
This commit adds CJS/ESM interoperability when running in --compat mode.
Before executing files, they are analyzed and all CommonJS modules are
transformed on the fly to ES modules. This is done by utilizing analyze_cjs()
functionality from deno_ast. After discovering exports and reexports, an ES
module is rendered and saved in memory for later use.
There's a caveat that all files ending with the ".js" extension are considered
CommonJS modules (unless there's a related "package.json" with "type": "module").
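In practice this means a plain CommonJS file can be imported with standard
ESM syntax when running under --compat; a sketch with a made-up module:
```ts
// greet.js is a plain CommonJS file, e.g.:
//   module.exports.greet = (name) => `hello ${name}`;
// Under --compat it is detected as CJS, rendered as an ES module in memory,
// and can then be imported like any other module:
import { greet } from "./greet.js";
console.log(greet("deno"));
```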
Covered ranges were not merged, and thus it appeared that some lines
might be uncovered. To fix this I used "v8-coverage", which takes care
of merging the ranges properly. With this change, coverage collected
from a file by multiple entrypoints is now correctly calculated.
I ended up forking https://github.com/demurgos/v8-coverage and adding
"cli/tools/coverage/merge.rs" and "cli/tools/coverage/range_tree.rs".