This commit splits `Deno.upgradeHttp` into two different APIs, because
the same API is currently overloaded with two different behaviours. Requests
served by flash upgrade immediately, with no need to return a `Response`
object; instead you have to manually write the response to the socket.
Requests served by hyper only upgrade once a `Response` object has been sent.
These two behaviours are now split into `Deno.upgradeHttp` and
`Deno.upgradeHttpRaw`. The latter is flash only. The former only
supports hyper requests at the moment, but can be updated to support
flash in the future.
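A rough sketch of how the two calls differ after the split; the exact unstable signatures and return shapes are assumed here rather than taken from this change:
```
// Hyper-backed server: the connection is only handed over after the
// handler's Response has been sent (assumed to resolve to a
// [Deno.Conn, Uint8Array] pair).
async function hyperHandler(req) {
  const upgrade = Deno.upgradeHttp(req);
  (async () => {
    const [conn, firstPacket] = await upgrade;
    // speak the upgraded protocol over `conn` from here on
    conn.close();
  })();
  return new Response(null, { status: 101 });
}

// Flash-backed server: the upgrade is immediate and no Response is
// returned; the 101 has to be written to the socket by hand.
function flashHandler(req) {
  const [conn, _firstPacket] = Deno.upgradeHttpRaw(req); // assumed signature
  conn.write(new TextEncoder().encode(
    "HTTP/1.1 101 Switching Protocols\r\n\r\n",
  ));
  // continue using `conn` directly
}
```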
Additionally, this removes `void | Promise<void>` as valid return types
for the handler function. Anyone who wants to use `Deno.upgradeHttpRaw`
will have to type-cast the handler signature; the signature is
meant for the 99.99%, and should not be complicated for the 0.01% that
use `Deno.upgradeHttpRaw()`.
This commit changes the `Deno.serve` function signature to be more
versatile and easier to use. It is now a drop-in replacement for
std/http's `serve`; a sketch of the new call shapes follows the list below.
The input validation has also been reworked.
- Merge "Deno.serve()" and "Deno.serveTls()" API
- Remove first argument and use "fetch" field options instead
- Update type declarations
- Add more documentation
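A minimal sketch of the two call shapes implied by the notes above; the option names (`port`, `cert`, `key`, `fetch`) are assumptions, not quotes from the new declarations:
```
// Drop-in for std/http's serve: handler first, optional options second.
Deno.serve((_req) => new Response("hello"), { port: 8080 });

// Single options bag with the handler under a "fetch" field; TLS fields live
// in the same options instead of a separate Deno.serveTls() call.
Deno.serve({
  port: 8443,
  cert: Deno.readTextFileSync("./cert.pem"),
  key: Deno.readTextFileSync("./key.pem"),
  fetch: (_req) => new Response("hello over TLS"),
});
```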
Welcome to better optimised op calls! Currently `opSync` is called with parameters of every type and count. This most definitely makes the call megamorphic. Additionally, it seems that spread params lead to V8 not being able to optimise the calls quite as well (apparently Fast Calls cannot be used with spread params).
Monomorphising op calls should lead to some improved performance. Now that unwrapping of sync op results is done on the Rust side, this is pretty simple:
```
opSync("op_foo", param1, param2);
// -> turns to
ops.op_foo(param1, param2);
```
This means sync op calls are now just directly calling the native binding function. When V8 Fast API Calls are enabled, this will enable those to be called on the optimised path.
Monomorphising async ops likely requires using callbacks and is left as an exercise to the reader.
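For illustration only, one hypothetical shape a callback-based async op call could take (the names and the way the result is delivered are invented here):
```
// Instead of `await Deno.core.opAsync("op_bar", arg)`, each async op would get
// its own native binding that reports completion through a callback:
ops.op_bar(arg, (err, result) => {
  // resolve a promise that was registered for this particular call
});
```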
`Deno.core.*` is unstable and not fit for public consumption. Although this is a somewhat internal bench, some people may use it as reference code and start using `Deno.core.encode()` in their own code.
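If callers want a stable equivalent, the web-standard `TextEncoder` covers the same use; the snippet below is an assumed replacement, not a quote from the bench diff:
```
// Stable, web-standard replacement for Deno.core.encode():
const bytes = new TextEncoder().encode("hello world");
```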
denort is an optimization to "deno compile" to produce slightly smaller
output. It's a decent idea, but causes a lot of negative side-effects:
- Deno's link time is a source of constant agony both locally and in CI,
and denort doubles it.
- The release process is a long and arduous undertaking with many manual
steps. denort necessitates an additional manual zip + upload from M1
Apple computers.
- The "deno compile" interface is complicated by the "--lite" option.
This is confusing for users ("why wouldn't you want lite?").
The benefits of this feature do not outweigh the negatives. We must find
a different approach to optimizing "deno compile" output.
This commit rewrites the "JsRuntime::poll" function to fix a corner case where
"overflown_response" might be overwritten by another overflown response.
The logic has been changed to allow returning multiple overflown responses
alongside responses from the shared queue.
This removes the std folder from the tree.
Various parts of the tests are pretty tightly dependent
on std (47 direct imports and 75 indirect imports, not
counting the cli tests that use them as fixtures), so I've
added std as a submodule for now.
This commit adds a new binary target called "denort".
It is a "lite" version of the "deno" binary that can only execute
code embedded inside the binary itself.
Co-authored-by: Bartek Iwańczuk <biwanczuk@gmail.com>
All benchmarks are done in Rust and can be invoked with
`cargo bench`.
Currently this has its own "harness" that behaves like
`./tools/benchmark.py` did.
Because of this, tests inside `cli/bench` are currently not run.
This should be switched to the language-provided harness
once the `#[bench]` attribute has been stabilized.