1. Generally we should prefer to use the `log` crate.
2. I very often accidentally commit `eprintln`s.

In the cases where `println` or `eprintln` really are the right tool, it's not
too bad to be a bit more verbose and explicitly ignore the lint rule.
Fixes #23053.
Two small bugs here:
- The existing condition for printing the group header was broken. It only
  worked in the reproducer (in the issue above, when run without filtering) by
  accident, because `self.has_ungrouped` gets set to `true` once we see the
  warmup bench. Since we sort benchmarks to put ungrouped benches first, there
  are only two cases to handle: 1) we are starting the first group, or 2) we
  are ending the previous group and starting a new one (see the sketch after
  this list).
- When you passed `--filter`, we were also applying that filter to the warmup
  bench (which is not visible to users), so filtered runs suffered from JIT
  bias (unless your filter happened to match `<warmup>`).
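For illustration, here is a conceptual sketch of the header-printing logic
described above. The real implementation lives in the Rust bench reporter in
the CLI; the TypeScript below, including the `BenchDesc` shape and the
`reportGroups` helper, is hypothetical and only mirrors the two cases:

```ts
// Hypothetical sketch: benchmarks are assumed to be sorted so that ungrouped
// benches come first, mirroring the ordering described above.
interface BenchDesc {
  name: string;
  group?: string;
}

function reportGroups(benches: BenchDesc[]) {
  let currentGroup: string | undefined;
  for (const bench of benches) {
    // Print a header only when we enter a group that differs from the
    // previous one: either we are starting the first group, or we are ending
    // the previous group and starting a new one.
    if (bench.group !== undefined && bench.group !== currentGroup) {
      console.log(`group ${bench.group}`);
    }
    currentGroup = bench.group;
    console.log(bench.name);
  }
}

// With the reproducer below, this prints the "group G1" header exactly once.
reportGroups([
  { name: "G1-A", group: "G1" },
  { name: "G1-B", group: "G1" },
]);
```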
TL;DR: running
```bash
deno bench main.js --filter="G"
```
against this `main.js`:
```js
// main.js
Deno.bench({
group: "G1",
name: "G1-A",
fn() {},
});
Deno.bench({
group: "G1",
name: "G1-B",
fn() {},
});
```
Before this PR:
```
benchmark    time (avg)            iter/s           (min … max)            p75       p99       p995
--------------------------------------------------------------- -----------------------------
G1-A         303.52 ps/iter   3,294,726,102.1   (254.2 ps … 7.8 ns)    287.5 ps   391.7 ps   437.5 ps
G1-B            3.8 ns/iter     263,360,635.9   (2.24 ns … 8.36 ns)     3.84 ns    4.73 ns    4.94 ns

summary
  G1-A
   12.51x faster than G1-B
```
After this PR:
```
benchmark    time (avg)            iter/s           (min … max)            p75       p99       p995
--------------------------------------------------------------- -----------------------------
group G1
G1-A           3.85 ns/iter     259,822,096.0   (2.42 ns … 9.03 ns)     3.83 ns    4.62 ns    4.83 ns
G1-B           3.84 ns/iter     260,458,274.5   (3.55 ns … 7.05 ns)     3.83 ns    4.45 ns     4.7 ns

summary
  G1-B
   1x faster than G1-A
```
Disables `BenchContext::start()` and `BenchContext::end()` for low-precision
benchmarks (less than 0.01s per iteration). When they are used in such a
benchmark, a warning is printed suggesting that they be removed.
```ts
Deno.bench("noop", { group: "noops" }, () => {});
Deno.bench("noop with start/end", { group: "noops" }, (b) => {
b.start();
b.end();
});
```
Before:
```
cpu: 12th Gen Intel(R) Core(TM) i9-12900K
runtime: deno 1.36.2 (x86_64-unknown-linux-gnu)
file:///home/nayeem/projects/deno/temp3.ts
benchmark                  time (avg)            iter/s            (min … max)           p75       p99      p995
----------------------------------------------------------------------------- -----------------------------
noop                         2.63 ns/iter    380,674,131.4   (2.45 ns … 27.78 ns)    2.55 ns   4.03 ns   5.33 ns
noop with start and end    302.47 ns/iter        3,306,146.0    (200 ns … 151.2 µs)     300 ns    400 ns    400 ns

summary
  noop
   115.14x faster than noop with start and end
```
After:
```
cpu: 12th Gen Intel(R) Core(TM) i9-12900K
runtime: deno 1.36.1 (x86_64-unknown-linux-gnu)
file:///home/nayeem/projects/deno/temp3.ts
benchmark                  time (avg)            iter/s            (min … max)           p75        p99       p995
----------------------------------------------------------------------------- -----------------------------
noop                         3.01 ns/iter    332,565,561.7   (2.73 ns … 29.54 ns)    2.93 ns    5.29 ns    7.45 ns
noop with start and end      7.73 ns/iter    129,291,091.5   (6.61 ns … 46.76 ns)    7.87 ns   13.12 ns   15.32 ns
Warning start() and end() calls in "noop with start and end" are ignored because it averages less than 0.01s per iteration. Remove them for better results.

summary
  noop
   2.57x faster than noop with start and end
```
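For contrast, here is a minimal sketch (not part of this PR) of the kind of
benchmark where `b.start()`/`b.end()` still take effect, since the timed
region stays well above the 0.01s threshold on typical hardware; the workload
and element count are arbitrary:

```ts
Deno.bench("sort 1M numbers", (b) => {
  // Setup runs outside the timed region: build a fresh unsorted array each
  // iteration so the sort always does real work, and so its cost is excluded
  // from the measurement.
  const data = Array.from({ length: 1_000_000 }, () => Math.random());

  // Only the sort itself is measured; at this size an iteration takes far
  // longer than 0.01s, so the start()/end() calls are honored, not ignored.
  b.start();
  data.sort((x, y) => x - y);
  b.end();
});
```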