Testing
This module provides a few basic utilities to make testing easier and consistent in Deno.
Usage
The testing/asserts.ts module provides a range of assertion helpers. If an assertion fails, an AssertionError is thrown, which results in a pretty-printed diff of the failing assertion.
- equal() - Deep comparison function, where actual and expected are compared deeply, and if they vary, equal returns false.
- assert() - Expects a boolean value, throws if the value is false.
- assertEquals() - Uses the equal comparison and throws if actual and expected are not equal.
- assertNotEquals() - Uses the equal comparison and throws if actual and expected are equal.
- assertStrictEquals() - Compares actual and expected strictly; for non-primitives the values must therefore reference the same instance.
- assertStringIncludes() - Makes an assertion that actual includes expected.
- assertMatch() - Makes an assertion that actual matches the RegExp expected.
- assertNotMatch() - Makes an assertion that actual does not match the RegExp expected.
- assertArrayIncludes() - Makes an assertion that the actual array includes the expected values.
- assertObjectMatch() - Makes an assertion that the actual object matches the expected subset object.
- assertThrows() - Expects the passed fn to throw. If fn does not throw, this function does. Also compares any error thrown to an optional expected Error class and checks that the error message includes an optional string.
- assertThrowsAsync() - Expects the passed fn to be async and to throw (or return a Promise that rejects). If fn does not throw or reject, this function will throw asynchronously. Also compares any error thrown to an optional expected Error class and checks that the error message includes an optional string.
- unimplemented() - Use this to stub out methods that will throw when invoked.
- unreachable() - Used to assert unreachable code.
Basic usage:
import { assertEquals } from "https://deno.land/std@$STD_VERSION/testing/asserts.ts";
Deno.test({
name: "testing example",
fn(): void {
assertEquals("world", "world");
assertEquals({ hello: "world" }, { hello: "world" });
},
});
Short syntax (named function instead of object):
Deno.test("example", function (): void {
assertEquals("world", "world");
assertEquals({ hello: "world" }, { hello: "world" });
});
Using assertStrictEquals():
Deno.test("isStrictlyEqual", function (): void {
const a = {};
const b = a;
assertStrictEquals(a, b);
});
// This test fails
Deno.test("isNotStrictlyEqual", function (): void {
const a = {};
const b = {};
assertStrictEquals(a, b);
});
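assertStringIncludes(), assertMatch(), assertArrayIncludes() and assertObjectMatch() follow the same pattern; a minimal sketch combining them in a single test:

import {
  assertArrayIncludes,
  assertMatch,
  assertObjectMatch,
  assertStringIncludes,
} from "https://deno.land/std@$STD_VERSION/testing/asserts.ts";

Deno.test("otherAssertions", function (): void {
  // Substring check.
  assertStringIncludes("hello world", "world");
  // Regular expression match.
  assertMatch("hello world", /^hello/);
  // Every expected value must be present in the actual array.
  assertArrayIncludes([1, 2, 3, 4], [2, 4]);
  // The actual object must contain the expected subset.
  assertObjectMatch({ a: 1, b: 2, c: 3 }, { a: 1, c: 3 });
});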
Using assertThrows():
Deno.test("doesThrow", function (): void {
assertThrows((): void => {
throw new TypeError("hello world!");
});
assertThrows((): void => {
throw new TypeError("hello world!");
}, TypeError);
assertThrows(
(): void => {
throw new TypeError("hello world!");
},
TypeError,
"hello",
);
});
// This test will not pass.
Deno.test("fails", function (): void {
assertThrows((): void => {
console.log("Hello world");
});
});
Using assertThrowsAsync():
Deno.test("doesThrow", async function (): Promise<void> {
await assertThrowsAsync(
async (): Promise<void> => {
throw new TypeError("hello world!");
},
);
await assertThrowsAsync(async (): Promise<void> => {
throw new TypeError("hello world!");
}, TypeError);
await assertThrowsAsync(
async (): Promise<void> => {
throw new TypeError("hello world!");
},
TypeError,
"hello",
);
await assertThrowsAsync(
async (): Promise<void> => {
return Promise.reject(new Error());
},
);
});
// This test will not pass.
Deno.test("fails", async function (): Promise<void> {
await assertThrowsAsync(
async (): Promise<void> => {
console.log("Hello world");
},
);
});
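unimplemented() and unreachable() both throw as soon as they are called, so they are typically used as placeholder bodies or in branches that must never execute. A minimal sketch:

import {
  assertThrows,
  unimplemented,
  unreachable,
} from "https://deno.land/std@$STD_VERSION/testing/asserts.ts";

Deno.test("stubsAndDeadCode", function (): void {
  // A stubbed-out method throws when invoked.
  const stub = {
    notReadyYet(): void {
      unimplemented();
    },
  };
  assertThrows((): void => {
    stub.notReadyYet();
  });

  // unreachable() marks code paths that must never be reached.
  const value: number = 1;
  if (value !== 1) {
    unreachable();
  }
});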
Benching
With this module you can benchmark your code and get information on how it is performing.
Basic usage:
Benchmarks can be registered using the bench function, where you define the code that should be benchmarked. b.start() has to be called at the start of the part you want to benchmark and b.stop() at the end of it, otherwise an error will be thrown.

After that, simply calling runBenchmarks() will run all registered benchmarks and log the results to the command line.
import {
bench,
runBenchmarks,
} from "https://deno.land/std@$STD_VERSION/testing/bench.ts";
bench(function forIncrementX1e9(b): void {
b.start();
for (let i = 0; i < 1e9; i++);
b.stop();
});
runBenchmarks();
Averaging execution time over multiple runs:
bench({
name: "runs100ForIncrementX1e6",
runs: 100,
func(b): void {
b.start();
for (let i = 0; i < 1e6; i++);
b.stop();
},
});
Running specific benchmarks using regular expressions:
runBenchmarks({ only: /desired/, skip: /exceptions/ });
Processing benchmark results
runBenchmarks() returns a Promise<BenchmarkRunResult>, so you can process the benchmarking results yourself. It contains detailed results of each benchmark's run as BenchmarkResult objects.
import {
  BenchmarkRunResult,
  runBenchmarks,
} from "https://deno.land/std@$STD_VERSION/testing/bench.ts";

runBenchmarks()
.then((results: BenchmarkRunResult) => {
console.log(results);
})
.catch((error: Error) => {
// ... errors if benchmark was badly constructed.
});
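The same result can also be consumed with await. A short sketch, assuming BenchmarkRunResult carries a filtered count and a results array of BenchmarkResults with name, runsCount, and measuredRunsAvgMs fields:

import {
  BenchmarkRunResult,
  runBenchmarks,
} from "https://deno.land/std@$STD_VERSION/testing/bench.ts";

const { filtered, results }: BenchmarkRunResult = await runBenchmarks();
console.log(`Filtered out ${filtered} benchmark(s).`);
for (const result of results) {
  // Average measured time per run, in milliseconds.
  console.log(
    `${result.name}: ${result.measuredRunsAvgMs}ms (avg over ${result.runsCount} runs)`,
  );
}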
Processing benchmarking progress
runBenchmarks() accepts an optional progress handler callback function, so you can get information on the progress of the running benchmarks.

Using { silent: true } means you won't see the default progression logs on the command line.
import {
  BenchmarkRunProgress,
  ProgressState,
  runBenchmarks,
} from "https://deno.land/std@$STD_VERSION/testing/bench.ts";

runBenchmarks({ silent: true }, (p: BenchmarkRunProgress) => {
// initial progress data.
if (p.state === ProgressState.BenchmarkingStart) {
console.log(
`Starting benchmarking. Queued: ${p.queued.length}, filtered: ${p.filtered}`,
);
}
// ...
});
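Later progress events can be handled in the same callback. A sketch, assuming the ProgressState enum also exposes BenchResult and BenchmarkingEnd values and that the progress object carries the accumulated results:

import {
  BenchmarkRunProgress,
  ProgressState,
  runBenchmarks,
} from "https://deno.land/std@$STD_VERSION/testing/bench.ts";

runBenchmarks({ silent: true }, (p: BenchmarkRunProgress) => {
  // A single benchmark has finished; its result is the newest entry.
  if (p.state === ProgressState.BenchResult) {
    const latest = p.results[p.results.length - 1];
    console.log(`Finished: ${latest.name} (${latest.measuredRunsAvgMs}ms avg)`);
  }
  // Every registered benchmark has been run.
  if (p.state === ProgressState.BenchmarkingEnd) {
    console.log(`Benchmarking complete. ${p.results.length} results collected.`);
  }
});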
Benching API
bench(benchmark: BenchmarkDefinition | BenchmarkFunction): void
Registers a benchmark that will be run once runBenchmarks is called.
runBenchmarks(opts?: BenchmarkRunOptions, progressCb?: (p: BenchmarkRunProgress) => void | Promise<void>): Promise<BenchmarkRunResult>
Runs all registered benchmarks serially. Filtering can be applied by setting BenchmarkRunOptions.only and/or BenchmarkRunOptions.skip to regular expressions matching benchmark names. Default progression logs can be turned off with the BenchmarkRunOptions.silent flag.
clearBenchmarks(opts?: BenchmarkClearOptions): void
Clears all registered benchmarks, so calling runBenchmarks() afterwards won't run them. Filtering can be applied by setting BenchmarkRunOptions.only and/or BenchmarkRunOptions.skip to regular expressions matching benchmark names.
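A short sketch of clearing a subset of registered benchmarks before running the rest (the benchmark names here are made up for illustration):

import {
  bench,
  clearBenchmarks,
  runBenchmarks,
} from "https://deno.land/std@$STD_VERSION/testing/bench.ts";

bench(function keepThisOne(b): void {
  b.start();
  for (let i = 0; i < 1e6; i++);
  b.stop();
});

bench(function clearThisOne(b): void {
  b.start();
  for (let i = 0; i < 1e6; i++);
  b.stop();
});

// Remove only the benchmarks whose names match /clear/;
// the remaining ones are still run below.
clearBenchmarks({ only: /clear/ });
runBenchmarks();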