# encoding
Helper module for dealing with external data structures.

## Binary

Implements equivalent methods to Go's `encoding/binary` package.

Available Functions:

```ts
sizeof(dataType: RawTypes): number
getNBytes(r: Deno.Reader, n: number): Promise<Uint8Array>
varnum(b: Uint8Array, o: VarnumOptions = {}): number | null
varbig(b: Uint8Array, o: VarbigOptions = {}): bigint | null
putVarnum(b: Uint8Array, x: number, o: VarnumOptions = {}): number
putVarbig(b: Uint8Array, x: bigint, o: VarbigOptions = {}): number
readVarnum(r: Deno.Reader, o: VarnumOptions = {}): Promise<number>
readVarbig(r: Deno.Reader, o: VarbigOptions = {}): Promise<bigint>
writeVarnum(w: Deno.Writer, x: number, o: VarnumOptions = {}): Promise<number>
writeVarbig(w: Deno.Writer, x: bigint, o: VarbigOptions = {}): Promise<number>
```
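
A minimal round-trip sketch with `sizeof`, `putVarnum`, and `varnum`. It
assumes `VarnumOptions` accepts `dataType` and `endian` fields, which are not
spelled out in the signatures above:

```ts
import {
  putVarnum,
  sizeof,
  varnum,
} from "https://deno.land/std@$STD_VERSION/encoding/binary.ts";

// Allocate a buffer sized for a uint16 and write 42 into it (big-endian).
const buf = new Uint8Array(sizeof("uint16"));
putVarnum(buf, 42, { dataType: "uint16", endian: "big" });
console.log(buf);
// => Uint8Array [ 0, 42 ]

// Read the value back out of the buffer.
console.log(varnum(buf, { dataType: "uint16", endian: "big" }));
// => 42
```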

## CSV

### API

#### `readMatrix(reader: BufReader, opt: ReadOptions = { comma: ",", trimLeadingSpace: false, lazyQuotes: false }): Promise<string[][]>`

Parse the CSV from the `reader` with the options provided and return
`string[][]`.

#### `parse(input: string | BufReader, opt: ParseOptions = { skipFirstRow: false }): Promise<unknown[]>`

Parse the CSV string/buffer with the options provided. The result of this
function is as follows:

- If you don't provide `opt.skipFirstRow`, `opt.parse`, and `opt.columns`, it
  returns `string[][]`.
- If you provide `opt.skipFirstRow` or `opt.columns` but not `opt.parse`, it
  returns `object[]`.
- If you provide `opt.parse`, it returns an array where each element is the
  value returned from `opt.parse`.
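
For example (a sketch following the rules above; the exact object shapes are
assumed from those rules):

```ts
import { parse } from "https://deno.land/std@$STD_VERSION/encoding/csv.ts";

const csv = "name,count\ndeno,1\nnode,2";

// No options: every row, including the first, is returned as string[].
console.log(await parse(csv));
// => [["name", "count"], ["deno", "1"], ["node", "2"]]

// skipFirstRow without columns: the first line becomes the header, so the
// result is object[] keyed by those header names.
console.log(await parse(csv, { skipFirstRow: true }));
// => [{ name: "deno", count: "1" }, { name: "node", count: "2" }]
```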

##### `ParseOptions`

- **`skipFirstRow: boolean;`**: If you provide `skipFirstRow: true` and
  `columns`, the first line will be skipped. If you provide
  `skipFirstRow: true` but not `columns`, the first line will be skipped and
  used as header definitions.
- **`columns: string[] | HeaderOptions[];`**: If you provide `string[]` or
  `HeaderOptions[]`, those names will be used for header definition.
- **`parse?: (input: unknown) => unknown;`**: Parse function for the row, which
  will be executed after parsing of all columns. Therefore, if you don't
  provide `skipFirstRow` and `columns`, the input to this function will be
  `string[]`.
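
For example, when both `skipFirstRow: true` and `columns` are given, the header
line in the input is discarded and the provided names are used instead (a
sketch based on the description above):

```ts
import { parse } from "https://deno.land/std@$STD_VERSION/encoding/csv.ts";

const csv = "col_a,col_b\n1,2\n3,4";

console.log(await parse(csv, { skipFirstRow: true, columns: ["a", "b"] }));
// => [{ a: "1", b: "2" }, { a: "3", b: "4" }]
```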

##### `HeaderOptions`

- **`name: string;`**: Name of the header to be used as the property name.
- **`parse?: (input: string) => unknown;`**: Parse function for the column.
  This is executed on each entry of the header. This can be combined with the
  parse function of the rows.
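
A sketch of a per-column `parse` combined with `columns` (the output shape is
assumed from the rules above):

```ts
import { parse } from "https://deno.land/std@$STD_VERSION/encoding/csv.ts";

const csv = "deno,1\nnode,2";

console.log(
  await parse(csv, {
    columns: [
      { name: "name" },
      // Convert the second column from string to number.
      { name: "count", parse: (v: string) => Number(v) },
    ],
  }),
);
// => [{ name: "deno", count: 1 }, { name: "node", count: 2 }]
```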

##### `ReadOptions`

- **`comma?: string;`**: Character which separates values. Default: `','`.
- **`comment?: string;`**: Character to start a comment. Default: `'#'`.
- **`trimLeadingSpace?: boolean;`**: Flag to trim the leading space of the
  value. Default: `false`.
- **`lazyQuotes?: boolean;`**: Allow an unquoted quote in a quoted field, or a
  non-doubled quote in a quoted field. Default: `false`.
- **`fieldsPerRecord?`**: Enables checking the number of fields for each row.
  If it is 0, the first row is used as a reference for the number of fields.
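
These options are what `readMatrix` takes. A sketch with a custom separator and
a comment character (the `BufReader` and `StringReader` import paths are
assumptions based on the std `io` module):

```ts
import { readMatrix } from "https://deno.land/std@$STD_VERSION/encoding/csv.ts";
import { BufReader } from "https://deno.land/std@$STD_VERSION/io/bufio.ts";
import { StringReader } from "https://deno.land/std@$STD_VERSION/io/readers.ts";

const input = "# exported data\na;b;c\nd;e;f";

// The comment line is skipped and values are split on ";".
console.log(
  await readMatrix(new BufReader(new StringReader(input)), {
    comma: ";",
    comment: "#",
  }),
);
// => [["a", "b", "c"], ["d", "e", "f"]]
```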

### Usage

```ts
import { parse } from "https://deno.land/std@$STD_VERSION/encoding/csv.ts";

const string = "a,b,c\nd,e,f";

console.log(
  await parse(string, {
    skipFirstRow: false,
  }),
);
// output:
// [["a", "b", "c"], ["d", "e", "f"]]
```

## TOML

This module parses TOML files. It follows the TOML spec as closely as possible.
Be sure to read the list of supported types, as not every part of the spec is
supported at the moment and the handling on the TypeScript side is a bit
different.

### Supported types and handling

- ✔️ Keys
- ❗ String
- ✔️ Multiline String
- ✔️ Literal String
- ❗ Integer
- ✔️ Float
- ✔️ Boolean
- ✔️ Offset Date-time
- ✔️ Local Date-time
- ✔️ Local Date
- ❗ Local Time
- ✔️ Table
- ✔️ Inline Table
- ❗ Array of Tables

❗ Supported with warnings; see the Warning section below.

### ⚠️ Warning

#### String

- Regex: Due to the spec, there is no flag to properly detect a regex in a TOML
  declaration, so a regex is stored as a string.

#### Integer

Binary / octal / hexadecimal numbers are stored as strings so they are not
interpreted as decimal.

#### Local Time

Because local time does not exist in JavaScript, the local time is stored as a
string.
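
A small sketch of how these warning cases come back from `parse` (the exact
string formatting of the stored values is not asserted here):

```ts
import { parse } from "https://deno.land/std@$STD_VERSION/encoding/toml.ts";

const tomlString = `
hex = 0xDEADBEEF
lunch = 12:30:00
`;

// Per the warnings above, both `hex` and `lunch` come back as strings rather
// than as a number and a time value.
console.log(parse(tomlString));
```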

#### Inline Table

Inline tables are supported. See below:

```toml
animal = { type = { name = "pug" } }
## Output { animal: { type: { name: "pug" } } }
animal = { type.name = "pug" }
## Output { animal: { type: { name: "pug" } } }
animal.as.leaders = "tosin"
## Output { animal: { as: { leaders: "tosin" } } }
"tosin.abasi" = "guitarist"
## Output { tosin.abasi: "guitarist" }
```

#### Array of Tables

At the moment only simple declarations like below are supported:

```toml
[[bin]]
name = "deno"
path = "cli/main.rs"

[[bin]]
name = "deno_core"
path = "src/foo.rs"

[[nib]]
name = "node"
path = "not_found"
```

will output:

```json
{
  "bin": [
    { "name": "deno", "path": "cli/main.rs" },
    { "name": "deno_core", "path": "src/foo.rs" }
  ],
  "nib": [{ "name": "node", "path": "not_found" }]
}
```

### Basic usage

```ts
import {
  parse,
  stringify,
} from "https://deno.land/std@$STD_VERSION/encoding/toml.ts";

const obj = {
  bin: [
    { name: "deno", path: "cli/main.rs" },
    { name: "deno_core", path: "src/foo.rs" },
  ],
  nib: [{ name: "node", path: "not_found" }],
};

const tomlString = stringify(obj);
console.log(tomlString);
// =>
// [[bin]]
// name = "deno"
// path = "cli/main.rs"
// [[bin]]
// name = "deno_core"
// path = "src/foo.rs"
// [[nib]]
// name = "node"
// path = "not_found"

const tomlObject = parse(tomlString);
console.log(tomlObject);
// =>
// {
//   bin: [
//     { name: "deno", path: "cli/main.rs" },
//     { name: "deno_core", path: "src/foo.rs" }
//   ],
//   nib: [ { name: "node", path: "not_found" } ]
// }
```

## YAML

YAML parser / dumper for Deno.

Heavily inspired from [js-yaml](https://github.com/nodeca/js-yaml).

### Basic usage

`parse` parses the YAML string, and `stringify` dumps the given object to a
YAML string.

```ts
import {
  parse,
  stringify,
} from "https://deno.land/std@$STD_VERSION/encoding/yaml.ts";

const data = parse(`
foo: bar
baz:
  - qux
  - quux
`);
console.log(data);
// => { foo: "bar", baz: [ "qux", "quux" ] }

const yaml = stringify({ foo: "bar", baz: ["qux", "quux"] });
console.log(yaml);
// =>
// foo: bar
// baz:
//   - qux
//   - quux
```

If your YAML contains multiple documents in it, you can use `parseAll` for
handling it.

```ts
import { parseAll } from "https://deno.land/std@$STD_VERSION/encoding/yaml.ts";

const data = parseAll(`
---
id: 1
name: Alice
---
id: 2
name: Bob
---
id: 3
name: Eve
`);
console.log(data);
// => [ { id: 1, name: "Alice" }, { id: 2, name: "Bob" }, { id: 3, name: "Eve" } ]
```

### API

#### `parse(str: string, opts?: ParserOption): unknown`

Parses the YAML string with a single document.

#### `parseAll(str: string, iterator?: Function, opts?: ParserOption): unknown`

Parses the YAML string with multiple documents. If the iterator is given, it's
applied to every document instead of returning the array of parsed objects.

#### `stringify(obj: object, opts?: DumpOption): string`

Serializes `object` as a YAML document.
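
A sketch of `parseAll` with the optional iterator, based on the signature
above: each document is handed to the callback instead of being collected into
an array.

```ts
import { parseAll } from "https://deno.land/std@$STD_VERSION/encoding/yaml.ts";

parseAll(
  `
---
id: 1
---
id: 2
`,
  (doc: unknown) => {
    // Called once per document.
    console.log(doc);
  },
);
// => { id: 1 }
// => { id: 2 }
```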

### ⚠️ Limitations

- `binary` type is currently not stable.
- `function`, `regexp`, and `undefined` types are currently not supported.

### More example

See: https://github.com/nodeca/js-yaml

## base32

RFC4648 base32 encoder/decoder for Deno.

### Basic usage

`encode` encodes a `Uint8Array` to an RFC4648 base32 representation, and
`decode` decodes the given RFC4648 base32 representation to a `Uint8Array`.

```ts
import {
  decode,
  encode,
} from "https://deno.land/std@$STD_VERSION/encoding/base32.ts";

const b32Repr = "RC2E6GA=";

const binaryData = decode(b32Repr);
console.log(binaryData);
// => Uint8Array [ 136, 180, 79, 24 ]

console.log(encode(binaryData));
// => RC2E6GA=
```
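
A small round-trip sketch for text, using `TextEncoder`/`TextDecoder` to go
between strings and `Uint8Array`:

```ts
import {
  decode,
  encode,
} from "https://deno.land/std@$STD_VERSION/encoding/base32.ts";

// Encode a UTF-8 string as base32.
const encoded = encode(new TextEncoder().encode("Hello"));
console.log(encoded);
// => JBSWY3DP

// Decode it back to the original string.
console.log(new TextDecoder().decode(decode(encoded)));
// => Hello
```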

## ascii85

Ascii85/base85 encoder and decoder with support for multiple standards.

### Basic usage

`encode` encodes a `Uint8Array` to an ascii85 representation, and `decode`
decodes the given ascii85 representation to a `Uint8Array`.

```ts
import {
  decode,
  encode,
} from "https://deno.land/std@$STD_VERSION/encoding/ascii85.ts";

const a85Repr = "LpTqp";

const binaryData = decode(a85Repr);
console.log(binaryData);
// => Uint8Array [ 136, 180, 79, 24 ]

console.log(encode(binaryData));
// => LpTqp
```

### Specifying a standard and delimiter

By default all functions use the most popular Adobe version of ascii85 and do
not add any delimiter. However, three more standards are supported: btoa
(different delimiter and additional compression of 4 bytes equal to 32), Z85,
and RFC 1924. It's possible to use a different encoding by specifying it in the
`options` object passed as the second parameter.

Similarly, it's possible to make `encode` add a delimiter: `<~` and `~>` for
Adobe, or `xbtoa Begin` and `xbtoa End` with newlines between the delimiters
and the encoded data for btoa. Checksums for btoa are not supported, and
delimiters are not supported by the other encodings.

Encoding examples:

```ts
import { encode } from "https://deno.land/std@$STD_VERSION/encoding/ascii85.ts";

const binaryData = new Uint8Array([136, 180, 79, 24]);

console.log(encode(binaryData));
// => LpTqp

console.log(encode(binaryData, { standard: "Adobe", delimiter: true }));
// => <~LpTqp~>

console.log(encode(binaryData, { standard: "btoa", delimiter: true }));
/* => xbtoa Begin
LpTqp
xbtoa End */

console.log(encode(binaryData, { standard: "RFC 1924" }));
// => h_p`_

console.log(encode(binaryData, { standard: "Z85" }));
// => H{P}{
```