Initial commit: notification-elements-demo app

Interactive Angular 19 demo for @sda/notification-elements-ui with
6 sections: Bell & Feed, Notification Center, Inbox, Comments &
Threads, Mention Input, and Full-Featured layout. Includes mock
data, dark mode toggle, and real-time event log.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Giuliano Silvestro
2026-02-13 21:49:19 +10:00
commit 5d0c9ec7eb
36473 changed files with 3778146 additions and 0 deletions

node_modules/@discoveryjs/json-ext/LICENSE generated vendored Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2020-2024 Roman Dvornov <rdvornov@gmail.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

node_modules/@discoveryjs/json-ext/README.md generated vendored Normal file

@@ -0,0 +1,290 @@
# json-ext
[![NPM version](https://img.shields.io/npm/v/@discoveryjs/json-ext.svg)](https://www.npmjs.com/package/@discoveryjs/json-ext)
[![Build Status](https://github.com/discoveryjs/json-ext/actions/workflows/ci.yml/badge.svg)](https://github.com/discoveryjs/json-ext/actions/workflows/ci.yml)
[![Coverage Status](https://coveralls.io/repos/github/discoveryjs/json-ext/badge.svg?branch=master)](https://coveralls.io/github/discoveryjs/json-ext)
[![NPM Downloads](https://img.shields.io/npm/dm/@discoveryjs/json-ext.svg)](https://www.npmjs.com/package/@discoveryjs/json-ext)
A set of utilities designed to extend JSON's capabilities, especially for handling large JSON data (over 100MB) efficiently:
- [parseChunked()](#parsechunked) Parses JSON incrementally; similar to [`JSON.parse()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse), but processing JSON data in chunks.
- [stringifyChunked()](#stringifychunked) Converts JavaScript objects to JSON incrementally; similar to [`JSON.stringify()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify), but returns a generator that yields JSON strings in parts.
- [stringifyInfo()](#stringifyinfo) Estimates the size of the `JSON.stringify()` result and identifies circular references without generating the JSON.
- [parseFromWebStream()](#parsefromwebstream) A helper function to parse JSON chunks directly from a Web Stream.
- [createStringifyWebStream()](#createstringifywebstream) A helper function to generate JSON data as a Web Stream.
### Key Features
- Optimized to handle large JSON data with minimal resource usage (see [benchmarks](./benchmarks/README.md))
- Works seamlessly with browsers, Node.js, Deno, and Bun
- Supports both Node.js and Web streams
- Available in both ESM and CommonJS
- TypeScript typings included
- No external dependencies
- Compact size: 9.4kB (minified), 3.8kB (min+gzip)
### Why json-ext?
- **Handles large JSON files**: Overcomes the limitations of V8 for strings larger than ~500MB, enabling the processing of huge JSON data.
- **Prevents main thread blocking**: Distributes parsing and stringifying over time, ensuring the main thread remains responsive during heavy JSON operations.
- **Reduces memory usage**: Traditional `JSON.parse()` and `JSON.stringify()` require loading entire data into memory, leading to high memory consumption and increased garbage collection pressure. `parseChunked()` and `stringifyChunked()` process data incrementally, optimizing memory usage.
- **Size estimation**: `stringifyInfo()` allows estimating the size of resulting JSON before generating it, enabling better decision-making for JSON generation strategies.
## Install
```bash
npm install @discoveryjs/json-ext
```
## API
### parseChunked()
Functions like [`JSON.parse()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/parse), iterating over chunks to reconstruct the result object, and returns a [Promise](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise).
> Note: `reviver` parameter is not supported yet.
```ts
function parseChunked(input: Iterable<Chunk> | AsyncIterable<Chunk>): Promise<any>;
function parseChunked(input: () => (Iterable<Chunk> | AsyncIterable<Chunk>)): Promise<any>;
type Chunk = string | Buffer | Uint8Array;
```
[Benchmark](https://github.com/discoveryjs/json-ext/tree/master/benchmarks#parse-chunked)
Usage:
```js
import { parseChunked } from '@discoveryjs/json-ext';
const data = await parseChunked(chunkEmitter);
```
Parameter `chunkEmitter` can be an iterable or async iterable that iterates over chunks, or a function returning such a value. A chunk can be a `string`, `Uint8Array`, or Node.js `Buffer`.
Examples:
- Generator:
```js
parseChunked(function*() {
    yield '{ "hello":';
    yield Buffer.from(' "wor'); // Node.js only
    yield new TextEncoder().encode('ld" }'); // returns Uint8Array
});
```
- Async generator:
```js
parseChunked(async function*() {
    for await (const chunk of someAsyncSource) {
        yield chunk;
    }
});
```
- Array:
```js
parseChunked(['{ "hello":', ' "world"}'])
```
- Function returning iterable:
```js
parseChunked(() => ['{ "hello":', ' "world"}'])
```
- Node.js [`Readable`](https://nodejs.org/dist/latest-v14.x/docs/api/stream.html#stream_readable_streams) stream:
```js
import fs from 'node:fs';
parseChunked(fs.createReadStream('path/to/file.json'))
```
- Web stream (e.g., using [fetch()](https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API)):
> Note: Iterability for Web streams was added to the Web platform later, so not all environments support it. Consider using `parseFromWebStream()` for broader compatibility.
```js
const response = await fetch('https://example.com/data.json');
const data = await parseChunked(response.body); // body is ReadableStream
```
### stringifyChunked()
Functions like [`JSON.stringify()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify), but returns a generator yielding strings instead of a single string.
> Note: Returns `"null"` when `JSON.stringify()` returns `undefined` (since a chunk cannot be `undefined`).
```ts
function stringifyChunked(value: any, replacer?: Replacer, space?: Space): Generator<string, void, unknown>;
function stringifyChunked(value: any, options: StringifyOptions): Generator<string, void, unknown>;
type Replacer =
    | ((this: any, key: string, value: any) => any)
    | (string | number)[]
    | null;
type Space = string | number | null;
type StringifyOptions = {
    replacer?: Replacer;
    space?: Space;
    highWaterMark?: number;
};
```
[Benchmark](https://github.com/discoveryjs/json-ext/tree/master/benchmarks#stream-stringifying)
Usage:
- Getting an array of chunks:
```js
const chunks = [...stringifyChunked(data)];
```
- Iterating over chunks:
```js
for (const chunk of stringifyChunked(data)) {
    console.log(chunk);
}
```
- Specifying the minimum size of a chunk with `highWaterMark` option:
```js
const data = [1, "hello world", 42];
console.log([...stringifyChunked(data)]); // default 16kB
// ['[1,"hello world",42]']
console.log([...stringifyChunked(data, { highWaterMark: 16 })]);
// ['[1,"hello world"', ',42]']
console.log([...stringifyChunked(data, { highWaterMark: 1 })]);
// ['[1', ',"hello world"', ',42', ']']
```
- Streaming into a writable stream with a `Promise` (modern Node.js):
```js
import { pipeline } from 'node:stream/promises';
import fs from 'node:fs';
await pipeline(
    stringifyChunked(data),
    fs.createWriteStream('path/to/file.json')
);
```
- Wrapping stream piping into a `Promise` (legacy Node.js):
```js
import { Readable } from 'node:stream';
new Promise((resolve, reject) => {
    Readable.from(stringifyChunked(data))
        .on('error', reject)
        .pipe(stream)
        .on('error', reject)
        .on('finish', resolve);
});
```
- Writing into a file synchronously:
> Note: Slower than `JSON.stringify()` but uses much less heap space and has no limitation on string length
```js
import fs from 'node:fs';
const fd = fs.openSync('output.json', 'w');
for (const chunk of stringifyChunked(data)) {
    fs.writeFileSync(fd, chunk);
}
fs.closeSync(fd);
```
- Using with fetch (JSON streaming):
> Note: This feature has limited support in browsers, see [Streaming requests with the fetch API](https://developer.chrome.com/docs/capabilities/web-apis/fetch-streaming-requests)
> Note: `ReadableStream.from()` has limited [support in browsers](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream/from_static), use [`createStringifyWebStream()`](#createstringifywebstream) instead.
```js
fetch('http://example.com', {
    method: 'POST',
    duplex: 'half',
    body: ReadableStream.from(stringifyChunked(data))
});
```
- Wrapping into `ReadableStream`:
> Note: Use `ReadableStream.from()` or [`createStringifyWebStream()`](#createstringifywebstream) when no extra logic is needed
```js
new ReadableStream({
    start() {
        this.generator = stringifyChunked(data);
    },
    pull(controller) {
        const { value, done } = this.generator.next();
        if (done) {
            controller.close();
        } else {
            controller.enqueue(value);
        }
    },
    cancel() {
        this.generator = null;
    }
});
```
### stringifyInfo()
```ts
export function stringifyInfo(value: any, replacer?: Replacer, space?: Space): StringifyInfoResult;
export function stringifyInfo(value: any, options?: StringifyInfoOptions): StringifyInfoResult;
type StringifyInfoOptions = {
    replacer?: Replacer;
    space?: Space;
    continueOnCircular?: boolean;
}
type StringifyInfoResult = {
    bytes: number;      // size of JSON in bytes
    spaceBytes: number; // size of white spaces in bytes (when space option used)
    circular: object[]; // list of circular references
};
```
Functions like [`JSON.stringify()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON/stringify), but returns an object with the expected overall size of the stringify operation and a list of circular references.
Example:
```js
import { stringifyInfo } from '@discoveryjs/json-ext';
console.log(stringifyInfo({ test: true }, null, 4));
// {
//     bytes: 20,      // Buffer.byteLength('{\n    "test": true\n}')
//     spaceBytes: 7,
//     circular: []
// }
```
#### Options
##### continueOnCircular
Type: `Boolean`
Default: `false`
Determines whether to continue collecting info for a value when a circular reference is found. Setting this option to `true` allows finding all circular references.
### parseFromWebStream()
A helper function to consume JSON from a Web Stream. You can use `parseChunked(stream)` instead, but `@@asyncIterator` on `ReadableStream` has limited support in browsers (see [ReadableStream](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream) compatibility table).
```js
import { parseFromWebStream } from '@discoveryjs/json-ext';
const data = await parseFromWebStream(readableStream);
// equivalent to (when ReadableStream[@@asyncIterator] is supported):
// await parseChunked(readableStream);
```
### createStringifyWebStream()
A helper function to convert `stringifyChunked()` into a `ReadableStream` (Web Stream). You can use `ReadableStream.from()` instead, but this method has limited support in browsers (see [ReadableStream.from()](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream/from_static) compatibility table).
```js
import { createStringifyWebStream } from '@discoveryjs/json-ext';
createStringifyWebStream({ test: true });
// equivalent to (when ReadableStream.from() is supported):
// ReadableStream.from(stringifyChunked({ test: true }))
```
## License
MIT

node_modules/@discoveryjs/json-ext/cjs/index.cjs generated vendored Normal file

@@ -0,0 +1,14 @@
'use strict';
const parseChunked = require('./parse-chunked.cjs');
const stringifyChunked = require('./stringify-chunked.cjs');
const stringifyInfo = require('./stringify-info.cjs');
const webStreams = require('./web-streams.cjs');
exports.parseChunked = parseChunked.parseChunked;
exports.stringifyChunked = stringifyChunked.stringifyChunked;
exports.stringifyInfo = stringifyInfo.stringifyInfo;
exports.createStringifyWebStream = webStreams.createStringifyWebStream;
exports.parseFromWebStream = webStreams.parseFromWebStream;


@@ -0,0 +1,355 @@
'use strict';
const utils = require('./utils.cjs');
const STACK_OBJECT = 1;
const STACK_ARRAY = 2;
const decoder = new TextDecoder();
function adjustPosition(error, parser) {
if (error.name === 'SyntaxError' && parser.jsonParseOffset) {
error.message = error.message.replace(/at position (\d+)/, (_, pos) =>
'at position ' + (Number(pos) + parser.jsonParseOffset)
);
}
return error;
}
function append(array, elements) {
// Note: Avoid using array.push(...elements) since it may lead to
// "RangeError: Maximum call stack size exceeded" for long arrays
const initialLength = array.length;
array.length += elements.length;
for (let i = 0; i < elements.length; i++) {
array[initialLength + i] = elements[i];
}
}
async function parseChunked(chunkEmitter) {
const iterable = typeof chunkEmitter === 'function'
? chunkEmitter()
: chunkEmitter;
if (utils.isIterable(iterable)) {
let parser = new ChunkParser();
try {
for await (const chunk of iterable) {
if (typeof chunk !== 'string' && !ArrayBuffer.isView(chunk)) {
throw new TypeError('Invalid chunk: Expected string, TypedArray or Buffer');
}
parser.push(chunk);
}
return parser.finish();
} catch (e) {
throw adjustPosition(e, parser);
}
}
throw new TypeError(
'Invalid chunk emitter: Expected an Iterable, AsyncIterable, generator, ' +
'async generator, or a function returning an Iterable or AsyncIterable'
);
}
class ChunkParser {
constructor() {
this.value = undefined;
this.valueStack = null;
this.stack = new Array(100);
this.lastFlushDepth = 0;
this.flushDepth = 0;
this.stateString = false;
this.stateStringEscape = false;
this.pendingByteSeq = null;
this.pendingChunk = null;
this.chunkOffset = 0;
this.jsonParseOffset = 0;
}
parseAndAppend(fragment, wrap) {
// Append new entries or elements
if (this.stack[this.lastFlushDepth - 1] === STACK_OBJECT) {
if (wrap) {
this.jsonParseOffset--;
fragment = '{' + fragment + '}';
}
Object.assign(this.valueStack.value, JSON.parse(fragment));
} else {
if (wrap) {
this.jsonParseOffset--;
fragment = '[' + fragment + ']';
}
append(this.valueStack.value, JSON.parse(fragment));
}
}
prepareAddition(fragment) {
const { value } = this.valueStack;
const expectComma = Array.isArray(value)
? value.length !== 0
: Object.keys(value).length !== 0;
if (expectComma) {
// Skip a comma at the beginning of fragment, otherwise it would
// fail to parse
if (fragment[0] === ',') {
this.jsonParseOffset++;
return fragment.slice(1);
}
// When the value (an object or array) is not empty and the fragment
// doesn't start with a comma, the only valid fragment start
// is a closing bracket. If it's not, a prefix is added to force
// a parse failure. Otherwise, the sequence of chunks could parse
// successfully although it should not, e.g. ["[{}", "{}]"]
if (fragment[0] !== '}' && fragment[0] !== ']') {
this.jsonParseOffset -= 3;
return '[[]' + fragment;
}
}
return fragment;
}
flush(chunk, start, end) {
let fragment = chunk.slice(start, end);
// Save position correction for an error in JSON.parse() if any
this.jsonParseOffset = this.chunkOffset + start;
// Prepend pending chunk if any
if (this.pendingChunk !== null) {
fragment = this.pendingChunk + fragment;
this.jsonParseOffset -= this.pendingChunk.length;
this.pendingChunk = null;
}
if (this.flushDepth === this.lastFlushDepth) {
// Depth didn't change, so it's a root value or a set of entries/elements
if (this.flushDepth > 0) {
this.parseAndAppend(this.prepareAddition(fragment), true);
} else {
// That's an entire value on a top level
this.value = JSON.parse(fragment);
this.valueStack = {
value: this.value,
prev: null
};
}
} else if (this.flushDepth > this.lastFlushDepth) {
// Add missing closing brackets/braces
for (let i = this.flushDepth - 1; i >= this.lastFlushDepth; i--) {
fragment += this.stack[i] === STACK_OBJECT ? '}' : ']';
}
if (this.lastFlushDepth === 0) {
// That's a root value
this.value = JSON.parse(fragment);
this.valueStack = {
value: this.value,
prev: null
};
} else {
this.parseAndAppend(this.prepareAddition(fragment), true);
}
// Move down the stack to the last object/array, which is now the current one
for (let i = this.lastFlushDepth || 1; i < this.flushDepth; i++) {
let value = this.valueStack.value;
if (this.stack[i - 1] === STACK_OBJECT) {
// find last entry
let key;
// eslint-disable-next-line curly
for (key in value);
value = value[key];
} else {
// last element
value = value[value.length - 1];
}
this.valueStack = {
value,
prev: this.valueStack
};
}
} else /* this.flushDepth < this.lastFlushDepth */ {
fragment = this.prepareAddition(fragment);
// Add missing opening brackets/braces
for (let i = this.lastFlushDepth - 1; i >= this.flushDepth; i--) {
this.jsonParseOffset--;
fragment = (this.stack[i] === STACK_OBJECT ? '{' : '[') + fragment;
}
this.parseAndAppend(fragment, false);
for (let i = this.lastFlushDepth - 1; i >= this.flushDepth; i--) {
this.valueStack = this.valueStack.prev;
}
}
this.lastFlushDepth = this.flushDepth;
}
push(chunk) {
if (typeof chunk !== 'string') {
// Assume chunk is a Buffer or Uint8Array
// Prepend an incomplete byte sequence if any
if (this.pendingByteSeq !== null) {
const origRawChunk = chunk;
chunk = new Uint8Array(this.pendingByteSeq.length + origRawChunk.length);
chunk.set(this.pendingByteSeq);
chunk.set(origRawChunk, this.pendingByteSeq.length);
this.pendingByteSeq = null;
}
// For a Buffer/Uint8Array, the input is UTF8-encoded
// Look for an incomplete UTF8 byte sequence at the end of the chunk
// This matters only when more chunks are expected and the trailing
// bytes may form a truncated multi-byte character
if (chunk[chunk.length - 1] > 127) {
for (let seqLength = 0; seqLength < chunk.length; seqLength++) {
const byte = chunk[chunk.length - 1 - seqLength];
// 10xxxxxx - 2nd, 3rd or 4th byte
// 110xxxxx - first byte of 2-byte sequence
// 1110xxxx - first byte of 3-byte sequence
// 11110xxx - first byte of 4-byte sequence
if (byte >> 6 === 3) {
seqLength++;
// If the sequence is really incomplete, preserve it
// for the next chunk and cut it off from the current chunk
if ((seqLength !== 4 && byte >> 3 === 0b11110) ||
(seqLength !== 3 && byte >> 4 === 0b1110) ||
(seqLength !== 2 && byte >> 5 === 0b110)) {
this.pendingByteSeq = chunk.slice(chunk.length - seqLength);
chunk = chunk.slice(0, -seqLength);
}
break;
}
}
}
// Convert chunk to a string, since a single decode per chunk
// is much more efficient than decoding multiple small substrings
chunk = decoder.decode(chunk);
}
const chunkLength = chunk.length;
let lastFlushPoint = 0;
let flushPoint = 0;
// Main scan loop
scan: for (let i = 0; i < chunkLength; i++) {
if (this.stateString) {
for (; i < chunkLength; i++) {
if (this.stateStringEscape) {
this.stateStringEscape = false;
} else {
switch (chunk.charCodeAt(i)) {
case 0x22: /* " */
this.stateString = false;
continue scan;
case 0x5C: /* \ */
this.stateStringEscape = true;
}
}
}
break;
}
switch (chunk.charCodeAt(i)) {
case 0x22: /* " */
this.stateString = true;
this.stateStringEscape = false;
break;
case 0x2C: /* , */
flushPoint = i;
break;
case 0x7B: /* { */
// Open an object
flushPoint = i + 1;
this.stack[this.flushDepth++] = STACK_OBJECT;
break;
case 0x5B: /* [ */
// Open an array
flushPoint = i + 1;
this.stack[this.flushDepth++] = STACK_ARRAY;
break;
case 0x5D: /* ] */
case 0x7D: /* } */
// Close an object or array
flushPoint = i + 1;
this.flushDepth--;
if (this.flushDepth < this.lastFlushDepth) {
this.flush(chunk, lastFlushPoint, flushPoint);
lastFlushPoint = flushPoint;
}
break;
case 0x09: /* \t */
case 0x0A: /* \n */
case 0x0D: /* \r */
case 0x20: /* space */
// Move the points forward when they point at the current position and it's a whitespace
if (lastFlushPoint === i) {
lastFlushPoint++;
}
if (flushPoint === i) {
flushPoint++;
}
break;
}
}
if (flushPoint > lastFlushPoint) {
this.flush(chunk, lastFlushPoint, flushPoint);
}
// Produce a pendingChunk if something is left
if (flushPoint < chunkLength) {
if (this.pendingChunk !== null) {
// When there is already a pending chunk, no flush happened,
// so append the entire chunk to the pending one
this.pendingChunk += chunk;
} else {
// Create a pending chunk, it will start with non-whitespace since
// flushPoint was moved forward away from whitespaces on scan
this.pendingChunk = chunk.slice(flushPoint, chunkLength);
}
}
this.chunkOffset += chunkLength;
}
finish() {
if (this.pendingChunk !== null) {
this.flush('', 0, 0);
this.pendingChunk = null;
}
return this.value;
}
}
exports.parseChunked = parseChunked;


@@ -0,0 +1,175 @@
'use strict';
const utils = require('./utils.cjs');
function encodeString(value) {
if (/[^\x20\x21\x23-\x5B\x5D-\uD799]/.test(value)) { // [^\x20-\uD799]|[\x22\x5c]
return JSON.stringify(value);
}
return '"' + value + '"';
}
function* stringifyChunked(value, ...args) {
const { replacer, getKeys, space, ...options } = utils.normalizeStringifyOptions(...args);
const highWaterMark = Number(options.highWaterMark) || 0x4000; // 16kB by default
const keyStrings = new Map();
const stack = [];
const rootValue = { '': value };
let prevState = null;
let state = () => printEntry('', value);
let stateValue = rootValue;
let stateEmpty = true;
let stateKeys = [''];
let stateIndex = 0;
let buffer = '';
while (true) {
state();
if (buffer.length >= highWaterMark || prevState === null) {
// flush buffer
yield buffer;
buffer = '';
if (prevState === null) {
break;
}
}
}
function printObject() {
if (stateIndex === 0) {
stateKeys = getKeys(stateValue);
buffer += '{';
}
// when no keys are left
if (stateIndex === stateKeys.length) {
buffer += space && !stateEmpty
? `\n${space.repeat(stack.length - 1)}}`
: '}';
popState();
return;
}
const key = stateKeys[stateIndex++];
printEntry(key, stateValue[key]);
}
function printArray() {
if (stateIndex === 0) {
buffer += '[';
}
if (stateIndex === stateValue.length) {
buffer += space && !stateEmpty
? `\n${space.repeat(stack.length - 1)}]`
: ']';
popState();
return;
}
printEntry(stateIndex, stateValue[stateIndex++]);
}
function printEntryPrelude(key) {
if (stateEmpty) {
stateEmpty = false;
} else {
buffer += ',';
}
if (space && prevState !== null) {
buffer += `\n${space.repeat(stack.length)}`;
}
if (state === printObject) {
let keyString = keyStrings.get(key);
if (keyString === undefined) {
keyStrings.set(key, keyString = encodeString(key) + (space ? ': ' : ':'));
}
buffer += keyString;
}
}
function printEntry(key, value) {
value = utils.replaceValue(stateValue, key, value, replacer);
if (value === null || typeof value !== 'object') {
// primitive
if (state !== printObject || value !== undefined) {
printEntryPrelude(key);
pushPrimitive(value);
}
} else {
// Check for a circular reference: the value is already on the stack of open objects/arrays
if (stack.includes(value)) {
throw new TypeError('Converting circular structure to JSON');
}
printEntryPrelude(key);
stack.push(value);
pushState();
state = Array.isArray(value) ? printArray : printObject;
stateValue = value;
stateEmpty = true;
stateIndex = 0;
}
}
function pushPrimitive(value) {
switch (typeof value) {
case 'string':
buffer += encodeString(value);
break;
case 'number':
buffer += Number.isFinite(value) ? String(value) : 'null';
break;
case 'boolean':
buffer += value ? 'true' : 'false';
break;
case 'undefined':
case 'object': // typeof null === 'object'
buffer += 'null';
break;
default:
throw new TypeError(`Do not know how to serialize a ${value.constructor?.name || typeof value}`);
}
}
function pushState() {
prevState = {
keys: stateKeys,
index: stateIndex,
prev: prevState
};
}
function popState() {
stack.pop();
const value = stack.length > 0 ? stack[stack.length - 1] : rootValue;
// restore state
state = Array.isArray(value) ? printArray : printObject;
stateValue = value;
stateEmpty = false;
stateKeys = prevState.keys;
stateIndex = prevState.index;
// pop state
prevState = prevState.prev;
}
}
exports.stringifyChunked = stringifyChunked;


@@ -0,0 +1,250 @@
'use strict';
const utils = require('./utils.cjs');
const hasOwn = typeof Object.hasOwn === 'function'
? Object.hasOwn
: (object, key) => Object.hasOwnProperty.call(object, key);
// https://tc39.es/ecma262/#table-json-single-character-escapes
const escapableCharCodeSubstitution = { // JSON Single Character Escape Sequences
0x08: '\\b',
0x09: '\\t',
0x0a: '\\n',
0x0c: '\\f',
0x0d: '\\r',
0x22: '\\\"',
0x5c: '\\\\'
};
const charLength2048 = Uint8Array.from({ length: 2048 }, (_, code) => {
if (hasOwn(escapableCharCodeSubstitution, code)) {
return 2; // \X
}
if (code < 0x20) {
return 6; // \uXXXX
}
return code < 128 ? 1 : 2; // UTF8 bytes
});
function isLeadingSurrogate(code) {
return code >= 0xD800 && code <= 0xDBFF;
}
function isTrailingSurrogate(code) {
return code >= 0xDC00 && code <= 0xDFFF;
}
function stringLength(str) {
// Fast path to compute length when a string contains only characters encoded as single bytes
if (!/[^\x20\x21\x23-\x5B\x5D-\x7F]/.test(str)) {
return str.length + 2;
}
let len = 0;
let prevLeadingSurrogate = false;
for (let i = 0; i < str.length; i++) {
const code = str.charCodeAt(i);
if (code < 2048) {
len += charLength2048[code];
} else if (isLeadingSurrogate(code)) {
len += 6; // \uXXXX since no pair with trailing surrogate yet
prevLeadingSurrogate = true;
continue;
} else if (isTrailingSurrogate(code)) {
len = prevLeadingSurrogate
? len - 2 // surrogate pair (4 bytes); the leading surrogate was counted as 6 bytes, so subtract 2
: len + 6; // \uXXXX
} else {
len += 3; // chars with code >= 2048 take 3 bytes in UTF8
}
prevLeadingSurrogate = false;
}
return len + 2; // +2 for quotes
}
// avoid producing a string from a number
function intLength(num) {
let len = 0;
if (num < 0) {
len = 1;
num = -num;
}
if (num >= 1e9) {
len += 9;
num = (num - num % 1e9) / 1e9;
}
if (num >= 1e4) {
if (num >= 1e6) {
return len + (num >= 1e8
? 9
: num >= 1e7 ? 8 : 7
);
}
return len + (num >= 1e5 ? 6 : 5);
}
return len + (num >= 1e2
? num >= 1e3 ? 4 : 3
: num >= 10 ? 2 : 1
);
}
function primitiveLength(value) {
switch (typeof value) {
case 'string':
return stringLength(value);
case 'number':
return Number.isFinite(value)
? Number.isInteger(value)
? intLength(value)
: String(value).length
: 4 /* null */;
case 'boolean':
return value ? 4 /* true */ : 5 /* false */;
case 'undefined':
case 'object':
return 4; /* null */
default:
return 0;
}
}
function stringifyInfo(value, ...args) {
const { replacer, getKeys, ...options } = utils.normalizeStringifyOptions(...args);
const continueOnCircular = Boolean(options.continueOnCircular);
const space = options.space?.length || 0;
const keysLength = new Map();
const visited = new Map();
const circular = new Set();
const stack = [];
const root = { '': value };
let stop = false;
let bytes = 0;
let spaceBytes = 0;
let objects = 0;
walk(root, '', value);
// when value is undefined or was replaced with undefined
if (bytes === 0) {
bytes += 9; // FIXME: that's the length of undefined, should we normalize behaviour to convert it to null?
}
return {
bytes: isNaN(bytes) ? Infinity : bytes + spaceBytes,
spaceBytes: space > 0 && isNaN(bytes) ? Infinity : spaceBytes,
circular: [...circular]
};
function walk(holder, key, value) {
if (stop) {
return;
}
value = utils.replaceValue(holder, key, value, replacer);
if (value === null || typeof value !== 'object') {
// primitive
if (value !== undefined || Array.isArray(holder)) {
bytes += primitiveLength(value);
}
} else {
// check for circular references
if (stack.includes(value)) {
circular.add(value);
bytes += 4; // treat as null
if (!continueOnCircular) {
stop = true;
}
return;
}
// Using 'visited' allows avoiding hang-ups in cases of highly interconnected object graphs;
// for example, a list of git commits with references to parents can lead to N^2 complexity for traversal,
// and N when 'visited' is used
if (visited.has(value)) {
bytes += visited.get(value);
return;
}
objects++;
const prevObjects = objects;
const valueBytes = bytes;
let valueLength = 0;
stack.push(value);
if (Array.isArray(value)) {
// array
valueLength = value.length;
for (let i = 0; i < valueLength; i++) {
walk(value, i, value[i]);
}
} else {
// object
let prevLength = bytes;
for (const key of getKeys(value)) {
walk(value, key, value[key]);
if (prevLength !== bytes) {
let keyLen = keysLength.get(key);
if (keyLen === undefined) {
keysLength.set(key, keyLen = stringLength(key) + 1); // "key":
}
// value is printed
bytes += keyLen;
valueLength++;
prevLength = bytes;
}
}
}
bytes += valueLength === 0
? 2 // {} or []
: 1 + valueLength; // {} or [] + commas
if (space > 0 && valueLength > 0) {
spaceBytes +=
// a space between ":" and a value for each object entry
(Array.isArray(value) ? 0 : valueLength) +
// the formula results from folding the following components:
// - for each key-value or element: indent + newline
//   (1 + stack.length * space) * valueLength
// - indent (one space less) before "}" or "]" + newline
//   (stack.length - 1) * space + 1
(1 + stack.length * space) * (valueLength + 1) - space;
}
stack.pop();
// add to 'visited' only objects that contain nested objects
if (prevObjects !== objects) {
visited.set(value, bytes - valueBytes);
}
}
}
}
exports.stringifyInfo = stringifyInfo;

node_modules/@discoveryjs/json-ext/cjs/utils.cjs generated vendored Normal file

@@ -0,0 +1,108 @@
'use strict';
function isIterable(value) {
return (
typeof value === 'object' &&
value !== null &&
(
typeof value[Symbol.iterator] === 'function' ||
typeof value[Symbol.asyncIterator] === 'function'
)
);
}
function replaceValue(holder, key, value, replacer) {
if (value && typeof value.toJSON === 'function') {
value = value.toJSON();
}
if (replacer !== null) {
value = replacer.call(holder, String(key), value);
}
switch (typeof value) {
case 'function':
case 'symbol':
value = undefined;
break;
case 'object':
if (value !== null) {
const cls = value.constructor;
if (cls === String || cls === Number || cls === Boolean) {
value = value.valueOf();
}
}
break;
}
return value;
}
function normalizeReplacer(replacer) {
if (typeof replacer === 'function') {
return replacer;
}
if (Array.isArray(replacer)) {
const allowlist = new Set(replacer
.map(item => {
const cls = item && item.constructor;
return cls === String || cls === Number ? String(item) : null;
})
.filter(item => typeof item === 'string')
);
return [...allowlist];
}
return null;
}
function normalizeSpace(space) {
if (typeof space === 'number') {
if (!Number.isFinite(space) || space < 1) {
return false;
}
return ' '.repeat(Math.min(space, 10));
}
if (typeof space === 'string') {
return space.slice(0, 10) || false;
}
return false;
}
function normalizeStringifyOptions(optionsOrReplacer, space) {
if (optionsOrReplacer === null || Array.isArray(optionsOrReplacer) || typeof optionsOrReplacer !== 'object') {
optionsOrReplacer = {
replacer: optionsOrReplacer,
space
};
}
let replacer = normalizeReplacer(optionsOrReplacer.replacer);
let getKeys = Object.keys;
if (Array.isArray(replacer)) {
const allowlist = replacer;
getKeys = () => allowlist;
replacer = null;
}
return {
...optionsOrReplacer,
replacer,
getKeys,
space: normalizeSpace(optionsOrReplacer.space)
};
}
exports.isIterable = isIterable;
exports.normalizeReplacer = normalizeReplacer;
exports.normalizeSpace = normalizeSpace;
exports.normalizeStringifyOptions = normalizeStringifyOptions;
exports.replaceValue = replaceValue;

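As an aside on `normalizeSpace()` in `utils.cjs` above: it mirrors `JSON.stringify`'s handling of the `space` argument. A standalone copy for illustration (function name taken from the file above):

```javascript
// Mirrors normalizeSpace() from utils.cjs above: numbers are clamped to
// 1..10 spaces, strings are truncated to 10 chars, anything else (including
// 0, NaN and Infinity) disables indentation — the same rules as the `space`
// argument of JSON.stringify().
function normalizeSpace(space) {
    if (typeof space === 'number') {
        if (!Number.isFinite(space) || space < 1) {
            return false;
        }
        return ' '.repeat(Math.min(space, 10));
    }
    if (typeof space === 'string') {
        return space.slice(0, 10) || false;
    }
    return false;
}

console.log(normalizeSpace(4).length);  // 4
console.log(normalizeSpace(100).length); // 10 (clamped)
console.log(normalizeSpace(0));          // false
console.log(normalizeSpace('\t\t'));     // "\t\t"
```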
60
node_modules/@discoveryjs/json-ext/cjs/web-streams.cjs generated vendored Normal file

@@ -0,0 +1,60 @@
'use strict';
const parseChunked = require('./parse-chunked.cjs');
const stringifyChunked = require('./stringify-chunked.cjs');
const utils = require('./utils.cjs');
/* eslint-env browser */
function parseFromWebStream(stream) {
// 2024/6/17: currently, @@asyncIterator on a ReadableStream is not widely supported,
// so fall back to reading via a reader
// https://caniuse.com/mdn-api_readablestream_--asynciterator
return parseChunked.parseChunked(utils.isIterable(stream) ? stream : async function*() {
const reader = stream.getReader();
try {
while (true) {
const { value, done } = await reader.read();
if (done) {
break;
}
yield value;
}
} finally {
reader.releaseLock();
}
});
}
function createStringifyWebStream(value, replacer, space) {
// 2024/6/17: the ReadableStream.from() static method is supported
// in Node.js 20.6+ and Firefox only
if (typeof ReadableStream.from === 'function') {
return ReadableStream.from(stringifyChunked.stringifyChunked(value, replacer, space));
}
// emulate ReadableStream.from()
return new ReadableStream({
start() {
this.generator = stringifyChunked.stringifyChunked(value, replacer, space);
},
pull(controller) {
const { value, done } = this.generator.next();
if (done) {
controller.close();
} else {
controller.enqueue(value);
}
},
cancel() {
this.generator = null;
}
});
}
exports.createStringifyWebStream = createStringifyWebStream;
exports.parseFromWebStream = parseFromWebStream;

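The fallback branch of `createStringifyWebStream()` above emulates `ReadableStream.from()` with a pull-based underlying source that drains a generator one chunk per pull. A standalone sketch of the same pattern (the helper names here are illustrative, not part of the package; requires a runtime with a global `ReadableStream`, e.g. Node.js 18+):

```javascript
// Pull-based emulation of ReadableStream.from() for a sync generator:
// each pull() advances the generator once and enqueues the chunk.
function streamFromGenerator(generator) {
    return new ReadableStream({
        pull(controller) {
            const { value, done } = generator.next();
            if (done) {
                controller.close();
            } else {
                controller.enqueue(value);
            }
        },
        cancel() {
            // let the generator run its finally blocks on early termination
            generator.return?.();
        }
    });
}

// Drain a stream with a reader, mirroring parseFromWebStream() above
async function collect(stream) {
    const reader = stream.getReader();
    let result = '';
    try {
        while (true) {
            const { value, done } = await reader.read();
            if (done) {
                break;
            }
            result += value;
        }
    } finally {
        reader.releaseLock();
    }
    return result;
}

function* chunks() { yield '{"a":'; yield '1}'; }
collect(streamFromGenerator(chunks())).then(s => console.log(s)); // {"a":1}
```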
705
node_modules/@discoveryjs/json-ext/dist/json-ext.js generated vendored Normal file

@@ -0,0 +1,705 @@
(function (global, factory) {
typeof exports === 'object' && typeof module !== 'undefined' ? module.exports = factory() :
typeof define === 'function' && define.amd ? define(factory) :
(global.jsonExt = factory());
}(typeof globalThis != 'undefined' ? globalThis : typeof window != 'undefined' ? window : typeof global != 'undefined' ? global : typeof self != 'undefined' ? self : this, (function () {
var exports = (() => {
var __defProp = Object.defineProperty;
var __getOwnPropDesc = Object.getOwnPropertyDescriptor;
var __getOwnPropNames = Object.getOwnPropertyNames;
var __hasOwnProp = Object.prototype.hasOwnProperty;
var __export = (target, all) => {
for (var name in all)
__defProp(target, name, { get: all[name], enumerable: true });
};
var __copyProps = (to, from, except, desc) => {
if (from && typeof from === "object" || typeof from === "function") {
for (let key of __getOwnPropNames(from))
if (!__hasOwnProp.call(to, key) && key !== except)
__defProp(to, key, { get: () => from[key], enumerable: !(desc = __getOwnPropDesc(from, key)) || desc.enumerable });
}
return to;
};
var __toCommonJS = (mod) => __copyProps(__defProp({}, "__esModule", { value: true }), mod);
// src/index.js
var src_exports = {};
__export(src_exports, {
createStringifyWebStream: () => createStringifyWebStream,
parseChunked: () => parseChunked,
parseFromWebStream: () => parseFromWebStream,
stringifyChunked: () => stringifyChunked,
stringifyInfo: () => stringifyInfo
});
// src/utils.js
function isIterable(value) {
return typeof value === "object" && value !== null && (typeof value[Symbol.iterator] === "function" || typeof value[Symbol.asyncIterator] === "function");
}
function replaceValue(holder, key, value, replacer) {
if (value && typeof value.toJSON === "function") {
value = value.toJSON();
}
if (replacer !== null) {
value = replacer.call(holder, String(key), value);
}
switch (typeof value) {
case "function":
case "symbol":
value = void 0;
break;
case "object":
if (value !== null) {
const cls = value.constructor;
if (cls === String || cls === Number || cls === Boolean) {
value = value.valueOf();
}
}
break;
}
return value;
}
function normalizeReplacer(replacer) {
if (typeof replacer === "function") {
return replacer;
}
if (Array.isArray(replacer)) {
const allowlist = new Set(
replacer.map((item) => {
const cls = item && item.constructor;
return cls === String || cls === Number ? String(item) : null;
}).filter((item) => typeof item === "string")
);
return [...allowlist];
}
return null;
}
function normalizeSpace(space) {
if (typeof space === "number") {
if (!Number.isFinite(space) || space < 1) {
return false;
}
return " ".repeat(Math.min(space, 10));
}
if (typeof space === "string") {
return space.slice(0, 10) || false;
}
return false;
}
function normalizeStringifyOptions(optionsOrReplacer, space) {
if (optionsOrReplacer === null || Array.isArray(optionsOrReplacer) || typeof optionsOrReplacer !== "object") {
optionsOrReplacer = {
replacer: optionsOrReplacer,
space
};
}
let replacer = normalizeReplacer(optionsOrReplacer.replacer);
let getKeys = Object.keys;
if (Array.isArray(replacer)) {
const allowlist = replacer;
getKeys = () => allowlist;
replacer = null;
}
return {
...optionsOrReplacer,
replacer,
getKeys,
space: normalizeSpace(optionsOrReplacer.space)
};
}
// src/parse-chunked.js
var STACK_OBJECT = 1;
var STACK_ARRAY = 2;
var decoder = new TextDecoder();
function adjustPosition(error, parser) {
if (error.name === "SyntaxError" && parser.jsonParseOffset) {
error.message = error.message.replace(
/at position (\d+)/,
(_, pos) => "at position " + (Number(pos) + parser.jsonParseOffset)
);
}
return error;
}
function append(array, elements) {
const initialLength = array.length;
array.length += elements.length;
for (let i = 0; i < elements.length; i++) {
array[initialLength + i] = elements[i];
}
}
async function parseChunked(chunkEmitter) {
const iterable = typeof chunkEmitter === "function" ? chunkEmitter() : chunkEmitter;
if (isIterable(iterable)) {
let parser = new ChunkParser();
try {
for await (const chunk of iterable) {
if (typeof chunk !== "string" && !ArrayBuffer.isView(chunk)) {
throw new TypeError("Invalid chunk: Expected string, TypedArray or Buffer");
}
parser.push(chunk);
}
return parser.finish();
} catch (e) {
throw adjustPosition(e, parser);
}
}
throw new TypeError(
"Invalid chunk emitter: Expected an Iterable, AsyncIterable, generator, async generator, or a function returning an Iterable or AsyncIterable"
);
}
var ChunkParser = class {
constructor() {
this.value = void 0;
this.valueStack = null;
this.stack = new Array(100);
this.lastFlushDepth = 0;
this.flushDepth = 0;
this.stateString = false;
this.stateStringEscape = false;
this.pendingByteSeq = null;
this.pendingChunk = null;
this.chunkOffset = 0;
this.jsonParseOffset = 0;
}
parseAndAppend(fragment, wrap) {
if (this.stack[this.lastFlushDepth - 1] === STACK_OBJECT) {
if (wrap) {
this.jsonParseOffset--;
fragment = "{" + fragment + "}";
}
Object.assign(this.valueStack.value, JSON.parse(fragment));
} else {
if (wrap) {
this.jsonParseOffset--;
fragment = "[" + fragment + "]";
}
append(this.valueStack.value, JSON.parse(fragment));
}
}
prepareAddition(fragment) {
const { value } = this.valueStack;
const expectComma = Array.isArray(value) ? value.length !== 0 : Object.keys(value).length !== 0;
if (expectComma) {
if (fragment[0] === ",") {
this.jsonParseOffset++;
return fragment.slice(1);
}
if (fragment[0] !== "}" && fragment[0] !== "]") {
this.jsonParseOffset -= 3;
return "[[]" + fragment;
}
}
return fragment;
}
flush(chunk, start, end) {
let fragment = chunk.slice(start, end);
this.jsonParseOffset = this.chunkOffset + start;
if (this.pendingChunk !== null) {
fragment = this.pendingChunk + fragment;
this.jsonParseOffset -= this.pendingChunk.length;
this.pendingChunk = null;
}
if (this.flushDepth === this.lastFlushDepth) {
if (this.flushDepth > 0) {
this.parseAndAppend(this.prepareAddition(fragment), true);
} else {
this.value = JSON.parse(fragment);
this.valueStack = {
value: this.value,
prev: null
};
}
} else if (this.flushDepth > this.lastFlushDepth) {
for (let i = this.flushDepth - 1; i >= this.lastFlushDepth; i--) {
fragment += this.stack[i] === STACK_OBJECT ? "}" : "]";
}
if (this.lastFlushDepth === 0) {
this.value = JSON.parse(fragment);
this.valueStack = {
value: this.value,
prev: null
};
} else {
this.parseAndAppend(this.prepareAddition(fragment), true);
}
for (let i = this.lastFlushDepth || 1; i < this.flushDepth; i++) {
let value = this.valueStack.value;
if (this.stack[i - 1] === STACK_OBJECT) {
let key;
for (key in value) ;
value = value[key];
} else {
value = value[value.length - 1];
}
this.valueStack = {
value,
prev: this.valueStack
};
}
} else {
fragment = this.prepareAddition(fragment);
for (let i = this.lastFlushDepth - 1; i >= this.flushDepth; i--) {
this.jsonParseOffset--;
fragment = (this.stack[i] === STACK_OBJECT ? "{" : "[") + fragment;
}
this.parseAndAppend(fragment, false);
for (let i = this.lastFlushDepth - 1; i >= this.flushDepth; i--) {
this.valueStack = this.valueStack.prev;
}
}
this.lastFlushDepth = this.flushDepth;
}
push(chunk) {
if (typeof chunk !== "string") {
if (this.pendingByteSeq !== null) {
const origRawChunk = chunk;
chunk = new Uint8Array(this.pendingByteSeq.length + origRawChunk.length);
chunk.set(this.pendingByteSeq);
chunk.set(origRawChunk, this.pendingByteSeq.length);
this.pendingByteSeq = null;
}
if (chunk[chunk.length - 1] > 127) {
for (let seqLength = 0; seqLength < chunk.length; seqLength++) {
const byte = chunk[chunk.length - 1 - seqLength];
if (byte >> 6 === 3) {
seqLength++;
if (seqLength !== 4 && byte >> 3 === 30 || seqLength !== 3 && byte >> 4 === 14 || seqLength !== 2 && byte >> 5 === 6) {
this.pendingByteSeq = chunk.slice(chunk.length - seqLength);
chunk = chunk.slice(0, -seqLength);
}
break;
}
}
}
chunk = decoder.decode(chunk);
}
const chunkLength = chunk.length;
let lastFlushPoint = 0;
let flushPoint = 0;
scan: for (let i = 0; i < chunkLength; i++) {
if (this.stateString) {
for (; i < chunkLength; i++) {
if (this.stateStringEscape) {
this.stateStringEscape = false;
} else {
switch (chunk.charCodeAt(i)) {
case 34:
this.stateString = false;
continue scan;
case 92:
this.stateStringEscape = true;
}
}
}
break;
}
switch (chunk.charCodeAt(i)) {
case 34:
this.stateString = true;
this.stateStringEscape = false;
break;
case 44:
flushPoint = i;
break;
case 123:
flushPoint = i + 1;
this.stack[this.flushDepth++] = STACK_OBJECT;
break;
case 91:
flushPoint = i + 1;
this.stack[this.flushDepth++] = STACK_ARRAY;
break;
case 93:
/* ] */
case 125:
flushPoint = i + 1;
this.flushDepth--;
if (this.flushDepth < this.lastFlushDepth) {
this.flush(chunk, lastFlushPoint, flushPoint);
lastFlushPoint = flushPoint;
}
break;
case 9:
/* \t */
case 10:
/* \n */
case 13:
/* \r */
case 32:
if (lastFlushPoint === i) {
lastFlushPoint++;
}
if (flushPoint === i) {
flushPoint++;
}
break;
}
}
if (flushPoint > lastFlushPoint) {
this.flush(chunk, lastFlushPoint, flushPoint);
}
if (flushPoint < chunkLength) {
if (this.pendingChunk !== null) {
this.pendingChunk += chunk;
} else {
this.pendingChunk = chunk.slice(flushPoint, chunkLength);
}
}
this.chunkOffset += chunkLength;
}
finish() {
if (this.pendingChunk !== null) {
this.flush("", 0, 0);
this.pendingChunk = null;
}
return this.value;
}
};
// src/stringify-chunked.js
function encodeString(value) {
if (/[^\x20\x21\x23-\x5B\x5D-\uD799]/.test(value)) {
return JSON.stringify(value);
}
return '"' + value + '"';
}
function* stringifyChunked(value, ...args) {
const { replacer, getKeys, space, ...options } = normalizeStringifyOptions(...args);
const highWaterMark = Number(options.highWaterMark) || 16384;
const keyStrings = /* @__PURE__ */ new Map();
const stack = [];
const rootValue = { "": value };
let prevState = null;
let state = () => printEntry("", value);
let stateValue = rootValue;
let stateEmpty = true;
let stateKeys = [""];
let stateIndex = 0;
let buffer = "";
while (true) {
state();
if (buffer.length >= highWaterMark || prevState === null) {
yield buffer;
buffer = "";
if (prevState === null) {
break;
}
}
}
function printObject() {
if (stateIndex === 0) {
stateKeys = getKeys(stateValue);
buffer += "{";
}
if (stateIndex === stateKeys.length) {
buffer += space && !stateEmpty ? `
${space.repeat(stack.length - 1)}}` : "}";
popState();
return;
}
const key = stateKeys[stateIndex++];
printEntry(key, stateValue[key]);
}
function printArray() {
if (stateIndex === 0) {
buffer += "[";
}
if (stateIndex === stateValue.length) {
buffer += space && !stateEmpty ? `
${space.repeat(stack.length - 1)}]` : "]";
popState();
return;
}
printEntry(stateIndex, stateValue[stateIndex++]);
}
function printEntryPrelude(key) {
if (stateEmpty) {
stateEmpty = false;
} else {
buffer += ",";
}
if (space && prevState !== null) {
buffer += `
${space.repeat(stack.length)}`;
}
if (state === printObject) {
let keyString = keyStrings.get(key);
if (keyString === void 0) {
keyStrings.set(key, keyString = encodeString(key) + (space ? ": " : ":"));
}
buffer += keyString;
}
}
function printEntry(key, value2) {
value2 = replaceValue(stateValue, key, value2, replacer);
if (value2 === null || typeof value2 !== "object") {
if (state !== printObject || value2 !== void 0) {
printEntryPrelude(key);
pushPrimitive(value2);
}
} else {
if (stack.includes(value2)) {
throw new TypeError("Converting circular structure to JSON");
}
printEntryPrelude(key);
stack.push(value2);
pushState();
state = Array.isArray(value2) ? printArray : printObject;
stateValue = value2;
stateEmpty = true;
stateIndex = 0;
}
}
function pushPrimitive(value2) {
switch (typeof value2) {
case "string":
buffer += encodeString(value2);
break;
case "number":
buffer += Number.isFinite(value2) ? String(value2) : "null";
break;
case "boolean":
buffer += value2 ? "true" : "false";
break;
case "undefined":
case "object":
buffer += "null";
break;
default:
throw new TypeError(`Do not know how to serialize a ${value2.constructor?.name || typeof value2}`);
}
}
function pushState() {
prevState = {
keys: stateKeys,
index: stateIndex,
prev: prevState
};
}
function popState() {
stack.pop();
const value2 = stack.length > 0 ? stack[stack.length - 1] : rootValue;
state = Array.isArray(value2) ? printArray : printObject;
stateValue = value2;
stateEmpty = false;
stateKeys = prevState.keys;
stateIndex = prevState.index;
prevState = prevState.prev;
}
}
// src/stringify-info.js
var hasOwn = typeof Object.hasOwn === "function" ? Object.hasOwn : (object, key) => Object.hasOwnProperty.call(object, key);
var escapableCharCodeSubstitution = {
// JSON Single Character Escape Sequences
8: "\\b",
9: "\\t",
10: "\\n",
12: "\\f",
13: "\\r",
34: '\\"',
92: "\\\\"
};
var charLength2048 = Uint8Array.from({ length: 2048 }, (_, code) => {
if (hasOwn(escapableCharCodeSubstitution, code)) {
return 2;
}
if (code < 32) {
return 6;
}
return code < 128 ? 1 : 2;
});
function isLeadingSurrogate(code) {
return code >= 55296 && code <= 56319;
}
function isTrailingSurrogate(code) {
return code >= 56320 && code <= 57343;
}
function stringLength(str) {
if (!/[^\x20\x21\x23-\x5B\x5D-\x7F]/.test(str)) {
return str.length + 2;
}
let len = 0;
let prevLeadingSurrogate = false;
for (let i = 0; i < str.length; i++) {
const code = str.charCodeAt(i);
if (code < 2048) {
len += charLength2048[code];
} else if (isLeadingSurrogate(code)) {
len += 6;
prevLeadingSurrogate = true;
continue;
} else if (isTrailingSurrogate(code)) {
len = prevLeadingSurrogate ? len - 2 : len + 6;
} else {
len += 3;
}
prevLeadingSurrogate = false;
}
return len + 2;
}
function intLength(num) {
let len = 0;
if (num < 0) {
len = 1;
num = -num;
}
if (num >= 1e9) {
len += 9;
num = (num - num % 1e9) / 1e9;
}
if (num >= 1e4) {
if (num >= 1e6) {
return len + (num >= 1e8 ? 9 : num >= 1e7 ? 8 : 7);
}
return len + (num >= 1e5 ? 6 : 5);
}
return len + (num >= 100 ? num >= 1e3 ? 4 : 3 : num >= 10 ? 2 : 1);
}
function primitiveLength(value) {
switch (typeof value) {
case "string":
return stringLength(value);
case "number":
return Number.isFinite(value) ? Number.isInteger(value) ? intLength(value) : String(value).length : 4;
case "boolean":
return value ? 4 : 5;
case "undefined":
case "object":
return 4;
/* null */
default:
return 0;
}
}
function stringifyInfo(value, ...args) {
const { replacer, getKeys, ...options } = normalizeStringifyOptions(...args);
const continueOnCircular = Boolean(options.continueOnCircular);
const space = options.space?.length || 0;
const keysLength = /* @__PURE__ */ new Map();
const visited = /* @__PURE__ */ new Map();
const circular = /* @__PURE__ */ new Set();
const stack = [];
const root = { "": value };
let stop = false;
let bytes = 0;
let spaceBytes = 0;
let objects = 0;
walk(root, "", value);
if (bytes === 0) {
bytes += 9;
}
return {
bytes: isNaN(bytes) ? Infinity : bytes + spaceBytes,
spaceBytes: space > 0 && isNaN(bytes) ? Infinity : spaceBytes,
circular: [...circular]
};
function walk(holder, key, value2) {
if (stop) {
return;
}
value2 = replaceValue(holder, key, value2, replacer);
if (value2 === null || typeof value2 !== "object") {
if (value2 !== void 0 || Array.isArray(holder)) {
bytes += primitiveLength(value2);
}
} else {
if (stack.includes(value2)) {
circular.add(value2);
bytes += 4;
if (!continueOnCircular) {
stop = true;
}
return;
}
if (visited.has(value2)) {
bytes += visited.get(value2);
return;
}
objects++;
const prevObjects = objects;
const valueBytes = bytes;
let valueLength = 0;
stack.push(value2);
if (Array.isArray(value2)) {
valueLength = value2.length;
for (let i = 0; i < valueLength; i++) {
walk(value2, i, value2[i]);
}
} else {
let prevLength = bytes;
for (const key2 of getKeys(value2)) {
walk(value2, key2, value2[key2]);
if (prevLength !== bytes) {
let keyLen = keysLength.get(key2);
if (keyLen === void 0) {
keysLength.set(key2, keyLen = stringLength(key2) + 1);
}
bytes += keyLen;
valueLength++;
prevLength = bytes;
}
}
}
bytes += valueLength === 0 ? 2 : 1 + valueLength;
if (space > 0 && valueLength > 0) {
spaceBytes += // a space between ":" and a value for each object entry
(Array.isArray(value2) ? 0 : valueLength) + // the formula results from folding the following components:
// - for each key-value or element: ident + newline
// (1 + stack.length * space) * valueLength
// - ident (one space less) before "}" or "]" + newline
// (stack.length - 1) * space + 1
(1 + stack.length * space) * (valueLength + 1) - space;
}
stack.pop();
if (prevObjects !== objects) {
visited.set(value2, bytes - valueBytes);
}
}
}
}
// src/web-streams.js
function parseFromWebStream(stream) {
return parseChunked(isIterable(stream) ? stream : async function* () {
const reader = stream.getReader();
try {
while (true) {
const { value, done } = await reader.read();
if (done) {
break;
}
yield value;
}
} finally {
reader.releaseLock();
}
});
}
function createStringifyWebStream(value, replacer, space) {
if (typeof ReadableStream.from === "function") {
return ReadableStream.from(stringifyChunked(value, replacer, space));
}
return new ReadableStream({
start() {
this.generator = stringifyChunked(value, replacer, space);
},
pull(controller) {
const { value: value2, done } = this.generator.next();
if (done) {
controller.close();
} else {
controller.enqueue(value2);
}
},
cancel() {
this.generator = null;
}
});
}
return __toCommonJS(src_exports);
})();
return exports;
})));

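The bundle's `intLength()` above computes the serialized length of an integer without allocating a string, which `stringifyInfo()` uses to predict output size cheaply. A standalone copy of that function for illustration:

```javascript
// Copy of intLength() from the bundle above: counts decimal digits
// (plus one for a minus sign) by range checks instead of String(num).length.
function intLength(num) {
    let len = 0;
    if (num < 0) {
        len = 1; // minus sign
        num = -num;
    }
    if (num >= 1e9) {
        len += 9;
        num = (num - num % 1e9) / 1e9; // strip the lowest 9 digits
    }
    if (num >= 1e4) {
        if (num >= 1e6) {
            return len + (num >= 1e8 ? 9 : num >= 1e7 ? 8 : 7);
        }
        return len + (num >= 1e5 ? 6 : 5);
    }
    return len + (num >= 100 ? (num >= 1e3 ? 4 : 3) : (num >= 10 ? 2 : 1));
}

console.log(intLength(0));             // 1
console.log(intLength(-12345));        // 6
console.log(intLength(1234567890123)); // 13
```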
File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

3
node_modules/@discoveryjs/json-ext/dist/package.json generated vendored Normal file

@@ -0,0 +1,3 @@
{
"type": "commonjs"
}

37
node_modules/@discoveryjs/json-ext/index.d.ts generated vendored Normal file

@@ -0,0 +1,37 @@
declare module '@discoveryjs/json-ext' {
type Chunk = string | Uint8Array | Buffer;
type Replacer =
| ((this: any, key: string, value: any) => any)
| (string | number)[]
| null;
type Space = string | number | null;
type StringifyOptions = {
replacer?: Replacer;
space?: Space;
highWaterMark?: number;
};
type StringifyInfoOptions = {
replacer?: Replacer;
space?: Space;
continueOnCircular?: boolean;
}
type StringifyInfoResult = {
bytes: number;
spaceBytes: number;
circular: object[];
};
export function parseChunked(input: Iterable<Chunk> | AsyncIterable<Chunk>): Promise<any>;
export function parseChunked(input: () => (Iterable<Chunk> | AsyncIterable<Chunk>)): Promise<any>;
export function stringifyChunked(value: any, replacer?: Replacer, space?: Space): Generator<string>;
export function stringifyChunked(value: any, options: StringifyOptions): Generator<string>;
export function stringifyInfo(value: any, replacer?: Replacer, space?: Space): StringifyInfoResult;
export function stringifyInfo(value: any, options?: StringifyInfoOptions): StringifyInfoResult;
// Web streams
export function parseFromWebStream(stream: ReadableStream<Chunk>): Promise<any>;
export function createStringifyWebStream(value: any, replacer?: Replacer, space?: Space): ReadableStream<string>;
export function createStringifyWebStream(value: any, options: StringifyOptions): ReadableStream<string>;
}

68
node_modules/@discoveryjs/json-ext/package.json generated vendored Normal file

@@ -0,0 +1,68 @@
{
"name": "@discoveryjs/json-ext",
"version": "0.6.3",
"description": "A set of utilities that extend the use of JSON",
"keywords": [
"json",
"utils",
"stream",
"async",
"promise",
"stringify",
"info"
],
"author": "Roman Dvornov <rdvornov@gmail.com> (https://github.com/lahmatiy)",
"license": "MIT",
"repository": {
"type": "git",
"url": "git+https://github.com/discoveryjs/json-ext.git"
},
"engines": {
"node": ">=14.17.0"
},
"type": "module",
"main": "./cjs/index.cjs",
"module": "./src/index.js",
"types": "./index.d.ts",
"exports": {
".": {
"types": "./index.d.ts",
"require": "./cjs/index.cjs",
"import": "./src/index.js"
},
"./dist/*": "./dist/*",
"./package.json": "./package.json"
},
"scripts": {
"test": "npm run test:src",
"lint": "eslint src",
"lint-and-test": "npm run lint && npm test",
"bundle": "node scripts/bundle.js",
"transpile": "node scripts/transpile.cjs",
"test:all": "npm run test:src && npm run test:cjs && npm run test:dist && npm run test:e2e",
"test:src": "mocha --reporter progress src/*.test.js",
"test:cjs": "mocha --reporter progress cjs/*.test.cjs",
"test:e2e": "mocha --reporter progress test-e2e",
"test:dist": "mocha --reporter progress dist/test",
"test:deno": "node scripts/deno-adapt-test.js && mocha --reporter progress deno-tests/*.test.js",
"bundle-and-test": "npm run bundle && npm run test:dist",
"coverage": "c8 --reporter=lcovonly npm test",
"prepublishOnly": "npm run lint && npm run bundle && npm run transpile && npm run test:all"
},
"devDependencies": {
"c8": "^7.10.0",
"chalk": "^4.1.0",
"esbuild": "^0.24.0",
"eslint": "^8.57.0",
"mocha": "^9.2.2",
"rollup": "^2.79.2"
},
"files": [
"cjs",
"!cjs/*{.test,-cases}.cjs",
"dist",
"src",
"!src/*{.test,-cases}.js",
"index.d.ts"
]
}

4
node_modules/@discoveryjs/json-ext/src/index.js generated vendored Normal file

@@ -0,0 +1,4 @@
export { parseChunked } from './parse-chunked.js';
export { stringifyChunked } from './stringify-chunked.js';
export { stringifyInfo } from './stringify-info.js';
export { createStringifyWebStream, parseFromWebStream } from './web-streams.js';

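The entry point above exports `stringifyChunked()`, a sync generator that yields JSON as string chunks, flushing whenever an internal buffer crosses a high-water mark (16kB by default). The sketch below illustrates only that output contract with a toy generator; unlike the real implementation, it materializes the whole string first rather than building it incrementally, and the function name is made up for this example:

```javascript
// Toy illustration of the chunked-output contract: yield JSON as a series
// of string chunks no longer than highWaterMark, whose concatenation is
// exactly JSON.stringify(value). (The real stringifyChunked never builds
// the full string up front.)
function* stringifyInChunks(value, highWaterMark = 8) {
    const json = JSON.stringify(value);
    for (let i = 0; i < json.length; i += highWaterMark) {
        yield json.slice(i, i + highWaterMark);
    }
}

const parts = [...stringifyInChunks({ a: 1, b: [2, 3] })];
console.log(parts.join(''));                  // {"a":1,"b":[2,3]}
console.log(parts.every(p => p.length <= 8)); // true
```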
352
node_modules/@discoveryjs/json-ext/src/parse-chunked.js generated vendored Normal file

@@ -0,0 +1,352 @@
import { isIterable } from './utils.js';
const STACK_OBJECT = 1;
const STACK_ARRAY = 2;
const decoder = new TextDecoder();
function adjustPosition(error, parser) {
if (error.name === 'SyntaxError' && parser.jsonParseOffset) {
error.message = error.message.replace(/at position (\d+)/, (_, pos) =>
'at position ' + (Number(pos) + parser.jsonParseOffset)
);
}
return error;
}
function append(array, elements) {
// Note: Avoid array.push(...elements) since it may lead to
// "RangeError: Maximum call stack size exceeded" for long arrays
const initialLength = array.length;
array.length += elements.length;
for (let i = 0; i < elements.length; i++) {
array[initialLength + i] = elements[i];
}
}
export async function parseChunked(chunkEmitter) {
const iterable = typeof chunkEmitter === 'function'
? chunkEmitter()
: chunkEmitter;
if (isIterable(iterable)) {
let parser = new ChunkParser();
try {
for await (const chunk of iterable) {
if (typeof chunk !== 'string' && !ArrayBuffer.isView(chunk)) {
throw new TypeError('Invalid chunk: Expected string, TypedArray or Buffer');
}
parser.push(chunk);
}
return parser.finish();
} catch (e) {
throw adjustPosition(e, parser);
}
}
throw new TypeError(
'Invalid chunk emitter: Expected an Iterable, AsyncIterable, generator, ' +
'async generator, or a function returning an Iterable or AsyncIterable'
);
}
class ChunkParser {
constructor() {
this.value = undefined;
this.valueStack = null;
this.stack = new Array(100);
this.lastFlushDepth = 0;
this.flushDepth = 0;
this.stateString = false;
this.stateStringEscape = false;
this.pendingByteSeq = null;
this.pendingChunk = null;
this.chunkOffset = 0;
this.jsonParseOffset = 0;
}
parseAndAppend(fragment, wrap) {
// Append new entries or elements
if (this.stack[this.lastFlushDepth - 1] === STACK_OBJECT) {
if (wrap) {
this.jsonParseOffset--;
fragment = '{' + fragment + '}';
}
Object.assign(this.valueStack.value, JSON.parse(fragment));
} else {
if (wrap) {
this.jsonParseOffset--;
fragment = '[' + fragment + ']';
}
append(this.valueStack.value, JSON.parse(fragment));
}
}
prepareAddition(fragment) {
const { value } = this.valueStack;
const expectComma = Array.isArray(value)
? value.length !== 0
: Object.keys(value).length !== 0;
if (expectComma) {
// Skip a comma at the beginning of fragment, otherwise it would
// fail to parse
if (fragment[0] === ',') {
this.jsonParseOffset++;
return fragment.slice(1);
}
// When the value (an object or array) is not empty and the fragment
// doesn't start with a comma, the only valid fragment start is
// a closing bracket. If it's not, a prefix is added to make
// parsing fail. Otherwise, a sequence of chunks could parse
// successfully although it should not, e.g. ["[{}", "{}]"]
if (fragment[0] !== '}' && fragment[0] !== ']') {
this.jsonParseOffset -= 3;
return '[[]' + fragment;
}
}
return fragment;
}
flush(chunk, start, end) {
let fragment = chunk.slice(start, end);
// Save the position used to correct an error position reported by JSON.parse(), if any
this.jsonParseOffset = this.chunkOffset + start;
// Prepend pending chunk if any
if (this.pendingChunk !== null) {
fragment = this.pendingChunk + fragment;
this.jsonParseOffset -= this.pendingChunk.length;
this.pendingChunk = null;
}
if (this.flushDepth === this.lastFlushDepth) {
// Depth didn't change, so it's a root value or a set of entries/elements
if (this.flushDepth > 0) {
this.parseAndAppend(this.prepareAddition(fragment), true);
} else {
// That's an entire value on a top level
this.value = JSON.parse(fragment);
this.valueStack = {
value: this.value,
prev: null
};
}
} else if (this.flushDepth > this.lastFlushDepth) {
// Add missed closing brackets/parentheses
for (let i = this.flushDepth - 1; i >= this.lastFlushDepth; i--) {
fragment += this.stack[i] === STACK_OBJECT ? '}' : ']';
}
if (this.lastFlushDepth === 0) {
// That's a root value
this.value = JSON.parse(fragment);
this.valueStack = {
value: this.value,
prev: null
};
} else {
this.parseAndAppend(this.prepareAddition(fragment), true);
}
// Move down the stack to the last object/array, which is now the current one
for (let i = this.lastFlushDepth || 1; i < this.flushDepth; i++) {
let value = this.valueStack.value;
if (this.stack[i - 1] === STACK_OBJECT) {
// find last entry
let key;
// eslint-disable-next-line curly
for (key in value);
value = value[key];
} else {
// last element
value = value[value.length - 1];
}
this.valueStack = {
value,
prev: this.valueStack
};
}
} else /* this.flushDepth < this.lastFlushDepth */ {
fragment = this.prepareAddition(fragment);
// Add missed opening brackets/parentheses
for (let i = this.lastFlushDepth - 1; i >= this.flushDepth; i--) {
this.jsonParseOffset--;
fragment = (this.stack[i] === STACK_OBJECT ? '{' : '[') + fragment;
}
this.parseAndAppend(fragment, false);
for (let i = this.lastFlushDepth - 1; i >= this.flushDepth; i--) {
this.valueStack = this.valueStack.prev;
}
}
this.lastFlushDepth = this.flushDepth;
}
push(chunk) {
if (typeof chunk !== 'string') {
// Suppose chunk is Buffer or Uint8Array
// Prepend uncompleted byte sequence if any
if (this.pendingByteSeq !== null) {
const origRawChunk = chunk;
chunk = new Uint8Array(this.pendingByteSeq.length + origRawChunk.length);
chunk.set(this.pendingByteSeq);
chunk.set(origRawChunk, this.pendingByteSeq.length);
this.pendingByteSeq = null;
}
// For a Buffer/Uint8Array, the input is UTF-8 encoded.
// Look for an incomplete UTF-8 sequence at the end of the chunk;
// this only matters when more chunks are expected and the last byte belongs to a multi-byte character
if (chunk[chunk.length - 1] > 127) {
for (let seqLength = 0; seqLength < chunk.length; seqLength++) {
const byte = chunk[chunk.length - 1 - seqLength];
// 10xxxxxx - 2nd, 3rd or 4th byte
// 110xxxxx first byte of 2-byte sequence
// 1110xxxx - first byte of 3-byte sequence
// 11110xxx - first byte of 4-byte sequence
if (byte >> 6 === 3) {
seqLength++;
// If the sequence is really incomplete, then preserve it
// for the next chunk and cut it off from the current chunk
if ((seqLength !== 4 && byte >> 3 === 0b11110) ||
(seqLength !== 3 && byte >> 4 === 0b1110) ||
(seqLength !== 2 && byte >> 5 === 0b110)) {
this.pendingByteSeq = chunk.slice(chunk.length - seqLength);
chunk = chunk.slice(0, -seqLength);
}
break;
}
}
}
// Convert the chunk to a string, since a single decode per chunk
// is much more efficient than decoding multiple small substrings
chunk = decoder.decode(chunk);
}
const chunkLength = chunk.length;
let lastFlushPoint = 0;
let flushPoint = 0;
// Main scan loop
scan: for (let i = 0; i < chunkLength; i++) {
if (this.stateString) {
for (; i < chunkLength; i++) {
if (this.stateStringEscape) {
this.stateStringEscape = false;
} else {
switch (chunk.charCodeAt(i)) {
case 0x22: /* " */
this.stateString = false;
continue scan;
case 0x5C: /* \ */
this.stateStringEscape = true;
}
}
}
break;
}
switch (chunk.charCodeAt(i)) {
case 0x22: /* " */
this.stateString = true;
this.stateStringEscape = false;
break;
case 0x2C: /* , */
flushPoint = i;
break;
case 0x7B: /* { */
// Open an object
flushPoint = i + 1;
this.stack[this.flushDepth++] = STACK_OBJECT;
break;
case 0x5B: /* [ */
// Open an array
flushPoint = i + 1;
this.stack[this.flushDepth++] = STACK_ARRAY;
break;
case 0x5D: /* ] */
case 0x7D: /* } */
// Close an object or array
flushPoint = i + 1;
this.flushDepth--;
if (this.flushDepth < this.lastFlushDepth) {
this.flush(chunk, lastFlushPoint, flushPoint);
lastFlushPoint = flushPoint;
}
break;
case 0x09: /* \t */
case 0x0A: /* \n */
case 0x0D: /* \r */
case 0x20: /* space */
// Move the points forward when they point at the current position and it's a whitespace
if (lastFlushPoint === i) {
lastFlushPoint++;
}
if (flushPoint === i) {
flushPoint++;
}
break;
}
}
if (flushPoint > lastFlushPoint) {
this.flush(chunk, lastFlushPoint, flushPoint);
}
// Produce pendingChunk if something left
if (flushPoint < chunkLength) {
if (this.pendingChunk !== null) {
// When there is already a pending chunk, no flush happened,
// so append the entire chunk to the pending one
this.pendingChunk += chunk;
} else {
// Create a pending chunk, it will start with non-whitespace since
// flushPoint was moved forward away from whitespaces on scan
this.pendingChunk = chunk.slice(flushPoint, chunkLength);
}
}
this.chunkOffset += chunkLength;
}
finish() {
if (this.pendingChunk !== null) {
this.flush('', 0, 0);
this.pendingChunk = null;
}
return this.value;
}
}

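The byte-sequence handling in `ChunkParser.push()` above scans backwards from the end of a binary chunk for a UTF-8 lead byte and, if the multi-byte sequence it starts is cut short, carries the tail over to the next chunk. A standalone sketch of that check (the function and field names here are invented for illustration):

```javascript
// Detect an incomplete UTF-8 sequence at the end of a chunk, mirroring the
// logic in ChunkParser.push() above: scan backwards for a lead byte
// (11xxxxxx) and compare the sequence length it announces against the
// number of bytes actually present.
function splitPendingUtf8(chunk) { // chunk: Uint8Array
    if (chunk.length === 0 || chunk[chunk.length - 1] < 128) {
        return { complete: chunk, pending: null };
    }
    for (let seqLength = 0; seqLength < chunk.length; seqLength++) {
        const byte = chunk[chunk.length - 1 - seqLength];
        if (byte >> 6 === 0b11) { // lead byte of a multi-byte sequence
            seqLength++;
            const incomplete =
                (seqLength !== 4 && byte >> 3 === 0b11110) || // 4-byte lead
                (seqLength !== 3 && byte >> 4 === 0b1110) ||  // 3-byte lead
                (seqLength !== 2 && byte >> 5 === 0b110);     // 2-byte lead
            if (incomplete) {
                return {
                    complete: chunk.slice(0, chunk.length - seqLength),
                    pending: chunk.slice(chunk.length - seqLength)
                };
            }
            break;
        }
    }
    return { complete: chunk, pending: null };
}

// "é" is 0xC3 0xA9; cutting after the lead byte leaves one pending byte
const bytes = new TextEncoder().encode('"é"').slice(0, 2); // [0x22, 0xC3]
const { complete, pending } = splitPendingUtf8(bytes);
console.log(complete); // Uint8Array [ 0x22 ]
console.log(pending);  // Uint8Array [ 0xC3 ]
```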
171
node_modules/@discoveryjs/json-ext/src/stringify-chunked.js generated vendored Normal file

@@ -0,0 +1,171 @@
import { normalizeStringifyOptions, replaceValue } from './utils.js';
function encodeString(value) {
if (/[^\x20\x21\x23-\x5B\x5D-\uD799]/.test(value)) { // [^\x20-\uD799]|[\x22\x5c]
return JSON.stringify(value);
}
return '"' + value + '"';
}
export function* stringifyChunked(value, ...args) {
const { replacer, getKeys, space, ...options } = normalizeStringifyOptions(...args);
const highWaterMark = Number(options.highWaterMark) || 0x4000; // 16kb by default
const keyStrings = new Map();
const stack = [];
const rootValue = { '': value };
let prevState = null;
let state = () => printEntry('', value);
let stateValue = rootValue;
let stateEmpty = true;
let stateKeys = [''];
let stateIndex = 0;
let buffer = '';
while (true) {
state();
if (buffer.length >= highWaterMark || prevState === null) {
// flush buffer
yield buffer;
buffer = '';
if (prevState === null) {
break;
}
}
}
function printObject() {
if (stateIndex === 0) {
stateKeys = getKeys(stateValue);
buffer += '{';
}
// when no keys left
if (stateIndex === stateKeys.length) {
buffer += space && !stateEmpty
? `\n${space.repeat(stack.length - 1)}}`
: '}';
popState();
return;
}
const key = stateKeys[stateIndex++];
printEntry(key, stateValue[key]);
}
function printArray() {
if (stateIndex === 0) {
buffer += '[';
}
if (stateIndex === stateValue.length) {
buffer += space && !stateEmpty
? `\n${space.repeat(stack.length - 1)}]`
: ']';
popState();
return;
}
printEntry(stateIndex, stateValue[stateIndex++]);
}
function printEntryPrelude(key) {
if (stateEmpty) {
stateEmpty = false;
} else {
buffer += ',';
}
if (space && prevState !== null) {
buffer += `\n${space.repeat(stack.length)}`;
}
if (state === printObject) {
let keyString = keyStrings.get(key);
if (keyString === undefined) {
keyStrings.set(key, keyString = encodeString(key) + (space ? ': ' : ':'));
}
buffer += keyString;
}
}
function printEntry(key, value) {
value = replaceValue(stateValue, key, value, replacer);
if (value === null || typeof value !== 'object') {
// primitive
if (state !== printObject || value !== undefined) {
printEntryPrelude(key);
pushPrimitive(value);
}
} else {
// Check for a circular reference: the value is already on the stack of objects being stringified
if (stack.includes(value)) {
throw new TypeError('Converting circular structure to JSON');
}
printEntryPrelude(key);
stack.push(value);
pushState();
state = Array.isArray(value) ? printArray : printObject;
stateValue = value;
stateEmpty = true;
stateIndex = 0;
}
}
function pushPrimitive(value) {
switch (typeof value) {
case 'string':
buffer += encodeString(value);
break;
case 'number':
buffer += Number.isFinite(value) ? String(value) : 'null';
break;
case 'boolean':
buffer += value ? 'true' : 'false';
break;
case 'undefined':
case 'object': // typeof null === 'object'
buffer += 'null';
break;
default:
throw new TypeError(`Do not know how to serialize a ${value.constructor?.name || typeof value}`);
}
}
function pushState() {
prevState = {
keys: stateKeys,
index: stateIndex,
prev: prevState
};
}
function popState() {
stack.pop();
const value = stack.length > 0 ? stack[stack.length - 1] : rootValue;
// restore state
state = Array.isArray(value) ? printArray : printObject;
stateValue = value;
stateEmpty = false;
stateKeys = prevState.keys;
stateIndex = prevState.index;
// pop state
prevState = prevState.prev;
}
}
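`stringifyChunked` is a synchronous generator that yields JSON text in roughly `highWaterMark`-sized pieces (16kb by default); concatenating every chunk reproduces what `JSON.stringify` would return. A sketch of the consumption pattern, using a simplified stand-in chunker so the snippet is self-contained (the real generator would be imported from `@discoveryjs/json-ext` and flushes on a high-water mark rather than a hard slice size):

```javascript
// Stand-in for stringifyChunked(): yields JSON text in fixed-size
// pieces. Illustrates the consumer-side contract only.
function* stringifyChunkedSketch(value, chunkSize = 8) {
    const json = JSON.stringify(value);
    for (let i = 0; i < json.length; i += chunkSize) {
        yield json.slice(i, i + chunkSize);
    }
}

const data = { name: 'json-ext', tags: ['stream', 'chunked'] };
let result = '';
for (const chunk of stringifyChunkedSketch(data)) {
    result += chunk; // each chunk is a plain string
}
// result is identical to JSON.stringify(data)
```

The benefit of the real generator is that it never materializes the whole JSON string at once, so huge values can be streamed to a file or socket with bounded memory.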

@@ -0,0 +1,247 @@
import { normalizeStringifyOptions, replaceValue } from './utils.js';
const hasOwn = typeof Object.hasOwn === 'function'
? Object.hasOwn
: (object, key) => Object.hasOwnProperty.call(object, key);
// https://tc39.es/ecma262/#table-json-single-character-escapes
const escapableCharCodeSubstitution = { // JSON Single Character Escape Sequences
0x08: '\\b',
0x09: '\\t',
0x0a: '\\n',
0x0c: '\\f',
0x0d: '\\r',
0x22: '\\\"',
0x5c: '\\\\'
};
const charLength2048 = Uint8Array.from({ length: 2048 }, (_, code) => {
if (hasOwn(escapableCharCodeSubstitution, code)) {
return 2; // \X
}
if (code < 0x20) {
return 6; // \uXXXX
}
return code < 128 ? 1 : 2; // UTF8 bytes
});
function isLeadingSurrogate(code) {
return code >= 0xD800 && code <= 0xDBFF;
}
function isTrailingSurrogate(code) {
return code >= 0xDC00 && code <= 0xDFFF;
}
function stringLength(str) {
// Fast path to compute length when a string contains only characters encoded as single bytes
if (!/[^\x20\x21\x23-\x5B\x5D-\x7F]/.test(str)) {
return str.length + 2;
}
let len = 0;
let prevLeadingSurrogate = false;
for (let i = 0; i < str.length; i++) {
const code = str.charCodeAt(i);
if (code < 2048) {
len += charLength2048[code];
} else if (isLeadingSurrogate(code)) {
len += 6; // \uXXXX since no pair with trailing surrogate yet
prevLeadingSurrogate = true;
continue;
} else if (isTrailingSurrogate(code)) {
len = prevLeadingSurrogate
? len - 2 // a surrogate pair encodes as 4 UTF8 bytes; the leading surrogate was already counted as 6, so subtract 2
: len + 6; // \uXXXX
} else {
len += 3; // code >= 2048 takes 3 bytes in UTF8
}
prevLeadingSurrogate = false;
}
return len + 2; // +2 for quotes
}
// avoid producing a string from a number
function intLength(num) {
let len = 0;
if (num < 0) {
len = 1;
num = -num;
}
if (num >= 1e9) {
len += 9;
num = (num - num % 1e9) / 1e9;
}
if (num >= 1e4) {
if (num >= 1e6) {
return len + (num >= 1e8
? 9
: num >= 1e7 ? 8 : 7
);
}
return len + (num >= 1e5 ? 6 : 5);
}
return len + (num >= 1e2
? num >= 1e3 ? 4 : 3
: num >= 10 ? 2 : 1
);
}
function primitiveLength(value) {
switch (typeof value) {
case 'string':
return stringLength(value);
case 'number':
return Number.isFinite(value)
? Number.isInteger(value)
? intLength(value)
: String(value).length
: 4 /* null */;
case 'boolean':
return value ? 4 /* true */ : 5 /* false */;
case 'undefined':
case 'object':
return 4; /* null */
default:
return 0;
}
}
export function stringifyInfo(value, ...args) {
const { replacer, getKeys, ...options } = normalizeStringifyOptions(...args);
const continueOnCircular = Boolean(options.continueOnCircular);
const space = options.space?.length || 0;
const keysLength = new Map();
const visited = new Map();
const circular = new Set();
const stack = [];
const root = { '': value };
let stop = false;
let bytes = 0;
let spaceBytes = 0;
let objects = 0;
walk(root, '', value);
// when value is undefined or was replaced with undefined
if (bytes === 0) {
bytes += 9; // FIXME: that's the length of undefined, should we normalize behaviour to convert it to null?
}
return {
bytes: isNaN(bytes) ? Infinity : bytes + spaceBytes,
spaceBytes: space > 0 && isNaN(bytes) ? Infinity : spaceBytes,
circular: [...circular]
};
function walk(holder, key, value) {
if (stop) {
return;
}
value = replaceValue(holder, key, value, replacer);
if (value === null || typeof value !== 'object') {
// primitive
if (value !== undefined || Array.isArray(holder)) {
bytes += primitiveLength(value);
}
} else {
// check for circular references
if (stack.includes(value)) {
circular.add(value);
bytes += 4; // treat as null
if (!continueOnCircular) {
stop = true;
}
return;
}
// Using 'visited' allows avoiding hang-ups in cases of highly interconnected object graphs;
// for example, a list of git commits with references to parents can lead to N^2 complexity for traversal,
// and N when 'visited' is used
if (visited.has(value)) {
bytes += visited.get(value);
return;
}
objects++;
const prevObjects = objects;
const valueBytes = bytes;
let valueLength = 0;
stack.push(value);
if (Array.isArray(value)) {
// array
valueLength = value.length;
for (let i = 0; i < valueLength; i++) {
walk(value, i, value[i]);
}
} else {
// object
let prevLength = bytes;
for (const key of getKeys(value)) {
walk(value, key, value[key]);
if (prevLength !== bytes) {
let keyLen = keysLength.get(key);
if (keyLen === undefined) {
keysLength.set(key, keyLen = stringLength(key) + 1); // "key":
}
// value is printed
bytes += keyLen;
valueLength++;
prevLength = bytes;
}
}
}
bytes += valueLength === 0
? 2 // {} or []
: 1 + valueLength; // {} or [] + commas
if (space > 0 && valueLength > 0) {
spaceBytes +=
// a space between ":" and a value for each object entry
(Array.isArray(value) ? 0 : valueLength) +
// the formula results from folding the following components:
// - for each key-value pair or element: indent + newline
// (1 + stack.length * space) * valueLength
// - indent (one level less) before "}" or "]" + newline
// (stack.length - 1) * space + 1
(1 + stack.length * space) * (valueLength + 1) - space;
}
stack.pop();
// add to 'visited' only objects that contain nested objects
if (prevObjects !== objects) {
visited.set(value, bytes - valueBytes);
}
}
}
}
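`stringifyInfo` estimates the byte size of the eventual JSON without ever building the string; one piece of that is `intLength`, which computes an integer's decimal digit count arithmetically instead of via `String(num).length`. The function below is copied verbatim from the file above and checked against the string-based answer:

```javascript
// intLength() from stringify-info.js: digit count of an integer
// (plus one for the minus sign) without producing a string.
function intLength(num) {
    let len = 0;
    if (num < 0) {
        len = 1;
        num = -num;
    }
    if (num >= 1e9) {
        len += 9;
        num = (num - num % 1e9) / 1e9; // strip the low 9 digits
    }
    if (num >= 1e4) {
        if (num >= 1e6) {
            return len + (num >= 1e8 ? 9 : num >= 1e7 ? 8 : 7);
        }
        return len + (num >= 1e5 ? 6 : 5);
    }
    return len + (num >= 1e2 ? (num >= 1e3 ? 4 : 3) : num >= 10 ? 2 : 1);
}

// Agrees with String(num).length for safe integers:
for (const n of [0, 7, -7, 42, 999, 1e4, 123456789, -1234567890]) {
    if (intLength(n) !== String(n).length) {
        throw new Error(`mismatch for ${n}`);
    }
}
```

Avoiding the number-to-string conversion matters here because the estimator may visit millions of numbers while only a count, never the text, is needed.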

100
node_modules/@discoveryjs/json-ext/src/utils.js generated vendored Normal file
@@ -0,0 +1,100 @@
export function isIterable(value) {
return (
typeof value === 'object' &&
value !== null &&
(
typeof value[Symbol.iterator] === 'function' ||
typeof value[Symbol.asyncIterator] === 'function'
)
);
}
export function replaceValue(holder, key, value, replacer) {
if (value && typeof value.toJSON === 'function') {
value = value.toJSON();
}
if (replacer !== null) {
value = replacer.call(holder, String(key), value);
}
switch (typeof value) {
case 'function':
case 'symbol':
value = undefined;
break;
case 'object':
if (value !== null) {
const cls = value.constructor;
if (cls === String || cls === Number || cls === Boolean) {
value = value.valueOf();
}
}
break;
}
return value;
}
export function normalizeReplacer(replacer) {
if (typeof replacer === 'function') {
return replacer;
}
if (Array.isArray(replacer)) {
const allowlist = new Set(replacer
.map(item => {
const cls = item && item.constructor;
return cls === String || cls === Number ? String(item) : null;
})
.filter(item => typeof item === 'string')
);
return [...allowlist];
}
return null;
}
export function normalizeSpace(space) {
if (typeof space === 'number') {
if (!Number.isFinite(space) || space < 1) {
return false;
}
return ' '.repeat(Math.min(space, 10));
}
if (typeof space === 'string') {
return space.slice(0, 10) || false;
}
return false;
}
export function normalizeStringifyOptions(optionsOrReplacer, space) {
if (optionsOrReplacer === null || Array.isArray(optionsOrReplacer) || typeof optionsOrReplacer !== 'object') {
optionsOrReplacer = {
replacer: optionsOrReplacer,
space
};
}
let replacer = normalizeReplacer(optionsOrReplacer.replacer);
let getKeys = Object.keys;
if (Array.isArray(replacer)) {
const allowlist = replacer;
getKeys = () => allowlist;
replacer = null;
}
return {
...optionsOrReplacer,
replacer,
getKeys,
space: normalizeSpace(optionsOrReplacer.space)
};
}
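`normalizeSpace` mirrors how `JSON.stringify` interprets its `space` argument: numbers become up to ten spaces, strings are truncated to ten characters, and everything else disables pretty-printing. The function below is copied verbatim from the file above to show the resulting values:

```javascript
// normalizeSpace() from utils.js: JSON.stringify-compatible handling
// of the `space` argument (capped at 10 characters).
function normalizeSpace(space) {
    if (typeof space === 'number') {
        if (!Number.isFinite(space) || space < 1) {
            return false;
        }
        return ' '.repeat(Math.min(space, 10));
    }
    if (typeof space === 'string') {
        return space.slice(0, 10) || false;
    }
    return false;
}

normalizeSpace(4);    // '    ' (four spaces)
normalizeSpace(100);  // capped at ten spaces
normalizeSpace('\t'); // '\t'
normalizeSpace(0);    // false — no pretty-printing
normalizeSpace(null); // false
```

Returning `false` (rather than `''`) lets callers branch on `if (space)` to skip all indentation work in the compact case.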

54
node_modules/@discoveryjs/json-ext/src/web-streams.js generated vendored Normal file
@@ -0,0 +1,54 @@
/* eslint-env browser */
import { parseChunked } from './parse-chunked.js';
import { stringifyChunked } from './stringify-chunked.js';
import { isIterable } from './utils.js';
export function parseFromWebStream(stream) {
// 2024/6/17: currently, an @@asyncIterator on a ReadableStream is not widely supported,
// therefore use a fallback using a reader
// https://caniuse.com/mdn-api_readablestream_--asynciterator
return parseChunked(isIterable(stream) ? stream : async function*() {
const reader = stream.getReader();
try {
while (true) {
const { value, done } = await reader.read();
if (done) {
break;
}
yield value;
}
} finally {
reader.releaseLock();
}
});
}
export function createStringifyWebStream(value, replacer, space) {
// 2024/6/17: the ReadableStream.from() static method is supported
// in Node.js 20.6+ and Firefox only
if (typeof ReadableStream.from === 'function') {
return ReadableStream.from(stringifyChunked(value, replacer, space));
}
// emulate ReadableStream.from()
return new ReadableStream({
start() {
this.generator = stringifyChunked(value, replacer, space);
},
pull(controller) {
const { value, done } = this.generator.next();
if (done) {
controller.close();
} else {
controller.enqueue(value);
}
},
cancel() {
this.generator = null;
}
});
}
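When `ReadableStream.from()` is unavailable, `createStringifyWebStream` falls back to an underlying-source object whose `pull()` advances the generator by exactly one chunk per call, which is how backpressure is respected. That pull-driven pattern can be sketched without `ReadableStream` at all, using a minimal controller stand-in (the fake controller below is purely illustrative, not part of the library):

```javascript
// Minimal stand-in for the pull()-driven emulation above: each pull
// advances the generator one step; done closes the "stream".
function makePullSource(generator) {
    return {
        pull(controller) {
            const { value, done } = generator.next();
            if (done) {
                controller.close();
            } else {
                controller.enqueue(value);
            }
        }
    };
}

// Drive it with a fake controller that just collects chunks.
const chunks = [];
let closed = false;
const controller = {
    enqueue: chunk => chunks.push(chunk),
    close: () => { closed = true; }
};
const source = makePullSource(['{"a":', '1}'][Symbol.iterator]());
while (!closed) {
    source.pull(controller); // a real stream calls this on demand
}
// chunks now holds both pieces and closed is true
```

In the real stream the consumer's read rate drives `pull()`, so chunks are only produced as fast as they are consumed.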