Initial commit: notification-elements-demo app

Interactive Angular 19 demo for @sda/notification-elements-ui with
6 sections: Bell & Feed, Notification Center, Inbox, Comments &
Threads, Mention Input, and Full-Featured layout. Includes mock
data, dark mode toggle, and real-time event log.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Author: Giuliano Silvestro
Date: 2026-02-13 21:49:19 +10:00
Commit: 5d0c9ec7eb
36473 changed files with 3778146 additions and 0 deletions

node_modules/cacache/LICENSE.md (generated, vendored)

@@ -0,0 +1,16 @@
ISC License
Copyright (c) npm, Inc.
Permission to use, copy, modify, and/or distribute this software for
any purpose with or without fee is hereby granted, provided that the
above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE COPYRIGHT HOLDER DISCLAIMS
ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR
CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE
OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE
USE OR PERFORMANCE OF THIS SOFTWARE.

node_modules/cacache/README.md (generated, vendored)

@@ -0,0 +1,715 @@
# cacache [![npm version](https://img.shields.io/npm/v/cacache.svg)](https://npm.im/cacache) [![license](https://img.shields.io/npm/l/cacache.svg)](https://npm.im/cacache) [![Travis](https://img.shields.io/travis/npm/cacache.svg)](https://travis-ci.org/npm/cacache) [![AppVeyor](https://ci.appveyor.com/api/projects/status/github/npm/cacache?svg=true)](https://ci.appveyor.com/project/npm/cacache) [![Coverage Status](https://coveralls.io/repos/github/npm/cacache/badge.svg?branch=latest)](https://coveralls.io/github/npm/cacache?branch=latest)
[`cacache`](https://github.com/npm/cacache) is a Node.js library for managing
local key and content address caches. It's really fast, really good at
concurrency, and it will never give you corrupted data, even if cache files
get corrupted or manipulated.

On systems that support user and group settings on files, cacache will
match the `uid` and `gid` values to the folder where the cache lives, even
when running as `root`.

It was written to be used as [npm](https://npm.im)'s local cache, but can
just as easily be used on its own.
## Install

```
$ npm install --save cacache
```
## Table of Contents
* [Example](#example)
* [Features](#features)
* [Contributing](#contributing)
* [API](#api)
  * [Using localized APIs](#localized-api)
  * Reading
    * [`ls`](#ls)
    * [`ls.stream`](#ls-stream)
    * [`get`](#get-data)
    * [`get.stream`](#get-stream)
    * [`get.info`](#get-info)
    * [`get.hasContent`](#get-hasContent)
  * Writing
    * [`put`](#put-data)
    * [`put.stream`](#put-stream)
    * [`rm.all`](#rm-all)
    * [`rm.entry`](#rm-entry)
    * [`rm.content`](#rm-content)
    * [`index.compact`](#index-compact)
    * [`index.insert`](#index-insert)
  * Utilities
    * [`clearMemoized`](#clear-memoized)
    * [`tmp.mkdir`](#tmp-mkdir)
    * [`tmp.withTmp`](#with-tmp)
  * Integrity
    * [Subresource Integrity](#integrity)
    * [`verify`](#verify)
    * [`verify.lastRun`](#verify-last-run)
### Example
```javascript
const cacache = require('cacache')
const fs = require('fs')
const cachePath = '/tmp/my-toy-cache'
const key = 'my-unique-key-1234'
// Cache it! Use `cachePath` as the root of the content cache
let integrityHash
cacache.put(cachePath, key, '10293801983029384').then(integrity => {
  integrityHash = integrity
  console.log(`Saved content to ${cachePath}.`)
})

const destination = '/tmp/mytar.tgz'

// Copy the contents out of the cache and into their destination!
// But this time, use stream instead!
cacache.get.stream(
  cachePath, key
).pipe(
  fs.createWriteStream(destination)
).on('finish', () => {
  console.log('done extracting!')
})

// The same thing, but skip the key index.
cacache.get.byDigest(cachePath, integrityHash).then(data => {
  fs.writeFile(destination, data, err => {
    console.log('tarball data fetched based on its sha512sum and written out!')
  })
})
```
### Features
* Extraction by key or by content address (shasum, etc)
* [Subresource Integrity](#integrity) web standard support
* Multi-hash support - safely host sha1, sha512, etc, in a single cache
* Automatic content deduplication
* Fault tolerance (immune to corruption, partial writes, process races, etc)
* Consistency guarantees on read and write (full data verification)
* Lockless, high-concurrency cache access
* Streaming support
* Promise support
* Fast -- sub-millisecond reads and writes including verification
* Arbitrary metadata storage
* Garbage collection and additional offline verification
* Thorough test coverage
* There's probably a bloom filter in there somewhere. Those are cool, right? 🤔
### Contributing
The cacache team enthusiastically welcomes contributions and project participation! There's a bunch of things you can do if you want to contribute! Please don't hesitate to jump in if you'd like to, or even ask us questions if something isn't clear.
All participants and maintainers in this project are expected to follow the [Code of Conduct](CODE_OF_CONDUCT.md), and just generally be excellent to each other.
Please refer to the [Changelog](CHANGELOG.md) for project history details, too.
Happy hacking!
### API
#### <a name="ls"></a> `> cacache.ls(cache) -> Promise<Object>`
Lists info for all entries currently in the cache as a single large object. Each
entry in the object will be keyed by the unique index key, with corresponding
[`get.info`](#get-info) objects as the values.
##### Example
```javascript
cacache.ls(cachePath).then(console.log)
// Output
{
  'my-thing': {
    key: 'my-thing',
    integrity: 'sha512-BaSe64/EnCoDED+HAsh==',
    path: '.testcache/content/deadbeef', // joined with `cachePath`
    time: 12345698490,
    size: 4023948,
    metadata: {
      name: 'blah',
      version: '1.2.3',
      description: 'this was once a package but now it is my-thing'
    }
  },
  'other-thing': {
    key: 'other-thing',
    integrity: 'sha1-ANothER+hasH=',
    path: '.testcache/content/bada55',
    time: 11992309289,
    size: 111112
  }
}
```
#### <a name="ls-stream"></a> `> cacache.ls.stream(cache) -> Readable`
Lists info for all entries currently in the cache, one entry at a time.

This works just like [`ls`](#ls), except [`get.info`](#get-info) entries are
returned as `'data'` events on the returned stream.
##### Example
```javascript
cacache.ls.stream(cachePath).on('data', console.log)
// Output
{
  key: 'my-thing',
  integrity: 'sha512-BaSe64HaSh',
  path: '.testcache/content/deadbeef', // joined with `cachePath`
  time: 12345698490,
  size: 13423,
  metadata: {
    name: 'blah',
    version: '1.2.3',
    description: 'this was once a package but now it is my-thing'
  }
}

{
  key: 'other-thing',
  integrity: 'whirlpool-WoWSoMuchSupport',
  path: '.testcache/content/bada55',
  time: 11992309289,
  size: 498023984029
}

{
  ...
}
```
#### <a name="get-data"></a> `> cacache.get(cache, key, [opts]) -> Promise({data, metadata, integrity})`
Returns an object with the cached data, digest, and metadata identified by
`key`. The `data` property of this object will be a `Buffer` instance that
presumably holds some data that means something to you. I'm sure you know what
to do with it! cacache just won't care.
`integrity` is a [Subresource
Integrity](#integrity)
string. That is, a string that can be used to verify `data`, which looks like
`<hash-algorithm>-<base64-integrity-hash>`.
If there is no content identified by `key`, or if the locally-stored data does
not pass the validity checksum, the promise will be rejected.
A sub-function, `get.byDigest`, may be used for identical behavior, except lookup
will happen by integrity hash, bypassing the index entirely. This version of the
function *only* returns `data` itself, without any wrapper.
See: [options](#get-options)
##### Note
This function loads the entire cache entry into memory before returning it. If
you're dealing with Very Large data, consider using [`get.stream`](#get-stream)
instead.
##### Example
```javascript
// Look up by key
cache.get(cachePath, 'my-thing').then(console.log)
// Output:
{
  metadata: {
    thingName: 'my'
  },
  integrity: 'sha512-BaSe64HaSh',
  data: Buffer#<deadbeef>,
  size: 9320
}
// Look up by digest
cache.get.byDigest(cachePath, 'sha512-BaSe64HaSh').then(console.log)
// Output:
Buffer#<deadbeef>
```
#### <a name="get-stream"></a> `> cacache.get.stream(cache, key, [opts]) -> Readable`
Returns a [Readable Stream](https://nodejs.org/api/stream.html#stream_readable_streams) of the cached data identified by `key`.
If there is no content identified by `key`, or if the locally-stored data does
not pass the validity checksum, an error will be emitted.
`metadata` and `integrity` events will be emitted before the stream closes, if
you need to collect that extra data about the cached entry.
A sub-function, `get.stream.byDigest`, may be used for identical behavior,
except lookup will happen by integrity hash, bypassing the index entirely. This
version does not emit the `metadata` and `integrity` events at all.
See: [options](#get-options)
##### Example
```javascript
// Look up by key
cache.get.stream(
  cachePath, 'my-thing'
).on('metadata', metadata => {
  console.log('metadata:', metadata)
}).on('integrity', integrity => {
  console.log('integrity:', integrity)
}).pipe(
  fs.createWriteStream('./x.tgz')
)

// Outputs:
metadata: { ... }
integrity: 'sha512-SoMeDIGest+64=='

// Look up by digest
cache.get.stream.byDigest(
  cachePath, 'sha512-SoMeDIGest+64=='
).pipe(
  fs.createWriteStream('./x.tgz')
)
```
#### <a name="get-info"></a> `> cacache.get.info(cache, key) -> Promise`
Looks up `key` in the cache index, returning information about the entry if
one exists.
##### Fields
* `key` - Key the entry was looked up under. Matches the `key` argument.
* `integrity` - [Subresource Integrity hash](#integrity) for the content this entry refers to.
* `path` - Filesystem path where content is stored, joined with `cache` argument.
* `time` - Timestamp the entry was first added on.
* `metadata` - User-assigned metadata associated with the entry/content.
##### Example
```javascript
cacache.get.info(cachePath, 'my-thing').then(console.log)
// Output
{
  key: 'my-thing',
  integrity: 'sha256-MUSTVERIFY+ALL/THINGS==',
  path: '.testcache/content/deadbeef',
  time: 12345698490,
  size: 849234,
  metadata: {
    name: 'blah',
    version: '1.2.3',
    description: 'this was once a package but now it is my-thing'
  }
}
```
#### <a name="get-hasContent"></a> `> cacache.get.hasContent(cache, integrity) -> Promise`
Looks up a [Subresource Integrity hash](#integrity) in the cache. If content
exists for this `integrity`, it will return an object containing the specific
single integrity hash that was found in the `sri` key, and the size of the
found content as `size`. If no content exists for this integrity, it will
return `false`.
##### Example
```javascript
cacache.get.hasContent(cachePath, 'sha256-MUSTVERIFY+ALL/THINGS==').then(console.log)
// Output
{
  sri: {
    source: 'sha256-MUSTVERIFY+ALL/THINGS==',
    algorithm: 'sha256',
    digest: 'MUSTVERIFY+ALL/THINGS==',
    options: []
  },
  size: 9001
}

cacache.get.hasContent(cachePath, 'sha512-NOT+IN/CACHE==').then(console.log)

// Output
false
```
##### <a name="get-options"></a> Options
##### `opts.integrity`

If present, the pre-calculated digest for the fetched content. If this option
is provided and does not match the actual digest of the content, the read will
fail with an `EINTEGRITY` error.

##### `opts.memoize`

Default: null

If explicitly truthy, cacache will read from memory and memoize data on bulk read. If `false`, cacache will read directly from disk. By default, reader functions will return memoized data when it is available.

##### `opts.size`

If provided, the data stream will be verified to check that enough data was
passed through. If there's more or less data than expected, the read will fail
with an `EBADSIZE` error.
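As a quick sketch of how these read options combine (the key and the byte size here are made up):

```javascript
// Bypass the in-memory cache and insist the entry be exactly 1024 bytes;
// a size mismatch rejects with EBADSIZE, failed verification with EINTEGRITY.
cacache.get(cachePath, 'my-thing', { memoize: false, size: 1024 })
  .then(({ data, integrity }) => console.log(integrity, data.length))
  .catch(err => console.error(err.code))
```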
#### <a name="put-data"></a> `> cacache.put(cache, key, data, [opts]) -> Promise`
Inserts data passed to it into the cache. The returned Promise resolves with a
digest (generated according to [`opts.algorithms`](#optsalgorithms)) after the
cache entry has been successfully written.
See: [options](#put-options)
##### Example
```javascript
fetch(
  'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).then(data => {
  return cacache.put(cachePath, 'registry.npmjs.org|cacache@1.0.0', data)
}).then(integrity => {
  console.log('integrity hash is', integrity)
})
```
#### <a name="put-stream"></a> `> cacache.put.stream(cache, key, [opts]) -> Writable`
Returns a [Writable
Stream](https://nodejs.org/api/stream.html#stream_writable_streams) that inserts
data written to it into the cache. Emits an `integrity` event with the digest of
written contents when it succeeds.
See: [options](#put-options)
##### Example
```javascript
request.get(
  'https://registry.npmjs.org/cacache/-/cacache-1.0.0.tgz'
).pipe(
  cacache.put.stream(
    cachePath, 'registry.npmjs.org|cacache@1.0.0'
  ).on('integrity', d => console.log(`integrity digest is ${d}`))
)
```
##### <a name="put-options"></a> Options
##### `opts.metadata`
Arbitrary metadata to be attached to the inserted key.
##### `opts.size`
If provided, the data stream will be verified to check that enough data was
passed through. If there's more or less data than expected, insertion will fail
with an `EBADSIZE` error.
##### `opts.integrity`
If present, the pre-calculated digest for the inserted content. If this option
is provided and does not match the post-insertion digest, insertion will fail
with an `EINTEGRITY` error.
`algorithms` has no effect if this option is present.
##### `opts.integrityEmitter`
*Streaming only* If present, uses the provided event emitter as a source of
truth for both integrity and size. This allows use cases where integrity is
already being calculated outside of cacache to reuse that data instead of
calculating it a second time.
The emitter must emit both the `'integrity'` and `'size'` events.
NOTE: If this option is provided, you must verify that you receive the correct
integrity value yourself and emit an `'error'` event if there is a mismatch.
[ssri Integrity Streams](https://github.com/npm/ssri#integrity-stream) do this for you when given an expected integrity.
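For example, a rough sketch that reuses an ssri integrity stream as the emitter (`source` and `expectedIntegrity` are placeholders for your own input stream and digest):

```javascript
const ssri = require('ssri')

// The integrity stream passes data through while emitting
// 'integrity' and 'size', and errors on a digest mismatch.
const hashStream = ssri.integrityStream({ integrity: expectedIntegrity })
source.pipe(hashStream).pipe(
  cacache.put.stream(cachePath, 'my-key', { integrityEmitter: hashStream })
).on('integrity', i => console.log('stored as', i.toString()))
```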
##### `opts.algorithms`
Default: ['sha512']
Hashing algorithms to use when calculating the [subresource integrity
digest](#integrity)
for inserted data. Can use any algorithm listed in `crypto.getHashes()` or
`'omakase'`/`'お任せします'` to pick a random hash algorithm on each insertion. You
may also use any anagram of `'modnar'` to use this feature.
Currently only supports one algorithm at a time (i.e., an array length of
exactly `1`). Has no effect if `opts.integrity` is present.
##### `opts.memoize`
Default: null
If provided, cacache will memoize the given cache insertion in memory, bypassing
any filesystem checks for that key or digest in future cache fetches. Nothing
will be written to the in-memory cache unless this option is explicitly truthy.
If `opts.memoize` is an object or a `Map`-like (that is, an object with `get`
and `set` methods), it will be written to instead of the global memoization
cache.
Reading from disk data can be forced by explicitly passing `memoize: false` to
the reader functions, but their default will be to read from memory.
##### `opts.tmpPrefix`
Default: null
Prefix to append on the temporary directory name inside the cache's tmp dir.
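Putting the write options together — a minimal sketch, where `data`, the metadata values, and the local `memoCache` Map are illustrative only:

```javascript
const memoCache = new Map()

cacache.put(cachePath, 'my-key', data, {
  algorithms: ['sha512'], // exactly one algorithm is supported
  metadata: { fetchedAt: Date.now() },
  memoize: memoCache, // a Map-like, used instead of the global memoization cache
  tmpPrefix: 'fetch-',
}).then(integrity => {
  console.log('stored with integrity', integrity.toString())
})
```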
#### <a name="rm-all"></a> `> cacache.rm.all(cache) -> Promise`
Clears the entire cache. Mainly by blowing away the cache directory itself.
##### Example
```javascript
cacache.rm.all(cachePath).then(() => {
  console.log('THE APOCALYPSE IS UPON US 😱')
})
```
#### <a name="rm-entry"></a> `> cacache.rm.entry(cache, key, [opts]) -> Promise`
Alias: `cacache.rm`
Removes the index entry for `key`. Content will still be accessible if
requested directly by content address ([`get.stream.byDigest`](#get-stream)).
By default, this appends a new entry to the index with an integrity of `null`.
If `opts.removeFully` is set to `true`, then the index file itself will be
physically deleted rather than appending a `null`.
To remove the content itself (which might still be used by other entries), use
[`rm.content`](#rm-content). Or, to safely vacuum any unused content, use
[`verify`](#verify).
##### Example
```javascript
cacache.rm.entry(cachePath, 'my-thing').then(() => {
  console.log('I did not like it anyway')
})
```
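And a small sketch of the `opts.removeFully` variant described above:

```javascript
cacache.rm.entry(cachePath, 'my-thing', { removeFully: true }).then(() => {
  console.log('index entry physically deleted, not just shadowed by null')
})
```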
#### <a name="rm-content"></a> `> cacache.rm.content(cache, integrity) -> Promise`
Removes the content identified by `integrity`. Any index entries referring to it
will not be usable again until the content is re-added to the cache with an
identical digest.
##### Example
```javascript
cacache.rm.content(cachePath, 'sha512-SoMeDIGest/IN+BaSE64==').then(() => {
  console.log('data for my-thing is gone!')
})
```
#### <a name="index-compact"></a> `> cacache.index.compact(cache, key, matchFn, [opts]) -> Promise`
Uses `matchFn`, which must be a synchronous function that accepts two entries
and returns a boolean indicating whether or not the two entries match, to
deduplicate all entries in the cache for the given `key`.
If `opts.validateEntry` is provided, it will be called as a function with the
only parameter being a single index entry. The function must return a Boolean:
if it returns `true`, the entry is considered valid and will be kept in the
index; if it returns `false`, the entry will be removed from the index.
If `opts.validateEntry` is not provided, however, every entry in the index will
be deduplicated and kept until the first `null` integrity is reached, removing
all entries that were written before the `null`.
The deduplicated list of entries is both written to the index, replacing the
existing content, and returned in the Promise.
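A minimal sketch: deduplicate on integrity, dropping entries that fail a hypothetical `isStillValid` check:

```javascript
cacache.index.compact(
  cachePath,
  'my-key',
  (a, b) => a.integrity === b.integrity,
  { validateEntry: (entry) => isStillValid(entry.metadata) } // isStillValid is yours to define
).then(entries => {
  console.log(`index now holds ${entries.length} deduplicated entries`)
})
```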
#### <a name="index-insert"></a> `> cacache.index.insert(cache, key, integrity, opts) -> Promise`
Writes an index entry to the cache for the given `key` without writing content.
It is assumed if you are using this method, you have already stored the content
some other way and you only wish to add a new index to that content. The `metadata`
and `size` properties are read from `opts` and used as part of the index entry.
Returns a Promise resolving to the newly added entry.
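A sketch of adding a second key for content that is already in the cache (here, `integrity` is assumed to come from an earlier `put`):

```javascript
cacache.index.insert(cachePath, 'alias-for-my-thing', integrity, {
  metadata: { alias: true },
  size: 9320,
}).then(entry => {
  console.log('new index entry for', entry.key, 'at', entry.path)
})
```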
#### <a name="clear-memoized"></a> `> cacache.clearMemoized()`
Completely resets the in-memory entry cache.
#### <a name="tmp-mkdir"></a> `> tmp.mkdir(cache, opts) -> Promise<Path>`
Returns a unique temporary directory inside the cache's `tmp` dir. This
directory will use the same safe user assignment that all the other stuff
in the cache uses.

Once the directory is made, it's the user's responsibility that all files
within are given the appropriate `gid`/`uid` ownership settings to match
the rest of the cache. If not, you can ask cacache to do it for you by
calling [`tmp.fix()`](#tmp-fix), which will fix all tmp directory
permissions.

If you want automatic cleanup of this directory, use
[`tmp.withTmp()`](#with-tmp).

See: [options](#tmp-options)
##### Example
```javascript
cacache.tmp.mkdir(cache).then(dir => {
  fs.writeFile(path.join(dir, 'blablabla'), Buffer#<1234>, ...)
})
```
#### <a name="tmp-fix"></a> `> tmp.fix(cache) -> Promise`
Sets the `uid` and `gid` properties on all files and folders within the tmp
folder to match the rest of the cache.
Use this after manually writing files into [`tmp.mkdir`](#tmp-mkdir) or
[`tmp.withTmp`](#with-tmp).
##### Example
```javascript
cacache.tmp.mkdir(cache).then(dir => {
  writeFile(path.join(dir, 'file'), someData).then(() => {
    // make sure we didn't just put a root-owned file in the cache
    cacache.tmp.fix().then(() => {
      // all uids and gids match now
    })
  })
})
```
#### <a name="with-tmp"></a> `> tmp.withTmp(cache, opts, cb) -> Promise`
Creates a temporary directory with [`tmp.mkdir()`](#tmp-mkdir) and calls `cb`
with it. The created temporary directory will be automatically removed once
the promise returned by `cb()` resolves.

The same caveats apply when it comes to managing permissions for the tmp dir's
contents.
See: [options](#tmp-options)
##### Example
```javascript
cacache.tmp.withTmp(cache, dir => {
  return fs.writeFile(path.join(dir, 'blablabla'), 'blabla contents', { encoding: 'utf8' })
}).then(() => {
  // `dir` no longer exists
})
```
##### <a name="tmp-options"></a> Options
##### `opts.tmpPrefix`
Default: null
Prefix to append on the temporary directory name inside the cache's tmp dir.
#### <a name="integrity"></a> Subresource Integrity Digests
For content verification and addressing, cacache uses strings following the
[Subresource
Integrity spec](https://developer.mozilla.org/en-US/docs/Web/Security/Subresource_Integrity).
That is, any time cacache expects an `integrity` argument or option, it
should be in the format `<hashAlgorithm>-<base64-hash>`.
One deviation from the current spec is that cacache will support any hash
algorithms supported by the underlying Node.js process. You can use
`crypto.getHashes()` to see which ones you can use.
##### Generating Digests Yourself
If you have an existing content shasum, it is generally formatted as a
hexadecimal string (that is, a sha1 would look like:
`5f5513f8822fdbe5145af33b64d8d970dcf95c6e`). In order to be compatible with
cacache, you'll need to convert this to an equivalent subresource integrity
string. For this example, the corresponding hash would be:
`sha1-X1UT+IIv2+UUWvM7ZNjZcNz5XG4=`.
If you want to generate an integrity string yourself for existing data, you can
use something like this:
```javascript
const crypto = require('crypto')
const hashAlgorithm = 'sha512'
const data = 'foobarbaz'
const integrity = (
  hashAlgorithm +
  '-' +
  crypto.createHash(hashAlgorithm).update(data).digest('base64')
)
```
You can also use [`ssri`](https://npm.im/ssri) to have a richer set of functionality
around SRI strings, including generation, parsing, and translating from existing
hex-formatted strings.
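For instance, a small sketch of that hex-to-SRI translation using `ssri.fromHex` (the same helper cacache uses internally):

```javascript
const ssri = require('ssri')

// translate an existing hex-formatted sha1 into an SRI string
const sri = ssri.fromHex('5f5513f8822fdbe5145af33b64d8d970dcf95c6e', 'sha1')
console.log(sri.toString()) // => 'sha1-X1UT+IIv2+UUWvM7ZNjZcNz5XG4='
```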
#### <a name="verify"></a> `> cacache.verify(cache, opts) -> Promise`
Checks out and fixes up your cache:
* Cleans up corrupted or invalid index entries.
* Custom entry filtering options.
* Garbage collects any content entries not referenced by the index.
* Checks integrity for all content entries and removes invalid content.
* Fixes cache ownership.
* Removes the `tmp` directory in the cache and all its contents.
When it's done, it'll return an object with various stats about the verification
process, including amount of storage reclaimed, number of valid entries, number
of entries removed, etc.
##### <a name="verify-options"></a> Options
##### `opts.concurrency`
Default: 20
Number of files to read concurrently from the filesystem while cleaning up.
##### `opts.filter`
Receives a formatted entry. Return false to remove it.
Note: might be called more than once on the same entry.
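For example, a sketch that keeps only registry entries during verification (the key prefix is made up):

```javascript
cacache.verify(cachePath, {
  filter: (entry) => entry.key.startsWith('registry.npmjs.org|'),
}).then(stats => console.log(stats.rejectedEntries, 'entries filtered out'))
```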
##### `opts.log`
Custom logger function:
```
log: { silly () {} }
log.silly('verify', 'verifying cache at', cache)
```
##### Example
```sh
echo somegarbage >> $CACHEPATH/content/deadbeef
```
```javascript
cacache.verify(cachePath).then(stats => {
  // deadbeef collected, because of invalid checksum.
  console.log('cache is much nicer now! stats:', stats)
})
```
#### <a name="verify-last-run"></a> `> cacache.verify.lastRun(cache) -> Promise`
Returns a `Date` representing the last time `cacache.verify` was run on `cache`.
##### Example
```javascript
cacache.verify(cachePath).then(() => {
  cacache.verify.lastRun(cachePath).then(lastTime => {
    console.log('cacache.verify was last called on ' + lastTime)
  })
})
```

node_modules/cacache/lib/content/path.js (generated, vendored)

@@ -0,0 +1,29 @@
'use strict'
const contentVer = require('../../package.json')['cache-version'].content
const hashToSegments = require('../util/hash-to-segments')
const path = require('path')
const ssri = require('ssri')
// Current format of content file path:
//
// sha512-BaSE64Hex= ->
// ~/.my-cache/content-v2/sha512/ba/da/55deadbeefc0ffee
//
module.exports = contentPath
function contentPath (cache, integrity) {
  const sri = ssri.parse(integrity, { single: true })
  // contentPath is the *strongest* algo given
  return path.join(
    contentDir(cache),
    sri.algorithm,
    ...hashToSegments(sri.hexDigest())
  )
}

module.exports.contentDir = contentDir

function contentDir (cache) {
  return path.join(cache, `content-v${contentVer}`)
}

node_modules/cacache/lib/content/read.js (generated, vendored)

@@ -0,0 +1,165 @@
'use strict'
const fs = require('fs/promises')
const fsm = require('fs-minipass')
const ssri = require('ssri')
const contentPath = require('./path')
const Pipeline = require('minipass-pipeline')
module.exports = read
const MAX_SINGLE_READ_SIZE = 64 * 1024 * 1024
async function read (cache, integrity, opts = {}) {
  const { size } = opts
  const { stat, cpath, sri } = await withContentSri(cache, integrity, async (cpath, sri) => {
    // get size
    const stat = size ? { size } : await fs.stat(cpath)
    return { stat, cpath, sri }
  })

  if (stat.size > MAX_SINGLE_READ_SIZE) {
    return readPipeline(cpath, stat.size, sri, new Pipeline()).concat()
  }

  const data = await fs.readFile(cpath, { encoding: null })

  if (stat.size !== data.length) {
    throw sizeError(stat.size, data.length)
  }

  if (!ssri.checkData(data, sri)) {
    throw integrityError(sri, cpath)
  }

  return data
}

const readPipeline = (cpath, size, sri, stream) => {
  stream.push(
    new fsm.ReadStream(cpath, {
      size,
      readSize: MAX_SINGLE_READ_SIZE,
    }),
    ssri.integrityStream({
      integrity: sri,
      size,
    })
  )
  return stream
}

module.exports.stream = readStream
module.exports.readStream = readStream

function readStream (cache, integrity, opts = {}) {
  const { size } = opts
  const stream = new Pipeline()
  // Set all this up to run on the stream and then just return the stream
  Promise.resolve().then(async () => {
    const { stat, cpath, sri } = await withContentSri(cache, integrity, async (cpath, sri) => {
      // get size
      const stat = size ? { size } : await fs.stat(cpath)
      return { stat, cpath, sri }
    })

    return readPipeline(cpath, stat.size, sri, stream)
  }).catch(err => stream.emit('error', err))

  return stream
}

module.exports.copy = copy

function copy (cache, integrity, dest) {
  return withContentSri(cache, integrity, (cpath) => {
    return fs.copyFile(cpath, dest)
  })
}

module.exports.hasContent = hasContent

async function hasContent (cache, integrity) {
  if (!integrity) {
    return false
  }

  try {
    return await withContentSri(cache, integrity, async (cpath, sri) => {
      const stat = await fs.stat(cpath)
      return { size: stat.size, sri, stat }
    })
  } catch (err) {
    if (err.code === 'ENOENT') {
      return false
    }

    if (err.code === 'EPERM') {
      /* istanbul ignore else */
      if (process.platform !== 'win32') {
        throw err
      } else {
        return false
      }
    }
  }
}

async function withContentSri (cache, integrity, fn) {
  const sri = ssri.parse(integrity)
  // If `integrity` has multiple entries, pick the first digest
  // with available local data.
  const algo = sri.pickAlgorithm()
  const digests = sri[algo]

  if (digests.length <= 1) {
    const cpath = contentPath(cache, digests[0])
    return fn(cpath, digests[0])
  } else {
    // Can't use race here because a generic error can happen before
    // a ENOENT error, and can happen before a valid result
    const results = await Promise.all(digests.map(async (meta) => {
      try {
        return await withContentSri(cache, meta, fn)
      } catch (err) {
        if (err.code === 'ENOENT') {
          return Object.assign(
            new Error('No matching content found for ' + sri.toString()),
            { code: 'ENOENT' }
          )
        }
        return err
      }
    }))
    // Return the first non error if it is found
    const result = results.find((r) => !(r instanceof Error))
    if (result) {
      return result
    }

    // Throw the No matching content found error
    const enoentError = results.find((r) => r.code === 'ENOENT')
    if (enoentError) {
      throw enoentError
    }

    // Throw generic error
    throw results.find((r) => r instanceof Error)
  }
}

function sizeError (expected, found) {
  /* eslint-disable-next-line max-len */
  const err = new Error(`Bad data size: expected inserted data to be ${expected} bytes, but got ${found} instead`)
  err.expected = expected
  err.found = found
  err.code = 'EBADSIZE'
  return err
}

function integrityError (sri, path) {
  const err = new Error(`Integrity verification failed for ${sri} (${path})`)
  err.code = 'EINTEGRITY'
  err.sri = sri
  err.path = path
  return err
}

node_modules/cacache/lib/content/rm.js (generated, vendored)

@@ -0,0 +1,18 @@
'use strict'
const fs = require('fs/promises')
const contentPath = require('./path')
const { hasContent } = require('./read')
module.exports = rm
async function rm (cache, integrity) {
  const content = await hasContent(cache, integrity)
  // ~pretty~ sure we can't end up with a content lacking sri, but be safe
  if (content && content.sri) {
    await fs.rm(contentPath(cache, content.sri), { recursive: true, force: true })
    return true
  } else {
    return false
  }
}

node_modules/cacache/lib/content/write.js (generated, vendored)

@@ -0,0 +1,206 @@
'use strict'
const events = require('events')
const contentPath = require('./path')
const fs = require('fs/promises')
const { moveFile } = require('@npmcli/fs')
const { Minipass } = require('minipass')
const Pipeline = require('minipass-pipeline')
const Flush = require('minipass-flush')
const path = require('path')
const ssri = require('ssri')
const uniqueFilename = require('unique-filename')
const fsm = require('fs-minipass')
module.exports = write
// Cache of move operations in process so we don't duplicate
const moveOperations = new Map()
async function write (cache, data, opts = {}) {
  const { algorithms, size, integrity } = opts

  if (typeof size === 'number' && data.length !== size) {
    throw sizeError(size, data.length)
  }

  const sri = ssri.fromData(data, algorithms ? { algorithms } : {})
  if (integrity && !ssri.checkData(data, integrity, opts)) {
    throw checksumError(integrity, sri)
  }

  for (const algo in sri) {
    const tmp = await makeTmp(cache, opts)
    const hash = sri[algo].toString()
    try {
      await fs.writeFile(tmp.target, data, { flag: 'wx' })
      await moveToDestination(tmp, cache, hash, opts)
    } finally {
      if (!tmp.moved) {
        await fs.rm(tmp.target, { recursive: true, force: true })
      }
    }
  }
  return { integrity: sri, size: data.length }
}

module.exports.stream = writeStream

// writes proxied to the 'inputStream' that is passed to the Promise
// 'end' is deferred until content is handled.
class CacacheWriteStream extends Flush {
  constructor (cache, opts) {
    super()
    this.opts = opts
    this.cache = cache
    this.inputStream = new Minipass()
    this.inputStream.on('error', er => this.emit('error', er))
    this.inputStream.on('drain', () => this.emit('drain'))
    this.handleContentP = null
  }

  write (chunk, encoding, cb) {
    if (!this.handleContentP) {
      this.handleContentP = handleContent(
        this.inputStream,
        this.cache,
        this.opts
      )
      this.handleContentP.catch(error => this.emit('error', error))
    }
    return this.inputStream.write(chunk, encoding, cb)
  }

  flush (cb) {
    this.inputStream.end(() => {
      if (!this.handleContentP) {
        const e = new Error('Cache input stream was empty')
        e.code = 'ENODATA'
        // empty streams are probably emitting end right away.
        // defer this one tick by rejecting a promise on it.
        return Promise.reject(e).catch(cb)
      }
      // eslint-disable-next-line promise/catch-or-return
      this.handleContentP.then(
        (res) => {
          res.integrity && this.emit('integrity', res.integrity)
          // eslint-disable-next-line promise/always-return
          res.size !== null && this.emit('size', res.size)
          cb()
        },
        (er) => cb(er)
      )
    })
  }
}

function writeStream (cache, opts = {}) {
  return new CacacheWriteStream(cache, opts)
}

async function handleContent (inputStream, cache, opts) {
  const tmp = await makeTmp(cache, opts)
  try {
    const res = await pipeToTmp(inputStream, cache, tmp.target, opts)
    await moveToDestination(
      tmp,
      cache,
      res.integrity,
      opts
    )
    return res
  } finally {
    if (!tmp.moved) {
      await fs.rm(tmp.target, { recursive: true, force: true })
    }
  }
}

async function pipeToTmp (inputStream, cache, tmpTarget, opts) {
  const outStream = new fsm.WriteStream(tmpTarget, {
    flags: 'wx',
  })

  if (opts.integrityEmitter) {
    // we need to create these all simultaneously since they can fire in any order
    const [integrity, size] = await Promise.all([
      events.once(opts.integrityEmitter, 'integrity').then(res => res[0]),
      events.once(opts.integrityEmitter, 'size').then(res => res[0]),
      new Pipeline(inputStream, outStream).promise(),
    ])
    return { integrity, size }
  }

  let integrity
  let size
  const hashStream = ssri.integrityStream({
    integrity: opts.integrity,
    algorithms: opts.algorithms,
    size: opts.size,
  })
  hashStream.on('integrity', i => {
    integrity = i
  })
  hashStream.on('size', s => {
    size = s
  })

  const pipeline = new Pipeline(inputStream, hashStream, outStream)
  await pipeline.promise()
  return { integrity, size }
}

async function makeTmp (cache, opts) {
  const tmpTarget = uniqueFilename(path.join(cache, 'tmp'), opts.tmpPrefix)
  await fs.mkdir(path.dirname(tmpTarget), { recursive: true })
  return {
    target: tmpTarget,
    moved: false,
  }
}

async function moveToDestination (tmp, cache, sri) {
  const destination = contentPath(cache, sri)
  const destDir = path.dirname(destination)
  if (moveOperations.has(destination)) {
    return moveOperations.get(destination)
  }
  moveOperations.set(
    destination,
    fs.mkdir(destDir, { recursive: true })
      .then(async () => {
        await moveFile(tmp.target, destination, { overwrite: false })
        tmp.moved = true
        return tmp.moved
      })
      .catch(err => {
        if (!err.message.startsWith('The destination file exists')) {
          throw Object.assign(err, { code: 'EEXIST' })
        }
      }).finally(() => {
        moveOperations.delete(destination)
      })
  )
  return moveOperations.get(destination)
}

function sizeError (expected, found) {
  /* eslint-disable-next-line max-len */
  const err = new Error(`Bad data size: expected inserted data to be ${expected} bytes, but got ${found} instead`)
  err.expected = expected
  err.found = found
  err.code = 'EBADSIZE'
  return err
}

function checksumError (expected, found) {
  const err = new Error(`Integrity check failed:
  Wanted: ${expected}
   Found: ${found}`)
  err.code = 'EINTEGRITY'
  err.expected = expected
  err.found = found
  return err
}

node_modules/cacache/lib/entry-index.js (generated, vendored)

@@ -0,0 +1,336 @@
'use strict'
const crypto = require('crypto')
const {
appendFile,
mkdir,
readFile,
readdir,
rm,
writeFile,
} = require('fs/promises')
const { Minipass } = require('minipass')
const path = require('path')
const ssri = require('ssri')
const uniqueFilename = require('unique-filename')
const contentPath = require('./content/path')
const hashToSegments = require('./util/hash-to-segments')
const indexV = require('../package.json')['cache-version'].index
const { moveFile } = require('@npmcli/fs')
const lsStreamConcurrency = 5
module.exports.NotFoundError = class NotFoundError extends Error {
  constructor (cache, key) {
    super(`No cache entry for ${key} found in ${cache}`)
    this.code = 'ENOENT'
    this.cache = cache
    this.key = key
  }
}

module.exports.compact = compact

async function compact (cache, key, matchFn, opts = {}) {
  const bucket = bucketPath(cache, key)
  const entries = await bucketEntries(bucket)
  const newEntries = []
  // we loop backwards because the bottom-most result is the newest
  // since we add new entries with appendFile
  for (let i = entries.length - 1; i >= 0; --i) {
    const entry = entries[i]
    // a null integrity could mean either a delete was appended
    // or the user has simply stored an index that does not map
    // to any content. we determine if the user wants to keep the
    // null integrity based on the validateEntry function passed in options.
    // if the integrity is null and no validateEntry is provided, we break
    // as we consider the null integrity to be a deletion of everything
    // that came before it.
    if (entry.integrity === null && !opts.validateEntry) {
      break
    }

    // if this entry is valid, and it is either the first entry or
    // the newEntries array doesn't already include an entry that
    // matches this one based on the provided matchFn, then we add
    // it to the beginning of our list
    if ((!opts.validateEntry || opts.validateEntry(entry) === true) &&
      (newEntries.length === 0 ||
        !newEntries.find((oldEntry) => matchFn(oldEntry, entry)))) {
      newEntries.unshift(entry)
    }
  }

  const newIndex = '\n' + newEntries.map((entry) => {
    const stringified = JSON.stringify(entry)
    const hash = hashEntry(stringified)
    return `${hash}\t${stringified}`
  }).join('\n')

  const setup = async () => {
    const target = uniqueFilename(path.join(cache, 'tmp'), opts.tmpPrefix)
    await mkdir(path.dirname(target), { recursive: true })
    return {
      target,
      moved: false,
    }
  }

  const teardown = async (tmp) => {
    if (!tmp.moved) {
      return rm(tmp.target, { recursive: true, force: true })
    }
  }

  const write = async (tmp) => {
    await writeFile(tmp.target, newIndex, { flag: 'wx' })
    await mkdir(path.dirname(bucket), { recursive: true })
    // we use @npmcli/move-file directly here because we
    // want to overwrite the existing file
    await moveFile(tmp.target, bucket)
    tmp.moved = true
  }

  // write the file atomically
  const tmp = await setup()
  try {
    await write(tmp)
  } finally {
    await teardown(tmp)
  }

  // we reverse the list we generated such that the newest
  // entries come first in order to make looping through them easier
  // the true passed to formatEntry tells it to keep null
  // integrity values, if they made it this far it's because
  // validateEntry returned true, and as such we should return it
  return newEntries.reverse().map((entry) => formatEntry(cache, entry, true))
}

module.exports.insert = insert

async function insert (cache, key, integrity, opts = {}) {
  const { metadata, size, time } = opts
  const bucket = bucketPath(cache, key)
  const entry = {
    key,
    integrity: integrity && ssri.stringify(integrity),
    time: time || Date.now(),
    size,
    metadata,
  }
  try {
    await mkdir(path.dirname(bucket), { recursive: true })
    const stringified = JSON.stringify(entry)
    // NOTE - Cleverness ahoy!
    //
    // This works because it's tremendously unlikely for an entry to corrupt
    // another while still preserving the string length of the JSON in
    // question. So, we just slap the length in there and verify it on read.
    //
    // Thanks to @isaacs for the whiteboarding session that ended up with
    // this.
    await appendFile(bucket, `\n${hashEntry(stringified)}\t${stringified}`)
  } catch (err) {
    if (err.code === 'ENOENT') {
      return undefined
    }

    throw err
  }
  return formatEntry(cache, entry)
}

module.exports.find = find

async function find (cache, key) {
  const bucket = bucketPath(cache, key)
  try {
    const entries = await bucketEntries(bucket)
    return entries.reduce((latest, next) => {
      if (next && next.key === key) {
        return formatEntry(cache, next)
      } else {
        return latest
      }
    }, null)
  } catch (err) {
    if (err.code === 'ENOENT') {
      return null
    } else {
      throw err
    }
  }
}

module.exports.delete = del

function del (cache, key, opts = {}) {
  if (!opts.removeFully) {
    return insert(cache, key, null, opts)
  }

  const bucket = bucketPath(cache, key)
  return rm(bucket, { recursive: true, force: true })
}

module.exports.lsStream = lsStream

function lsStream (cache) {
  const indexDir = bucketDir(cache)
  const stream = new Minipass({ objectMode: true })

  // Set all this up to run on the stream and then just return the stream
  Promise.resolve().then(async () => {
    const { default: pMap } = await import('p-map')
    const buckets = await readdirOrEmpty(indexDir)
    await pMap(buckets, async (bucket) => {
      const bucketPath = path.join(indexDir, bucket)
      const subbuckets = await readdirOrEmpty(bucketPath)
      await pMap(subbuckets, async (subbucket) => {
        const subbucketPath = path.join(bucketPath, subbucket)

        // "/cachename/<bucket 0xFF>/<bucket 0xFF>./*"
        const subbucketEntries = await readdirOrEmpty(subbucketPath)
        await pMap(subbucketEntries, async (entry) => {
          const entryPath = path.join(subbucketPath, entry)
          try {
            const entries = await bucketEntries(entryPath)
            // using a Map here prevents duplicate keys from showing up
            // twice, I guess?
            const reduced = entries.reduce((acc, entry) => {
              acc.set(entry.key, entry)
              return acc
            }, new Map())
            // reduced is a map of key => entry
            for (const entry of reduced.values()) {
              const formatted = formatEntry(cache, entry)
              if (formatted) {
                stream.write(formatted)
              }
            }
          } catch (err) {
            if (err.code === 'ENOENT') {
              return undefined
            }
            throw err
          }
        },
        { concurrency: lsStreamConcurrency })
      },
      { concurrency: lsStreamConcurrency })
    },
    { concurrency: lsStreamConcurrency })
    stream.end()
    return stream
  }).catch(err => stream.emit('error', err))

  return stream
}

module.exports.ls = ls

async function ls (cache) {
  const entries = await lsStream(cache).collect()
  return entries.reduce((acc, xs) => {
    acc[xs.key] = xs
    return acc
  }, {})
}

module.exports.bucketEntries = bucketEntries

async function bucketEntries (bucket, filter) {
  const data = await readFile(bucket, 'utf8')
  return _bucketEntries(data, filter)
}

function _bucketEntries (data) {
  const entries = []
  data.split('\n').forEach((entry) => {
    if (!entry) {
      return
    }

    const pieces = entry.split('\t')
    if (!pieces[1] || hashEntry(pieces[1]) !== pieces[0]) {
      // Hash is no good! Corruption or malice? Doesn't matter!
      // EJECT EJECT
      return
    }
    let obj
    try {
      obj = JSON.parse(pieces[1])
    } catch (_) {
      // eslint-ignore-next-line no-empty-block
    }
    // coverage disabled here, no need to test with an entry that parses to something falsey
    // istanbul ignore else
    if (obj) {
      entries.push(obj)
    }
  })
  return entries
}

module.exports.bucketDir = bucketDir

function bucketDir (cache) {
  return path.join(cache, `index-v${indexV}`)
}

module.exports.bucketPath = bucketPath

function bucketPath (cache, key) {
  const hashed = hashKey(key)
  return path.join.apply(
    path,
    [bucketDir(cache)].concat(hashToSegments(hashed))
  )
}

module.exports.hashKey = hashKey

function hashKey (key) {
  return hash(key, 'sha256')
}

module.exports.hashEntry = hashEntry

function hashEntry (str) {
  return hash(str, 'sha1')
}

function hash (str, digest) {
  return crypto
    .createHash(digest)
    .update(str)
    .digest('hex')
}

function formatEntry (cache, entry, keepAll) {
  // Treat null digests as deletions. They'll shadow any previous entries.
  if (!entry.integrity && !keepAll) {
    return null
  }

  return {
    key: entry.key,
    integrity: entry.integrity,
    path: entry.integrity ? contentPath(cache, entry.integrity) : undefined,
    size: entry.size,
    time: entry.time,
    metadata: entry.metadata,
  }
}

function readdirOrEmpty (dir) {
  return readdir(dir).catch((err) => {
    if (err.code === 'ENOENT' || err.code === 'ENOTDIR') {
      return []
    }

    throw err
  })
}

node_modules/cacache/lib/get.js (generated, vendored)

@@ -0,0 +1,170 @@
'use strict'
const Collect = require('minipass-collect')
const { Minipass } = require('minipass')
const Pipeline = require('minipass-pipeline')
const index = require('./entry-index')
const memo = require('./memoization')
const read = require('./content/read')
async function getData (cache, key, opts = {}) {
  const { integrity, memoize, size } = opts
  const memoized = memo.get(cache, key, opts)
  if (memoized && memoize !== false) {
    return {
      metadata: memoized.entry.metadata,
      data: memoized.data,
      integrity: memoized.entry.integrity,
      size: memoized.entry.size,
    }
  }

  const entry = await index.find(cache, key, opts)
  if (!entry) {
    throw new index.NotFoundError(cache, key)
  }
  const data = await read(cache, entry.integrity, { integrity, size })
  if (memoize) {
    memo.put(cache, entry, data, opts)
  }

  return {
    data,
    metadata: entry.metadata,
    size: entry.size,
    integrity: entry.integrity,
  }
}
module.exports = getData

async function getDataByDigest (cache, key, opts = {}) {
  const { integrity, memoize, size } = opts
  const memoized = memo.get.byDigest(cache, key, opts)
  if (memoized && memoize !== false) {
    return memoized
  }

  const res = await read(cache, key, { integrity, size })
  if (memoize) {
    memo.put.byDigest(cache, key, res, opts)
  }
  return res
}
module.exports.byDigest = getDataByDigest

const getMemoizedStream = (memoized) => {
  const stream = new Minipass()
  stream.on('newListener', function (ev, cb) {
    ev === 'metadata' && cb(memoized.entry.metadata)
    ev === 'integrity' && cb(memoized.entry.integrity)
    ev === 'size' && cb(memoized.entry.size)
  })
  stream.end(memoized.data)
  return stream
}

function getStream (cache, key, opts = {}) {
  const { memoize, size } = opts
  const memoized = memo.get(cache, key, opts)
  if (memoized && memoize !== false) {
    return getMemoizedStream(memoized)
  }

  const stream = new Pipeline()
  // Set all this up to run on the stream and then just return the stream
  Promise.resolve().then(async () => {
    const entry = await index.find(cache, key)
    if (!entry) {
      throw new index.NotFoundError(cache, key)
    }

    stream.emit('metadata', entry.metadata)
    stream.emit('integrity', entry.integrity)
    stream.emit('size', entry.size)
    stream.on('newListener', function (ev, cb) {
      ev === 'metadata' && cb(entry.metadata)
      ev === 'integrity' && cb(entry.integrity)
      ev === 'size' && cb(entry.size)
    })

    const src = read.readStream(
      cache,
      entry.integrity,
      { ...opts, size: typeof size !== 'number' ? entry.size : size }
    )

    if (memoize) {
      const memoStream = new Collect.PassThrough()
      memoStream.on('collect', data => memo.put(cache, entry, data, opts))
      stream.unshift(memoStream)
    }
    stream.unshift(src)
    return stream
  }).catch((err) => stream.emit('error', err))

  return stream
}

module.exports.stream = getStream

function getStreamDigest (cache, integrity, opts = {}) {
  const { memoize } = opts
  const memoized = memo.get.byDigest(cache, integrity, opts)
  if (memoized && memoize !== false) {
    const stream = new Minipass()
    stream.end(memoized)
    return stream
  } else {
    const stream = read.readStream(cache, integrity, opts)
    if (!memoize) {
      return stream
    }

    const memoStream = new Collect.PassThrough()
    memoStream.on('collect', data => memo.put.byDigest(
      cache,
      integrity,
      data,
      opts
    ))
    return new Pipeline(stream, memoStream)
  }
}

module.exports.stream.byDigest = getStreamDigest

function info (cache, key, opts = {}) {
  const { memoize } = opts
  const memoized = memo.get(cache, key, opts)
  if (memoized && memoize !== false) {
    return Promise.resolve(memoized.entry)
  } else {
    return index.find(cache, key)
  }
}
module.exports.info = info

async function copy (cache, key, dest, opts = {}) {
  const entry = await index.find(cache, key, opts)
  if (!entry) {
    throw new index.NotFoundError(cache, key)
  }
  await read.copy(cache, entry.integrity, dest, opts)
  return {
    metadata: entry.metadata,
    size: entry.size,
    integrity: entry.integrity,
  }
}

module.exports.copy = copy

async function copyByDigest (cache, key, dest, opts = {}) {
  await read.copy(cache, key, dest, opts)
  return key
}

module.exports.copy.byDigest = copyByDigest

module.exports.hasContent = read.hasContent

node_modules/cacache/lib/index.js (generated, vendored)

@@ -0,0 +1,42 @@
'use strict'
const get = require('./get.js')
const put = require('./put.js')
const rm = require('./rm.js')
const verify = require('./verify.js')
const { clearMemoized } = require('./memoization.js')
const tmp = require('./util/tmp.js')
const index = require('./entry-index.js')
module.exports.index = {}
module.exports.index.compact = index.compact
module.exports.index.insert = index.insert
module.exports.ls = index.ls
module.exports.ls.stream = index.lsStream
module.exports.get = get
module.exports.get.byDigest = get.byDigest
module.exports.get.stream = get.stream
module.exports.get.stream.byDigest = get.stream.byDigest
module.exports.get.copy = get.copy
module.exports.get.copy.byDigest = get.copy.byDigest
module.exports.get.info = get.info
module.exports.get.hasContent = get.hasContent
module.exports.put = put
module.exports.put.stream = put.stream
module.exports.rm = rm.entry
module.exports.rm.all = rm.all
module.exports.rm.entry = module.exports.rm
module.exports.rm.content = rm.content
module.exports.clearMemoized = clearMemoized
module.exports.tmp = {}
module.exports.tmp.mkdir = tmp.mkdir
module.exports.tmp.withTmp = tmp.withTmp
module.exports.verify = verify
module.exports.verify.lastRun = verify.lastRun

node_modules/cacache/lib/memoization.js (generated, vendored)

@@ -0,0 +1,72 @@
'use strict'
const { LRUCache } = require('lru-cache')
const MEMOIZED = new LRUCache({
  max: 500,
  maxSize: 50 * 1024 * 1024, // 50MB
  ttl: 3 * 60 * 1000, // 3 minutes
  sizeCalculation: (entry, key) => key.startsWith('key:') ? entry.data.length : entry.length,
})

module.exports.clearMemoized = clearMemoized

function clearMemoized () {
  const old = {}
  MEMOIZED.forEach((v, k) => {
    old[k] = v
  })
  MEMOIZED.clear()
  return old
}

module.exports.put = put

function put (cache, entry, data, opts) {
  pickMem(opts).set(`key:${cache}:${entry.key}`, { entry, data })
  putDigest(cache, entry.integrity, data, opts)
}

module.exports.put.byDigest = putDigest

function putDigest (cache, integrity, data, opts) {
  pickMem(opts).set(`digest:${cache}:${integrity}`, data)
}

module.exports.get = get

function get (cache, key, opts) {
  return pickMem(opts).get(`key:${cache}:${key}`)
}

module.exports.get.byDigest = getDigest

function getDigest (cache, integrity, opts) {
  return pickMem(opts).get(`digest:${cache}:${integrity}`)
}

class ObjProxy {
  constructor (obj) {
    this.obj = obj
  }

  get (key) {
    return this.obj[key]
  }

  set (key, val) {
    this.obj[key] = val
  }
}

function pickMem (opts) {
  if (!opts || !opts.memoize) {
    return MEMOIZED
  } else if (opts.memoize.get && opts.memoize.set) {
    return opts.memoize
  } else if (typeof opts.memoize === 'object') {
    return new ObjProxy(opts.memoize)
  } else {
    return MEMOIZED
  }
}

node_modules/cacache/lib/put.js (generated, vendored)

@@ -0,0 +1,80 @@
'use strict'
const index = require('./entry-index')
const memo = require('./memoization')
const write = require('./content/write')
const Flush = require('minipass-flush')
const { PassThrough } = require('minipass-collect')
const Pipeline = require('minipass-pipeline')
const putOpts = (opts) => ({
  algorithms: ['sha512'],
  ...opts,
})

module.exports = putData

async function putData (cache, key, data, opts = {}) {
  const { memoize } = opts
  opts = putOpts(opts)
  const res = await write(cache, data, opts)
  const entry = await index.insert(cache, key, res.integrity, { ...opts, size: res.size })
  if (memoize) {
    memo.put(cache, entry, data, opts)
  }

  return res.integrity
}

module.exports.stream = putStream

function putStream (cache, key, opts = {}) {
  const { memoize } = opts
  opts = putOpts(opts)
  let integrity
  let size
  let error

  let memoData
  const pipeline = new Pipeline()
  // first item in the pipeline is the memoizer, because we need
  // that to end first and get the collected data.
  if (memoize) {
    const memoizer = new PassThrough().on('collect', data => {
      memoData = data
    })
    pipeline.push(memoizer)
  }

  // contentStream is a write-only, not a passthrough
  // no data comes out of it.
  const contentStream = write.stream(cache, opts)
    .on('integrity', (int) => {
      integrity = int
    })
    .on('size', (s) => {
      size = s
    })
    .on('error', (err) => {
      error = err
    })

  pipeline.push(contentStream)

  // last but not least, we write the index and emit hash and size,
  // and memoize if we're doing that
  pipeline.push(new Flush({
    async flush () {
      if (!error) {
        const entry = await index.insert(cache, key, integrity, { ...opts, size })
        if (memoize && memoData) {
          memo.put(cache, entry, memoData, opts)
        }
        pipeline.emit('integrity', integrity)
        pipeline.emit('size', size)
      }
    },
  }))

  return pipeline
}

node_modules/cacache/lib/rm.js (generated, vendored)

@@ -0,0 +1,31 @@
'use strict'
const { rm } = require('fs/promises')
const glob = require('./util/glob.js')
const index = require('./entry-index')
const memo = require('./memoization')
const path = require('path')
const rmContent = require('./content/rm')
module.exports = entry
module.exports.entry = entry
function entry (cache, key, opts) {
  memo.clearMemoized()
  return index.delete(cache, key, opts)
}

module.exports.content = content

function content (cache, integrity) {
  memo.clearMemoized()
  return rmContent(cache, integrity)
}

module.exports.all = all

async function all (cache) {
  memo.clearMemoized()
  const paths = await glob(path.join(cache, '*(content-*|index-*)'), { silent: true, nosort: true })
  return Promise.all(paths.map((p) => rm(p, { recursive: true, force: true })))
}

node_modules/cacache/lib/util/glob.js (generated, vendored)

@@ -0,0 +1,7 @@
'use strict'
const { glob } = require('glob')
const path = require('path')
const globify = (pattern) => pattern.split(path.win32.sep).join(path.posix.sep)
module.exports = (path, options) => glob(globify(path), options)

node_modules/cacache/lib/util/hash-to-segments.js (generated, vendored)

@@ -0,0 +1,7 @@
'use strict'
module.exports = hashToSegments
function hashToSegments (hash) {
  return [hash.slice(0, 2), hash.slice(2, 4), hash.slice(4)]
}

node_modules/cacache/lib/util/tmp.js (generated, vendored)

@@ -0,0 +1,26 @@
'use strict'
const { withTempDir } = require('@npmcli/fs')
const fs = require('fs/promises')
const path = require('path')
module.exports.mkdir = mktmpdir
async function mktmpdir (cache, opts = {}) {
  const { tmpPrefix } = opts
  const tmpDir = path.join(cache, 'tmp')
  await fs.mkdir(tmpDir, { recursive: true, owner: 'inherit' })
  // do not use path.join(), it drops the trailing / if tmpPrefix is unset
  const target = `${tmpDir}${path.sep}${tmpPrefix || ''}`
  return fs.mkdtemp(target, { owner: 'inherit' })
}

module.exports.withTmp = withTmp

function withTmp (cache, opts, cb) {
  if (!cb) {
    cb = opts
    opts = {}
  }

  return withTempDir(path.join(cache, 'tmp'), cb, opts)
}

node_modules/cacache/lib/verify.js (generated, vendored)

@@ -0,0 +1,258 @@
'use strict'
const {
mkdir,
readFile,
rm,
stat,
truncate,
writeFile,
} = require('fs/promises')
const contentPath = require('./content/path')
const fsm = require('fs-minipass')
const glob = require('./util/glob.js')
const index = require('./entry-index')
const path = require('path')
const ssri = require('ssri')
const hasOwnProperty = (obj, key) =>
Object.prototype.hasOwnProperty.call(obj, key)
const verifyOpts = (opts) => ({
concurrency: 20,
log: { silly () {} },
...opts,
})
module.exports = verify
async function verify (cache, opts) {
opts = verifyOpts(opts)
opts.log.silly('verify', 'verifying cache at', cache)
const steps = [
markStartTime,
fixPerms,
garbageCollect,
rebuildIndex,
cleanTmp,
writeVerifile,
markEndTime,
]
const stats = {}
for (const step of steps) {
const label = step.name
const start = new Date()
const s = await step(cache, opts)
if (s) {
Object.keys(s).forEach((k) => {
stats[k] = s[k]
})
}
const end = new Date()
if (!stats.runTime) {
stats.runTime = {}
}
stats.runTime[label] = end - start
}
stats.runTime.total = stats.endTime - stats.startTime
opts.log.silly(
'verify',
'verification finished for',
cache,
'in',
`${stats.runTime.total}ms`
)
return stats
}
async function markStartTime () {
return { startTime: new Date() }
}
async function markEndTime () {
return { endTime: new Date() }
}
async function fixPerms (cache, opts) {
opts.log.silly('verify', 'fixing cache permissions')
await mkdir(cache, { recursive: true })
return null
}
// Implements a naive mark-and-sweep tracing garbage collector.
//
// The algorithm is basically as follows:
// 1. Read (and filter) all index entries ("pointers")
// 2. Mark each integrity value as "live"
// 3. Read entire filesystem tree in `content-vX/` dir
// 4. If content is live, verify its checksum and delete it if it fails
// 5. If content is not marked as live, rm it.
//
async function garbageCollect (cache, opts) {
opts.log.silly('verify', 'garbage collecting content')
const { default: pMap } = await import('p-map')
const indexStream = index.lsStream(cache)
const liveContent = new Set()
indexStream.on('data', (entry) => {
if (opts.filter && !opts.filter(entry)) {
return
}
// integrity is stringified, re-parse it so we can get each hash
const integrity = ssri.parse(entry.integrity)
for (const algo in integrity) {
liveContent.add(integrity[algo].toString())
}
})
await new Promise((resolve, reject) => {
indexStream.on('end', resolve).on('error', reject)
})
const contentDir = contentPath.contentDir(cache)
const files = await glob(path.join(contentDir, '**'), {
follow: false,
nodir: true,
nosort: true,
})
const stats = {
verifiedContent: 0,
reclaimedCount: 0,
reclaimedSize: 0,
badContentCount: 0,
keptSize: 0,
}
await pMap(
files,
async (f) => {
const split = f.split(/[/\\]/)
const digest = split.slice(split.length - 3).join('')
const algo = split[split.length - 4]
const integrity = ssri.fromHex(digest, algo)
if (liveContent.has(integrity.toString())) {
const info = await verifyContent(f, integrity)
if (!info.valid) {
stats.reclaimedCount++
stats.badContentCount++
stats.reclaimedSize += info.size
} else {
stats.verifiedContent++
stats.keptSize += info.size
}
} else {
// No entries refer to this content. We can delete.
stats.reclaimedCount++
const s = await stat(f)
await rm(f, { recursive: true, force: true })
stats.reclaimedSize += s.size
}
return stats
},
{ concurrency: opts.concurrency }
)
return stats
}
async function verifyContent (filepath, sri) {
const contentInfo = {}
try {
const { size } = await stat(filepath)
contentInfo.size = size
contentInfo.valid = true
await ssri.checkStream(new fsm.ReadStream(filepath), sri)
} catch (err) {
if (err.code === 'ENOENT') {
return { size: 0, valid: false }
}
if (err.code !== 'EINTEGRITY') {
throw err
}
await rm(filepath, { recursive: true, force: true })
contentInfo.valid = false
}
return contentInfo
}
async function rebuildIndex (cache, opts) {
opts.log.silly('verify', 'rebuilding index')
const { default: pMap } = await import('p-map')
const entries = await index.ls(cache)
const stats = {
missingContent: 0,
rejectedEntries: 0,
totalEntries: 0,
}
const buckets = {}
for (const k in entries) {
/* istanbul ignore else */
if (hasOwnProperty(entries, k)) {
const hashed = index.hashKey(k)
const entry = entries[k]
const excluded = opts.filter && !opts.filter(entry)
excluded && stats.rejectedEntries++
if (buckets[hashed] && !excluded) {
buckets[hashed].push(entry)
} else if (buckets[hashed] && excluded) {
// skip
} else if (excluded) {
buckets[hashed] = []
buckets[hashed]._path = index.bucketPath(cache, k)
} else {
buckets[hashed] = [entry]
buckets[hashed]._path = index.bucketPath(cache, k)
}
}
}
await pMap(
Object.keys(buckets),
(key) => {
return rebuildBucket(cache, buckets[key], stats, opts)
},
{ concurrency: opts.concurrency }
)
return stats
}
async function rebuildBucket (cache, bucket, stats) {
await truncate(bucket._path)
// This needs to be serialized because cacache explicitly
// lets very racy bucket conflicts clobber each other.
for (const entry of bucket) {
const content = contentPath(cache, entry.integrity)
try {
await stat(content)
await index.insert(cache, entry.key, entry.integrity, {
metadata: entry.metadata,
size: entry.size,
time: entry.time,
})
stats.totalEntries++
} catch (err) {
if (err.code === 'ENOENT') {
stats.rejectedEntries++
stats.missingContent++
} else {
throw err
}
}
}
}
function cleanTmp (cache, opts) {
opts.log.silly('verify', 'cleaning tmp directory')
return rm(path.join(cache, 'tmp'), { recursive: true, force: true })
}
async function writeVerifile (cache, opts) {
const verifile = path.join(cache, '_lastverified')
opts.log.silly('verify', 'writing verifile to ' + verifile)
return writeFile(verifile, `${Date.now()}`)
}
module.exports.lastRun = lastRun
async function lastRun (cache) {
const data = await readFile(path.join(cache, '_lastverified'), { encoding: 'utf8' })
return new Date(+data)
}
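
A hedged sketch of driving this module through the public `cacache.verify` API: it runs the `fixPerms` → `garbageCollect` → `rebuildIndex` → `cleanTmp` pipeline above and returns the merged stats object, while `verify.lastRun` reads back the `_lastverified` timestamp written by `writeVerifile`. The cache path and the `temp:` key prefix in the filter are illustrative.

```js
const cacache = require('cacache')

async function audit (cachePath) {
  const stats = await cacache.verify(cachePath, {
    // entries rejected by the filter are dropped from the index and
    // their content becomes eligible for garbage collection
    filter: (entry) => !entry.key.startsWith('temp:'),
  })
  console.log(stats.verifiedContent, 'items verified')
  console.log(stats.reclaimedSize, 'bytes reclaimed')
  console.log('last verified at', await cacache.verify.lastRun(cachePath))
}

audit('/tmp/my-toy-cache')
```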

63
node_modules/cacache/node_modules/chownr/LICENSE.md generated vendored Normal file
View File

@@ -0,0 +1,63 @@
All packages under `src/` are licensed according to the terms in
their respective `LICENSE` or `LICENSE.md` files.
The remainder of this project is licensed under the Blue Oak
Model License, as follows:
-----
# Blue Oak Model License
Version 1.0.0
## Purpose
This license gives everyone as much permission to work with
this software as possible, while protecting contributors
from liability.
## Acceptance
In order to receive this license, you must agree to its
rules. The rules of this license are both obligations
under that agreement and conditions to your license.
You must not do anything with this software that triggers
a rule that you cannot or will not follow.
## Copyright
Each contributor licenses you to do everything with this
software that would otherwise infringe that contributor's
copyright in it.
## Notices
You must ensure that everyone who gets a copy of
any part of this software from you, with or without
changes, also gets the text of this license or a link to
<https://blueoakcouncil.org/license/1.0.0>.
## Excuse
If anyone notifies you in writing that you have not
complied with [Notices](#notices), you can keep your
license by taking all practical steps to comply within 30
days after the notice. If you do not do so, your license
ends immediately.
## Patent
Each contributor licenses you to do everything with this
software that would otherwise infringe any patent claims
they can license or become able to license.
## Reliability
No contributor can revoke this license.
## No Liability
***As far as the law allows, this software comes as is,
without any warranty or condition, and no contributor
will be liable to anyone for any damages related to this
software or this license, under any kind of legal claim.***

3
node_modules/cacache/node_modules/chownr/README.md generated vendored Normal file
View File

@@ -0,0 +1,3 @@
Like `chown -R`.
Takes the same arguments as `fs.chown()`
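
A brief usage sketch matching the signatures in the type declarations that follow; the path and ids are illustrative:

```js
import { chownr, chownrSync } from 'chownr'

// recursively hand /data/app over to uid 1000 / gid 1000
chownr('/data/app', 1000, 1000, (er) => {
  if (er) throw er
})

// or synchronously
chownrSync('/data/app', 1000, 1000)
```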

View File

@@ -0,0 +1,3 @@
export declare const chownr: (p: string, uid: number, gid: number, cb: (er?: unknown) => any) => void;
export declare const chownrSync: (p: string, uid: number, gid: number) => void;
//# sourceMappingURL=index.d.ts.map

View File

@@ -0,0 +1 @@
{"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../../src/index.ts"],"names":[],"mappings":"AA0CA,eAAO,MAAM,MAAM,MACd,MAAM,OACJ,MAAM,OACN,MAAM,YACD,OAAO,KAAK,GAAG,SA0B1B,CAAA;AAcD,eAAO,MAAM,UAAU,MAAO,MAAM,OAAO,MAAM,OAAO,MAAM,SAiB7D,CAAA"}

View File

@@ -0,0 +1,93 @@
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.chownrSync = exports.chownr = void 0;
const node_fs_1 = __importDefault(require("node:fs"));
const node_path_1 = __importDefault(require("node:path"));
const lchownSync = (path, uid, gid) => {
try {
return node_fs_1.default.lchownSync(path, uid, gid);
}
catch (er) {
if (er?.code !== 'ENOENT')
throw er;
}
};
const chown = (cpath, uid, gid, cb) => {
node_fs_1.default.lchown(cpath, uid, gid, er => {
// Skip ENOENT error
cb(er && er?.code !== 'ENOENT' ? er : null);
});
};
const chownrKid = (p, child, uid, gid, cb) => {
if (child.isDirectory()) {
(0, exports.chownr)(node_path_1.default.resolve(p, child.name), uid, gid, (er) => {
if (er)
return cb(er);
const cpath = node_path_1.default.resolve(p, child.name);
chown(cpath, uid, gid, cb);
});
}
else {
const cpath = node_path_1.default.resolve(p, child.name);
chown(cpath, uid, gid, cb);
}
};
const chownr = (p, uid, gid, cb) => {
node_fs_1.default.readdir(p, { withFileTypes: true }, (er, children) => {
// any error other than ENOTDIR or ENOTSUP means it's not readable,
// or doesn't exist. give up.
if (er) {
if (er.code === 'ENOENT')
return cb();
else if (er.code !== 'ENOTDIR' && er.code !== 'ENOTSUP')
return cb(er);
}
if (er || !children.length)
return chown(p, uid, gid, cb);
let len = children.length;
let errState = null;
const then = (er) => {
/* c8 ignore start */
if (errState)
return;
/* c8 ignore stop */
if (er)
return cb((errState = er));
if (--len === 0)
return chown(p, uid, gid, cb);
};
for (const child of children) {
chownrKid(p, child, uid, gid, then);
}
});
};
exports.chownr = chownr;
const chownrKidSync = (p, child, uid, gid) => {
if (child.isDirectory())
(0, exports.chownrSync)(node_path_1.default.resolve(p, child.name), uid, gid);
lchownSync(node_path_1.default.resolve(p, child.name), uid, gid);
};
const chownrSync = (p, uid, gid) => {
let children;
try {
children = node_fs_1.default.readdirSync(p, { withFileTypes: true });
}
catch (er) {
const e = er;
if (e?.code === 'ENOENT')
return;
else if (e?.code === 'ENOTDIR' || e?.code === 'ENOTSUP')
return lchownSync(p, uid, gid);
else
throw e;
}
for (const child of children) {
chownrKidSync(p, child, uid, gid);
}
return lchownSync(p, uid, gid);
};
exports.chownrSync = chownrSync;
//# sourceMappingURL=index.js.map

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,3 @@
{
"type": "commonjs"
}

View File

@@ -0,0 +1,3 @@
export declare const chownr: (p: string, uid: number, gid: number, cb: (er?: unknown) => any) => void;
export declare const chownrSync: (p: string, uid: number, gid: number) => void;
//# sourceMappingURL=index.d.ts.map

View File

@@ -0,0 +1 @@
{"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../../src/index.ts"],"names":[],"mappings":"AA0CA,eAAO,MAAM,MAAM,MACd,MAAM,OACJ,MAAM,OACN,MAAM,YACD,OAAO,KAAK,GAAG,SA0B1B,CAAA;AAcD,eAAO,MAAM,UAAU,MAAO,MAAM,OAAO,MAAM,OAAO,MAAM,SAiB7D,CAAA"}

View File

@@ -0,0 +1,85 @@
import fs from 'node:fs';
import path from 'node:path';
const lchownSync = (path, uid, gid) => {
try {
return fs.lchownSync(path, uid, gid);
}
catch (er) {
if (er?.code !== 'ENOENT')
throw er;
}
};
const chown = (cpath, uid, gid, cb) => {
fs.lchown(cpath, uid, gid, er => {
// Skip ENOENT error
cb(er && er?.code !== 'ENOENT' ? er : null);
});
};
const chownrKid = (p, child, uid, gid, cb) => {
if (child.isDirectory()) {
chownr(path.resolve(p, child.name), uid, gid, (er) => {
if (er)
return cb(er);
const cpath = path.resolve(p, child.name);
chown(cpath, uid, gid, cb);
});
}
else {
const cpath = path.resolve(p, child.name);
chown(cpath, uid, gid, cb);
}
};
export const chownr = (p, uid, gid, cb) => {
fs.readdir(p, { withFileTypes: true }, (er, children) => {
// any error other than ENOTDIR or ENOTSUP means it's not readable,
// or doesn't exist. give up.
if (er) {
if (er.code === 'ENOENT')
return cb();
else if (er.code !== 'ENOTDIR' && er.code !== 'ENOTSUP')
return cb(er);
}
if (er || !children.length)
return chown(p, uid, gid, cb);
let len = children.length;
let errState = null;
const then = (er) => {
/* c8 ignore start */
if (errState)
return;
/* c8 ignore stop */
if (er)
return cb((errState = er));
if (--len === 0)
return chown(p, uid, gid, cb);
};
for (const child of children) {
chownrKid(p, child, uid, gid, then);
}
});
};
const chownrKidSync = (p, child, uid, gid) => {
if (child.isDirectory())
chownrSync(path.resolve(p, child.name), uid, gid);
lchownSync(path.resolve(p, child.name), uid, gid);
};
export const chownrSync = (p, uid, gid) => {
let children;
try {
children = fs.readdirSync(p, { withFileTypes: true });
}
catch (er) {
const e = er;
if (e?.code === 'ENOENT')
return;
else if (e?.code === 'ENOTDIR' || e?.code === 'ENOTSUP')
return lchownSync(p, uid, gid);
else
throw e;
}
for (const child of children) {
chownrKidSync(p, child, uid, gid);
}
return lchownSync(p, uid, gid);
};
//# sourceMappingURL=index.js.map

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,3 @@
{
"type": "module"
}

69
node_modules/cacache/node_modules/chownr/package.json generated vendored Normal file
View File

@@ -0,0 +1,69 @@
{
"author": "Isaac Z. Schlueter <i@izs.me> (http://blog.izs.me/)",
"name": "chownr",
"description": "like `chown -R`",
"version": "3.0.0",
"repository": {
"type": "git",
"url": "git://github.com/isaacs/chownr.git"
},
"files": [
"dist"
],
"devDependencies": {
"@types/node": "^20.12.5",
"mkdirp": "^3.0.1",
"prettier": "^3.2.5",
"rimraf": "^5.0.5",
"tap": "^18.7.2",
"tshy": "^1.13.1",
"typedoc": "^0.25.12"
},
"scripts": {
"prepare": "tshy",
"pretest": "npm run prepare",
"test": "tap",
"preversion": "npm test",
"postversion": "npm publish",
"prepublishOnly": "git push origin --follow-tags",
"format": "prettier --write . --loglevel warn",
"typedoc": "typedoc --tsconfig .tshy/esm.json ./src/*.ts"
},
"license": "BlueOak-1.0.0",
"engines": {
"node": ">=18"
},
"tshy": {
"exports": {
"./package.json": "./package.json",
".": "./src/index.ts"
}
},
"exports": {
"./package.json": "./package.json",
".": {
"import": {
"types": "./dist/esm/index.d.ts",
"default": "./dist/esm/index.js"
},
"require": {
"types": "./dist/commonjs/index.d.ts",
"default": "./dist/commonjs/index.js"
}
}
},
"main": "./dist/commonjs/index.js",
"types": "./dist/commonjs/index.d.ts",
"type": "module",
"prettier": {
"semi": false,
"printWidth": 75,
"tabWidth": 2,
"useTabs": false,
"singleQuote": true,
"jsxSingleQuote": false,
"bracketSameLine": true,
"arrowParens": "avoid",
"endOfLine": "lf"
}
}

15
node_modules/cacache/node_modules/lru-cache/LICENSE generated vendored Normal file
View File

@@ -0,0 +1,15 @@
The ISC License
Copyright (c) 2010-2023 Isaac Z. Schlueter and Contributors
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR
IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

331
node_modules/cacache/node_modules/lru-cache/README.md generated vendored Normal file
View File

@@ -0,0 +1,331 @@
# lru-cache
A cache object that deletes the least-recently-used items.
Specify a max number of the most recently used items that you
want to keep, and this cache will keep that many of the most
recently accessed items.
This is not primarily a TTL cache, and does not make strong TTL
guarantees. There is no preemptive pruning of expired items by
default, but you _may_ set a TTL on the cache or on a single
`set`. If you do so, it will treat expired items as missing, and
delete them when fetched. If you are more interested in TTL
caching than LRU caching, check out
[@isaacs/ttlcache](http://npm.im/@isaacs/ttlcache).
As of version 7, this is one of the most performant LRU
implementations available in JavaScript, and supports a wide
diversity of use cases. However, note that using some of the
features will necessarily impact performance, by causing the
cache to have to do more work. See the "Performance" section
below.
## Installation
```bash
npm install lru-cache --save
```
## Usage
```js
// hybrid module, either works
import { LRUCache } from 'lru-cache'
// or:
const { LRUCache } = require('lru-cache')
// or in minified form for web browsers:
import { LRUCache } from 'http://unpkg.com/lru-cache@9/dist/mjs/index.min.mjs'
// At least one of 'max', 'ttl', or 'maxSize' is required, to prevent
// unsafe unbounded storage.
//
// In most cases, it's best to specify a max for performance, so all
// the required memory allocation is done up-front.
//
// All the other options are optional, see the sections below for
// documentation on what each one does. Most of them can be
// overridden for specific items in get()/set()
const options = {
max: 500,
// for use with tracking overall storage size
maxSize: 5000,
sizeCalculation: (value, key) => {
return 1
},
// for use when you need to clean up something when objects
// are evicted from the cache
dispose: (value, key) => {
freeFromMemoryOrWhatever(value)
},
// how long to live in ms
ttl: 1000 * 60 * 5,
// return stale items before removing from cache?
allowStale: false,
updateAgeOnGet: false,
updateAgeOnHas: false,
// async method to use for cache.fetch(), for
// stale-while-revalidate type of behavior
fetchMethod: async (
key,
staleValue,
{ options, signal, context }
) => {},
}
const cache = new LRUCache(options)
cache.set('key', 'value')
cache.get('key') // "value"
// non-string keys ARE fully supported
// but note that it must be THE SAME object, not
// just a JSON-equivalent object.
var someObject = { a: 1 }
cache.set(someObject, 'a value')
// Object keys are not toString()-ed
cache.set('[object Object]', 'a different value')
assert.equal(cache.get(someObject), 'a value')
// A similar object with same keys/values won't work,
// because it's a different object identity
assert.equal(cache.get({ a: 1 }), undefined)
cache.clear() // empty the cache
```
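Building on the `fetchMethod` option above, a minimal sketch of the stale-while-revalidate flow via `cache.fetch()`; the URL is illustrative:

```js
import { LRUCache } from 'lru-cache'

const cache = new LRUCache({
  max: 100,
  ttl: 1000 * 60,
  fetchMethod: async (key, staleValue, { signal }) => {
    // called on a miss, or to revalidate a stale entry
    const res = await fetch(`https://example.com/items/${key}`, { signal })
    return res.json()
  },
})

// resolves from cache while fresh; otherwise runs fetchMethod, and
// concurrent fetches of the same key share one in-flight call
const item = await cache.fetch('item-1')
```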
If you put more stuff in the cache, then less recently used items
will fall out. That's what an LRU cache is.
For full description of the API and all options, please see [the
LRUCache typedocs](https://isaacs.github.io/node-lru-cache/)
## Storage Bounds Safety
This implementation aims to be as flexible as possible, within
the limits of safe memory consumption and optimal performance.
At initial object creation, storage is allocated for `max` items.
If `max` is set to zero, then some performance is lost, and item
count is unbounded. Either `maxSize` or `ttl` _must_ be set if
`max` is not specified.
If `maxSize` is set, then this creates a safe limit on the
maximum storage consumed, but without the performance benefits of
pre-allocation. When `maxSize` is set, every item _must_ provide
a size, either via the `sizeCalculation` method provided to the
constructor, or via a `size` or `sizeCalculation` option provided
to `cache.set()`. The size of every item _must_ be a positive
integer.
If neither `max` nor `maxSize` are set, then `ttl` tracking must
be enabled. Note that, even when tracking item `ttl`, items are
_not_ preemptively deleted when they become stale, unless
`ttlAutopurge` is enabled. Instead, they are only purged the
next time the key is requested. Thus, if `ttlAutopurge`, `max`,
and `maxSize` are all not set, then the cache will potentially
grow unbounded.
In this case, a warning is printed to standard error. Future
versions may require the use of `ttlAutopurge` if `max` and
`maxSize` are not specified.
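Concretely, each of the three bounding modes described above can be satisfied on its own; a sketch of the minimal valid constructions (the sizes and TTL are illustrative):

```js
import { LRUCache } from 'lru-cache'

// bounded by item count: storage pre-allocated up front, fastest
const byCount = new LRUCache({ max: 1000 })

// bounded by total calculated size: every item must be sizable
const bySize = new LRUCache({
  maxSize: 1024 * 1024,
  sizeCalculation: (value) => Buffer.byteLength(JSON.stringify(value)),
})

// bounded by time: set ttlAutopurge so stale items are actually evicted
const byAge = new LRUCache({ ttl: 1000 * 60 * 5, ttlAutopurge: true })
```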
If you truly wish to use a cache that is bound _only_ by TTL
expiration, consider using a `Map` object, and calling
`setTimeout` to delete entries when they expire. It will perform
much better than an LRU cache.
Here is an implementation you may use, under the same
[license](./LICENSE) as this package:
```js
// a storage-unbounded ttl cache that is not an lru-cache
const cache = {
data: new Map(),
timers: new Map(),
set: (k, v, ttl) => {
if (cache.timers.has(k)) {
clearTimeout(cache.timers.get(k))
}
cache.timers.set(
k,
setTimeout(() => cache.delete(k), ttl)
)
cache.data.set(k, v)
},
get: k => cache.data.get(k),
has: k => cache.data.has(k),
delete: k => {
if (cache.timers.has(k)) {
clearTimeout(cache.timers.get(k))
}
cache.timers.delete(k)
return cache.data.delete(k)
},
clear: () => {
cache.data.clear()
for (const v of cache.timers.values()) {
clearTimeout(v)
}
cache.timers.clear()
},
}
```
If that isn't to your liking, check out
[@isaacs/ttlcache](http://npm.im/@isaacs/ttlcache).
## Storing Undefined Values
This cache never stores undefined values, as `undefined` is used
internally in a few places to indicate that a key is not in the
cache.
You may call `cache.set(key, undefined)`, but this is just
an alias for `cache.delete(key)`. Note that this has the effect
that `cache.has(key)` will return _false_ after setting it to
undefined.
```js
cache.set(myKey, undefined)
cache.has(myKey) // false!
```
If you need to track `undefined` values, and still note that the
key is in the cache, an easy workaround is to use a sigil object
of your own.
```js
import { LRUCache } from 'lru-cache'
const undefinedValue = Symbol('undefined')
const cache = new LRUCache(...)
const mySet = (key, value) =>
cache.set(key, value === undefined ? undefinedValue : value)
const myGet = key => {
const v = cache.get(key)
return v === undefinedValue ? undefined : v
}
```
## Performance
As of January 2022, version 7 of this library is one of the most
performant LRU cache implementations in JavaScript.
Benchmarks can be extremely difficult to get right. In
particular, the performance of set/get/delete operations on
objects will vary _wildly_ depending on the type of key used. V8
is highly optimized for objects with keys that are short strings,
especially integer numeric strings. Thus any benchmark which
tests _solely_ using numbers as keys will tend to find that an
object-based approach performs the best.
Note that coercing _anything_ to strings to use as object keys is
unsafe, unless you can be 100% certain that no other type of
value will be used. For example:
```js
const myCache = {}
const set = (k, v) => (myCache[k] = v)
const get = k => myCache[k]
set({}, 'please hang onto this for me')
set('[object Object]', 'oopsie')
```
Also beware of "Just So" stories regarding performance. Garbage
collection of large (especially: deep) object graphs can be
incredibly costly, with several "tipping points" where it
increases exponentially. As a result, putting that off until
later can make it much worse, and less predictable. If a library
performs well, but only in a scenario where the object graph is
kept shallow, then that won't help you if you are using large
objects as keys.
In general, when attempting to use a library to improve
performance (such as a cache like this one), it's best to choose
an option that will perform well in the sorts of scenarios where
you'll actually use it.
This library is optimized for repeated gets and minimizing
eviction time, since that is the expected need of an LRU. Set
operations are somewhat slower on average than a few other
options, in part because of that optimization. It is assumed
that you'll be caching some costly operation, ideally as rarely
as possible, so optimizing set over get would be unwise.
If performance matters to you:
1. If it's at all possible to use small integer values as keys,
and you can guarantee that no other types of values will be
used as keys, then do that, and use a cache such as
[lru-fast](https://npmjs.com/package/lru-fast), or
[mnemonist's
LRUCache](https://yomguithereal.github.io/mnemonist/lru-cache)
which uses an Object as its data store.
2. Failing that, if at all possible, use short non-numeric
strings (ie, less than 256 characters) as your keys, and use
[mnemonist's
LRUCache](https://yomguithereal.github.io/mnemonist/lru-cache).
3. If the types of your keys will be anything else, especially
long strings, strings that look like floats, objects, or some
mix of types, or if you aren't sure, then this library will
work well for you.
If you do not need the features that this library provides
(like asynchronous fetching, a variety of TTL staleness
options, and so on), then [mnemonist's
LRUMap](https://yomguithereal.github.io/mnemonist/lru-map) is
a very good option, and just slightly faster than this module
(since it does considerably less).
4. Do not use a `dispose` function, size tracking, or especially
ttl behavior, unless absolutely needed. These features are
convenient, and necessary in some use cases, and every attempt
has been made to make the performance impact minimal, but it
isn't nothing.
## Breaking Changes in Version 7
This library changed to a different algorithm and internal data
structure in version 7, yielding significantly better
performance, albeit with some subtle changes as a result.
If you were relying on the internals of LRUCache in version 6 or
before, it probably will not work in version 7 and above.
## Breaking Changes in Version 8
- The `fetchContext` option was renamed to `context`, and may no
longer be set on the cache instance itself.
- Rewritten in TypeScript, so pretty much all the types moved
around a lot.
- The AbortController/AbortSignal polyfill was removed. For this
reason, **Node version 16.14.0 or higher is now required**.
- Internal properties were moved to actual private class
properties.
- Keys and values must not be `null` or `undefined`.
- Minified export available at `'lru-cache/min'`, for both CJS
and MJS builds.
## Breaking Changes in Version 9
- Named export only, no default export.
- AbortController polyfill returned, albeit with a warning when
used.
## Breaking Changes in Version 10
- `cache.fetch()` return type is now `Promise<V | undefined>`
instead of `Promise<V | void>`. This is an irrelevant change
practically speaking, but can require changes for TypeScript
users.
For more info, see the [change log](CHANGELOG.md).

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,3 @@
{
"type": "commonjs"
}

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,3 @@
{
"type": "module"
}

View File

@@ -0,0 +1,116 @@
{
"name": "lru-cache",
"publishConfig": {
"tag": "legacy-v10"
},
"description": "A cache object that deletes the least-recently-used items.",
"version": "10.4.3",
"author": "Isaac Z. Schlueter <i@izs.me>",
"keywords": [
"mru",
"lru",
"cache"
],
"sideEffects": false,
"scripts": {
"build": "npm run prepare",
"prepare": "tshy && bash fixup.sh",
"pretest": "npm run prepare",
"presnap": "npm run prepare",
"test": "tap",
"snap": "tap",
"preversion": "npm test",
"postversion": "npm publish",
"prepublishOnly": "git push origin --follow-tags",
"format": "prettier --write .",
"typedoc": "typedoc --tsconfig ./.tshy/esm.json ./src/*.ts",
"benchmark-results-typedoc": "bash scripts/benchmark-results-typedoc.sh",
"prebenchmark": "npm run prepare",
"benchmark": "make -C benchmark",
"preprofile": "npm run prepare",
"profile": "make -C benchmark profile"
},
"main": "./dist/commonjs/index.js",
"types": "./dist/commonjs/index.d.ts",
"tshy": {
"exports": {
".": "./src/index.ts",
"./min": {
"import": {
"types": "./dist/esm/index.d.ts",
"default": "./dist/esm/index.min.js"
},
"require": {
"types": "./dist/commonjs/index.d.ts",
"default": "./dist/commonjs/index.min.js"
}
}
}
},
"repository": {
"type": "git",
"url": "git://github.com/isaacs/node-lru-cache.git"
},
"devDependencies": {
"@types/node": "^20.2.5",
"@types/tap": "^15.0.6",
"benchmark": "^2.1.4",
"esbuild": "^0.17.11",
"eslint-config-prettier": "^8.5.0",
"marked": "^4.2.12",
"mkdirp": "^2.1.5",
"prettier": "^2.6.2",
"tap": "^20.0.3",
"tshy": "^2.0.0",
"tslib": "^2.4.0",
"typedoc": "^0.25.3",
"typescript": "^5.2.2"
},
"license": "ISC",
"files": [
"dist"
],
"prettier": {
"semi": false,
"printWidth": 70,
"tabWidth": 2,
"useTabs": false,
"singleQuote": true,
"jsxSingleQuote": false,
"bracketSameLine": true,
"arrowParens": "avoid",
"endOfLine": "lf"
},
"tap": {
"node-arg": [
"--expose-gc"
],
"plugin": [
"@tapjs/clock"
]
},
"exports": {
".": {
"import": {
"types": "./dist/esm/index.d.ts",
"default": "./dist/esm/index.js"
},
"require": {
"types": "./dist/commonjs/index.d.ts",
"default": "./dist/commonjs/index.js"
}
},
"./min": {
"import": {
"types": "./dist/esm/index.d.ts",
"default": "./dist/esm/index.min.js"
},
"require": {
"types": "./dist/commonjs/index.d.ts",
"default": "./dist/commonjs/index.min.js"
}
}
},
"type": "module",
"module": "./dist/esm/index.js"
}

55
node_modules/cacache/node_modules/tar/LICENSE.md generated vendored Normal file
View File

@@ -0,0 +1,55 @@
# Blue Oak Model License
Version 1.0.0
## Purpose
This license gives everyone as much permission to work with
this software as possible, while protecting contributors
from liability.
## Acceptance
In order to receive this license, you must agree to its
rules. The rules of this license are both obligations
under that agreement and conditions to your license.
You must not do anything with this software that triggers
a rule that you cannot or will not follow.
## Copyright
Each contributor licenses you to do everything with this
software that would otherwise infringe that contributor's
copyright in it.
## Notices
You must ensure that everyone who gets a copy of
any part of this software from you, with or without
changes, also gets the text of this license or a link to
<https://blueoakcouncil.org/license/1.0.0>.
## Excuse
If anyone notifies you in writing that you have not
complied with [Notices](#notices), you can keep your
license by taking all practical steps to comply within 30
days after the notice. If you do not do so, your license
ends immediately.
## Patent
Each contributor licenses you to do everything with this
software that would otherwise infringe any patent claims
they can license or become able to license.
## Reliability
No contributor can revoke this license.
## No Liability
***As far as the law allows, this software comes as is,
without any warranty or condition, and no contributor
will be liable to anyone for any damages related to this
software or this license, under any kind of legal claim.***

1145
node_modules/cacache/node_modules/tar/README.md generated vendored Normal file

File diff suppressed because it is too large

View File

@@ -0,0 +1,3 @@
import { Pack, PackSync } from './pack.js';
export declare const create: import("./make-command.js").TarCommand<Pack, PackSync>;
//# sourceMappingURL=create.d.ts.map

View File

@@ -0,0 +1 @@
{"version":3,"file":"create.d.ts","sourceRoot":"","sources":["../../src/create.ts"],"names":[],"mappings":"AAWA,OAAO,EAAE,IAAI,EAAE,QAAQ,EAAE,MAAM,WAAW,CAAA;AA8E1C,eAAO,MAAM,MAAM,wDAUlB,CAAA"}

View File

@@ -0,0 +1,83 @@
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.create = void 0;
const fs_minipass_1 = require("@isaacs/fs-minipass");
const node_path_1 = __importDefault(require("node:path"));
const list_js_1 = require("./list.js");
const make_command_js_1 = require("./make-command.js");
const pack_js_1 = require("./pack.js");
const createFileSync = (opt, files) => {
const p = new pack_js_1.PackSync(opt);
const stream = new fs_minipass_1.WriteStreamSync(opt.file, {
mode: opt.mode || 0o666,
});
p.pipe(stream);
addFilesSync(p, files);
};
const createFile = (opt, files) => {
const p = new pack_js_1.Pack(opt);
const stream = new fs_minipass_1.WriteStream(opt.file, {
mode: opt.mode || 0o666,
});
p.pipe(stream);
const promise = new Promise((res, rej) => {
stream.on('error', rej);
stream.on('close', res);
p.on('error', rej);
});
addFilesAsync(p, files);
return promise;
};
const addFilesSync = (p, files) => {
files.forEach(file => {
if (file.charAt(0) === '@') {
(0, list_js_1.list)({
file: node_path_1.default.resolve(p.cwd, file.slice(1)),
sync: true,
noResume: true,
onReadEntry: entry => p.add(entry),
});
}
else {
p.add(file);
}
});
p.end();
};
const addFilesAsync = async (p, files) => {
for (let i = 0; i < files.length; i++) {
const file = String(files[i]);
if (file.charAt(0) === '@') {
await (0, list_js_1.list)({
file: node_path_1.default.resolve(String(p.cwd), file.slice(1)),
noResume: true,
onReadEntry: entry => {
p.add(entry);
},
});
}
else {
p.add(file);
}
}
p.end();
};
const createSync = (opt, files) => {
const p = new pack_js_1.PackSync(opt);
addFilesSync(p, files);
return p;
};
const createAsync = (opt, files) => {
const p = new pack_js_1.Pack(opt);
addFilesAsync(p, files);
return p;
};
exports.create = (0, make_command_js_1.makeCommand)(createFileSync, createFile, createSync, createAsync, (_opt, files) => {
if (!files?.length) {
throw new TypeError('no paths specified to add to archive');
}
});
//# sourceMappingURL=create.js.map
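
A usage sketch of the command this file compiles to, including the `@archive` form handled by `addFilesSync`/`addFilesAsync` above; the file names are illustrative:

```js
const tar = require('tar')

// with `file` set, create() returns a promise that resolves once
// out.tgz is fully written; '@previous.tar' pulls entries out of an
// existing archive instead of reading from disk
tar
  .create({ file: 'out.tgz', gzip: true, cwd: '/srv/app' }, [
    'dist/',
    '@previous.tar',
  ])
  .then(() => console.log('archive written'))
```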

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,8 @@
export declare class CwdError extends Error {
path: string;
code: string;
syscall: 'chdir';
constructor(path: string, code: string);
get name(): string;
}
//# sourceMappingURL=cwd-error.d.ts.map

View File

@@ -0,0 +1 @@
{"version":3,"file":"cwd-error.d.ts","sourceRoot":"","sources":["../../src/cwd-error.ts"],"names":[],"mappings":"AAAA,qBAAa,QAAS,SAAQ,KAAK;IACjC,IAAI,EAAE,MAAM,CAAA;IACZ,IAAI,EAAE,MAAM,CAAA;IACZ,OAAO,EAAE,OAAO,CAAU;gBAEd,IAAI,EAAE,MAAM,EAAE,IAAI,EAAE,MAAM;IAMtC,IAAI,IAAI,WAEP;CACF"}

View File

@@ -0,0 +1,18 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.CwdError = void 0;
class CwdError extends Error {
path;
code;
syscall = 'chdir';
constructor(path, code) {
super(`${code}: Cannot cd into '${path}'`);
this.path = path;
this.code = code;
}
get name() {
return 'CwdError';
}
}
exports.CwdError = CwdError;
//# sourceMappingURL=cwd-error.js.map

View File

@@ -0,0 +1 @@
{"version":3,"file":"cwd-error.js","sourceRoot":"","sources":["../../src/cwd-error.ts"],"names":[],"mappings":";;;AAAA,MAAa,QAAS,SAAQ,KAAK;IACjC,IAAI,CAAQ;IACZ,IAAI,CAAQ;IACZ,OAAO,GAAY,OAAO,CAAA;IAE1B,YAAY,IAAY,EAAE,IAAY;QACpC,KAAK,CAAC,GAAG,IAAI,qBAAqB,IAAI,GAAG,CAAC,CAAA;QAC1C,IAAI,CAAC,IAAI,GAAG,IAAI,CAAA;QAChB,IAAI,CAAC,IAAI,GAAG,IAAI,CAAA;IAClB,CAAC;IAED,IAAI,IAAI;QACN,OAAO,UAAU,CAAA;IACnB,CAAC;CACF;AAdD,4BAcC","sourcesContent":["export class CwdError extends Error {\n path: string\n code: string\n syscall: 'chdir' = 'chdir'\n\n constructor(path: string, code: string) {\n super(`${code}: Cannot cd into '${path}'`)\n this.path = path\n this.code = code\n }\n\n get name() {\n return 'CwdError'\n }\n}\n"]}

View File

@@ -0,0 +1,3 @@
import { Unpack, UnpackSync } from './unpack.js';
export declare const extract: import("./make-command.js").TarCommand<Unpack, UnpackSync>;
//# sourceMappingURL=extract.d.ts.map

View File

@@ -0,0 +1 @@
{"version":3,"file":"extract.d.ts","sourceRoot":"","sources":["../../src/extract.ts"],"names":[],"mappings":"AAMA,OAAO,EAAE,MAAM,EAAE,UAAU,EAAE,MAAM,aAAa,CAAA;AA2ChD,eAAO,MAAM,OAAO,4DAQnB,CAAA"}

View File

@@ -0,0 +1,88 @@
"use strict";
var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
var desc = Object.getOwnPropertyDescriptor(m, k);
if (!desc || ("get" in desc ? !m.__esModule : desc.writable || desc.configurable)) {
desc = { enumerable: true, get: function() { return m[k]; } };
}
Object.defineProperty(o, k2, desc);
}) : (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
o[k2] = m[k];
}));
var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) {
Object.defineProperty(o, "default", { enumerable: true, value: v });
}) : function(o, v) {
o["default"] = v;
});
var __importStar = (this && this.__importStar) || (function () {
var ownKeys = function(o) {
ownKeys = Object.getOwnPropertyNames || function (o) {
var ar = [];
for (var k in o) if (Object.prototype.hasOwnProperty.call(o, k)) ar[ar.length] = k;
return ar;
};
return ownKeys(o);
};
return function (mod) {
if (mod && mod.__esModule) return mod;
var result = {};
if (mod != null) for (var k = ownKeys(mod), i = 0; i < k.length; i++) if (k[i] !== "default") __createBinding(result, mod, k[i]);
__setModuleDefault(result, mod);
return result;
};
})();
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.extract = void 0;
// tar -x
const fsm = __importStar(require("@isaacs/fs-minipass"));
const node_fs_1 = __importDefault(require("node:fs"));
const list_js_1 = require("./list.js");
const make_command_js_1 = require("./make-command.js");
const unpack_js_1 = require("./unpack.js");
const extractFileSync = (opt) => {
const u = new unpack_js_1.UnpackSync(opt);
const file = opt.file;
const stat = node_fs_1.default.statSync(file);
// This trades a zero-byte read() syscall for a stat
// However, it will usually result in less memory allocation
const readSize = opt.maxReadSize || 16 * 1024 * 1024;
const stream = new fsm.ReadStreamSync(file, {
readSize: readSize,
size: stat.size,
});
stream.pipe(u);
};
const extractFile = (opt, _) => {
const u = new unpack_js_1.Unpack(opt);
const readSize = opt.maxReadSize || 16 * 1024 * 1024;
const file = opt.file;
const p = new Promise((resolve, reject) => {
u.on('error', reject);
u.on('close', resolve);
// This trades a zero-byte read() syscall for a stat
// However, it will usually result in less memory allocation
node_fs_1.default.stat(file, (er, stat) => {
if (er) {
reject(er);
}
else {
const stream = new fsm.ReadStream(file, {
readSize: readSize,
size: stat.size,
});
stream.on('error', reject);
stream.pipe(u);
}
});
});
return p;
};
exports.extract = (0, make_command_js_1.makeCommand)(extractFileSync, extractFile, opt => new unpack_js_1.UnpackSync(opt), opt => new unpack_js_1.Unpack(opt), (opt, files) => {
if (files?.length)
(0, list_js_1.filesFilter)(opt, files);
});
//# sourceMappingURL=extract.js.map
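
The corresponding extract sketch; passing a file list filters which entries are unpacked, as `filesFilter` above suggests (paths illustrative):

```js
const tar = require('tar')

// with `file` set, extract() resolves once everything is unpacked
tar
  .extract({ file: 'out.tgz', cwd: '/srv/app-restore' }, ['dist/'])
  .then(() => console.log('done'))
```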

View File

@@ -0,0 +1 @@
{"version":3,"file":"extract.js","sourceRoot":"","sources":["../../src/extract.ts"],"names":[],"mappings":";;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;AAAA,SAAS;AACT,yDAA0C;AAC1C,sDAAwB;AACxB,uCAAuC;AACvC,uDAA+C;AAE/C,2CAAgD;AAEhD,MAAM,eAAe,GAAG,CAAC,GAAuB,EAAE,EAAE;IAClD,MAAM,CAAC,GAAG,IAAI,sBAAU,CAAC,GAAG,CAAC,CAAA;IAC7B,MAAM,IAAI,GAAG,GAAG,CAAC,IAAI,CAAA;IACrB,MAAM,IAAI,GAAG,iBAAE,CAAC,QAAQ,CAAC,IAAI,CAAC,CAAA;IAC9B,oDAAoD;IACpD,4DAA4D;IAC5D,MAAM,QAAQ,GAAG,GAAG,CAAC,WAAW,IAAI,EAAE,GAAG,IAAI,GAAG,IAAI,CAAA;IACpD,MAAM,MAAM,GAAG,IAAI,GAAG,CAAC,cAAc,CAAC,IAAI,EAAE;QAC1C,QAAQ,EAAE,QAAQ;QAClB,IAAI,EAAE,IAAI,CAAC,IAAI;KAChB,CAAC,CAAA;IACF,MAAM,CAAC,IAAI,CAAC,CAAC,CAAC,CAAA;AAChB,CAAC,CAAA;AAED,MAAM,WAAW,GAAG,CAAC,GAAmB,EAAE,CAAY,EAAE,EAAE;IACxD,MAAM,CAAC,GAAG,IAAI,kBAAM,CAAC,GAAG,CAAC,CAAA;IACzB,MAAM,QAAQ,GAAG,GAAG,CAAC,WAAW,IAAI,EAAE,GAAG,IAAI,GAAG,IAAI,CAAA;IAEpD,MAAM,IAAI,GAAG,GAAG,CAAC,IAAI,CAAA;IACrB,MAAM,CAAC,GAAG,IAAI,OAAO,CAAO,CAAC,OAAO,EAAE,MAAM,EAAE,EAAE;QAC9C,CAAC,CAAC,EAAE,CAAC,OAAO,EAAE,MAAM,CAAC,CAAA;QACrB,CAAC,CAAC,EAAE,CAAC,OAAO,EAAE,OAAO,CAAC,CAAA;QAEtB,oDAAoD;QACpD,4DAA4D;QAC5D,iBAAE,CAAC,IAAI,CAAC,IAAI,EAAE,CAAC,EAAE,EAAE,IAAI,EAAE,EAAE;YACzB,IAAI,EAAE,EAAE,CAAC;gBACP,MAAM,CAAC,EAAE,CAAC,CAAA;YACZ,CAAC;iBAAM,CAAC;gBACN,MAAM,MAAM,GAAG,IAAI,GAAG,CAAC,UAAU,CAAC,IAAI,EAAE;oBACtC,QAAQ,EAAE,QAAQ;oBAClB,IAAI,EAAE,IAAI,CAAC,IAAI;iBAChB,CAAC,CAAA;gBACF,MAAM,CAAC,EAAE,CAAC,OAAO,EAAE,MAAM,CAAC,CAAA;gBAC1B,MAAM,CAAC,IAAI,CAAC,CAAC,CAAC,CAAA;YAChB,CAAC;QACH,CAAC,CAAC,CAAA;IACJ,CAAC,CAAC,CAAA;IACF,OAAO,CAAC,CAAA;AACV,CAAC,CAAA;AAEY,QAAA,OAAO,GAAG,IAAA,6BAAW,EAChC,eAAe,EACf,WAAW,EACX,GAAG,CAAC,EAAE,CAAC,IAAI,sBAAU,CAAC,GAAG,CAAC,EAC1B,GAAG,CAAC,EAAE,CAAC,IAAI,kBAAM,CAAC,GAAG,CAAC,EACtB,CAAC,GAAG,EAAE,KAAK,EAAE,EAAE;IACb,IAAI,KAAK,EAAE,MAAM;QAAE,IAAA,qBAAW,EAAC,GAAG,EAAE,KAAK,CAAC,CAAA;AAC5C,CAAC,CACF,CAAA","sourcesContent":["// tar -x\nimport * as fsm from '@isaacs/fs-minipass'\nimport fs from 'node:fs'\nimport { filesFilter } from './list.js'\nimport { makeCommand } from './make-command.js'\nimport { TarOptionsFile, TarOptionsSyncFile } from './options.js'\nimport { Unpack, UnpackSync } from './unpack.js'\n\nconst extractFileSync = (opt: TarOptionsSyncFile) => {\n const u = new UnpackSync(opt)\n const file = opt.file\n const stat = fs.statSync(file)\n // This trades a zero-byte read() syscall for a stat\n // However, it will usually result in less memory allocation\n const readSize = opt.maxReadSize || 16 * 1024 * 1024\n const stream = new fsm.ReadStreamSync(file, {\n readSize: readSize,\n size: stat.size,\n })\n stream.pipe(u)\n}\n\nconst extractFile = (opt: TarOptionsFile, _?: string[]) => {\n const u = new Unpack(opt)\n const readSize = opt.maxReadSize || 16 * 1024 * 1024\n\n const file = opt.file\n const p = new Promise<void>((resolve, reject) => {\n u.on('error', reject)\n u.on('close', resolve)\n\n // This trades a zero-byte read() syscall for a stat\n // However, it will usually result in less memory allocation\n fs.stat(file, (er, stat) => {\n if (er) {\n reject(er)\n } else {\n const stream = new fsm.ReadStream(file, {\n readSize: readSize,\n size: stat.size,\n })\n stream.on('error', reject)\n stream.pipe(u)\n }\n })\n })\n return p\n}\n\nexport const extract = makeCommand<Unpack, UnpackSync>(\n extractFileSync,\n extractFile,\n opt => new UnpackSync(opt),\n opt => new Unpack(opt),\n (opt, files) => {\n if (files?.length) filesFilter(opt, files)\n },\n)\n"]}

View File

@@ -0,0 +1,2 @@
export declare const getWriteFlag: (() => string) | ((size: number) => number | "w");
//# sourceMappingURL=get-write-flag.d.ts.map

View File

@@ -0,0 +1 @@
{"version":3,"file":"get-write-flag.d.ts","sourceRoot":"","sources":["../../src/get-write-flag.ts"],"names":[],"mappings":"AAwBA,eAAO,MAAM,YAAY,2BAGd,MAAM,kBAAwC,CAAA"}

View File

@@ -0,0 +1,29 @@
"use strict";
// Get the appropriate flag to use for creating files
// We use fmap on Windows platforms for files less than
// 512kb. This is a fairly low limit, but avoids making
// things slower in some cases. Since most of what this
// library is used for is extracting tarballs of many
// relatively small files in npm packages and the like,
// it can be a big boost on Windows platforms.
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.getWriteFlag = void 0;
const fs_1 = __importDefault(require("fs"));
const platform = process.env.__FAKE_PLATFORM__ || process.platform;
const isWindows = platform === 'win32';
/* c8 ignore start */
const { O_CREAT, O_TRUNC, O_WRONLY } = fs_1.default.constants;
const UV_FS_O_FILEMAP = Number(process.env.__FAKE_FS_O_FILENAME__) ||
fs_1.default.constants.UV_FS_O_FILEMAP ||
0;
/* c8 ignore stop */
const fMapEnabled = isWindows && !!UV_FS_O_FILEMAP;
const fMapLimit = 512 * 1024;
const fMapFlag = UV_FS_O_FILEMAP | O_TRUNC | O_CREAT | O_WRONLY;
exports.getWriteFlag = !fMapEnabled ?
() => 'w'
: (size) => (size < fMapLimit ? fMapFlag : 'w');
//# sourceMappingURL=get-write-flag.js.map

View File

@@ -0,0 +1 @@
{"version":3,"file":"get-write-flag.js","sourceRoot":"","sources":["../../src/get-write-flag.ts"],"names":[],"mappings":";AAAA,qDAAqD;AACrD,uDAAuD;AACvD,wDAAwD;AACxD,wDAAwD;AACxD,qDAAqD;AACrD,uDAAuD;AACvD,8CAA8C;;;;;;AAE9C,4CAAmB;AAEnB,MAAM,QAAQ,GAAG,OAAO,CAAC,GAAG,CAAC,iBAAiB,IAAI,OAAO,CAAC,QAAQ,CAAA;AAClE,MAAM,SAAS,GAAG,QAAQ,KAAK,OAAO,CAAA;AAEtC,qBAAqB;AACrB,MAAM,EAAE,OAAO,EAAE,OAAO,EAAE,QAAQ,EAAE,GAAG,YAAE,CAAC,SAAS,CAAA;AACnD,MAAM,eAAe,GACnB,MAAM,CAAC,OAAO,CAAC,GAAG,CAAC,sBAAsB,CAAC;IAC1C,YAAE,CAAC,SAAS,CAAC,eAAe;IAC5B,CAAC,CAAA;AACH,oBAAoB;AAEpB,MAAM,WAAW,GAAG,SAAS,IAAI,CAAC,CAAC,eAAe,CAAA;AAClD,MAAM,SAAS,GAAG,GAAG,GAAG,IAAI,CAAA;AAC5B,MAAM,QAAQ,GAAG,eAAe,GAAG,OAAO,GAAG,OAAO,GAAG,QAAQ,CAAA;AAClD,QAAA,YAAY,GACvB,CAAC,WAAW,CAAC,CAAC;IACZ,GAAG,EAAE,CAAC,GAAG;IACX,CAAC,CAAC,CAAC,IAAY,EAAE,EAAE,CAAC,CAAC,IAAI,GAAG,SAAS,CAAC,CAAC,CAAC,QAAQ,CAAC,CAAC,CAAC,GAAG,CAAC,CAAA","sourcesContent":["// Get the appropriate flag to use for creating files\n// We use fmap on Windows platforms for files less than\n// 512kb. This is a fairly low limit, but avoids making\n// things slower in some cases. Since most of what this\n// library is used for is extracting tarballs of many\n// relatively small files in npm packages and the like,\n// it can be a big boost on Windows platforms.\n\nimport fs from 'fs'\n\nconst platform = process.env.__FAKE_PLATFORM__ || process.platform\nconst isWindows = platform === 'win32'\n\n/* c8 ignore start */\nconst { O_CREAT, O_TRUNC, O_WRONLY } = fs.constants\nconst UV_FS_O_FILEMAP =\n Number(process.env.__FAKE_FS_O_FILENAME__) ||\n fs.constants.UV_FS_O_FILEMAP ||\n 0\n/* c8 ignore stop */\n\nconst fMapEnabled = isWindows && !!UV_FS_O_FILEMAP\nconst fMapLimit = 512 * 1024\nconst fMapFlag = UV_FS_O_FILEMAP | O_TRUNC | O_CREAT | O_WRONLY\nexport const getWriteFlag =\n !fMapEnabled ?\n () => 'w'\n : (size: number) => (size < fMapLimit ? fMapFlag : 'w')\n"]}

View File

@@ -0,0 +1,53 @@
import type { EntryTypeCode, EntryTypeName } from './types.js';
export type HeaderData = {
path?: string;
mode?: number;
uid?: number;
gid?: number;
size?: number;
cksum?: number;
type?: EntryTypeName | 'Unsupported';
linkpath?: string;
uname?: string;
gname?: string;
devmaj?: number;
devmin?: number;
atime?: Date;
ctime?: Date;
mtime?: Date;
charset?: string;
comment?: string;
dev?: number;
ino?: number;
nlink?: number;
};
export declare class Header implements HeaderData {
#private;
cksumValid: boolean;
needPax: boolean;
nullBlock: boolean;
block?: Buffer;
path?: string;
mode?: number;
uid?: number;
gid?: number;
size?: number;
cksum?: number;
linkpath?: string;
uname?: string;
gname?: string;
devmaj: number;
devmin: number;
atime?: Date;
ctime?: Date;
mtime?: Date;
charset?: string;
comment?: string;
constructor(data?: Buffer | HeaderData, off?: number, ex?: HeaderData, gex?: HeaderData);
decode(buf: Buffer, off: number, ex?: HeaderData, gex?: HeaderData): void;
encode(buf?: Buffer, off?: number): boolean;
get type(): EntryTypeName;
get typeKey(): EntryTypeCode | 'Unsupported';
set type(type: EntryTypeCode | EntryTypeName | 'Unsupported');
}
//# sourceMappingURL=header.d.ts.map

View File

@@ -0,0 +1 @@
{"version":3,"file":"header.d.ts","sourceRoot":"","sources":["../../src/header.ts"],"names":[],"mappings":"AAOA,OAAO,KAAK,EAAE,aAAa,EAAE,aAAa,EAAE,MAAM,YAAY,CAAA;AAG9D,MAAM,MAAM,UAAU,GAAG;IACvB,IAAI,CAAC,EAAE,MAAM,CAAA;IACb,IAAI,CAAC,EAAE,MAAM,CAAA;IACb,GAAG,CAAC,EAAE,MAAM,CAAA;IACZ,GAAG,CAAC,EAAE,MAAM,CAAA;IACZ,IAAI,CAAC,EAAE,MAAM,CAAA;IACb,KAAK,CAAC,EAAE,MAAM,CAAA;IACd,IAAI,CAAC,EAAE,aAAa,GAAG,aAAa,CAAA;IACpC,QAAQ,CAAC,EAAE,MAAM,CAAA;IACjB,KAAK,CAAC,EAAE,MAAM,CAAA;IACd,KAAK,CAAC,EAAE,MAAM,CAAA;IACd,MAAM,CAAC,EAAE,MAAM,CAAA;IACf,MAAM,CAAC,EAAE,MAAM,CAAA;IACf,KAAK,CAAC,EAAE,IAAI,CAAA;IACZ,KAAK,CAAC,EAAE,IAAI,CAAA;IACZ,KAAK,CAAC,EAAE,IAAI,CAAA;IAIZ,OAAO,CAAC,EAAE,MAAM,CAAA;IAChB,OAAO,CAAC,EAAE,MAAM,CAAA;IAChB,GAAG,CAAC,EAAE,MAAM,CAAA;IACZ,GAAG,CAAC,EAAE,MAAM,CAAA;IACZ,KAAK,CAAC,EAAE,MAAM,CAAA;CACf,CAAA;AAED,qBAAa,MAAO,YAAW,UAAU;;IACvC,UAAU,EAAE,OAAO,CAAQ;IAC3B,OAAO,EAAE,OAAO,CAAQ;IACxB,SAAS,EAAE,OAAO,CAAQ;IAE1B,KAAK,CAAC,EAAE,MAAM,CAAA;IACd,IAAI,CAAC,EAAE,MAAM,CAAA;IACb,IAAI,CAAC,EAAE,MAAM,CAAA;IACb,GAAG,CAAC,EAAE,MAAM,CAAA;IACZ,GAAG,CAAC,EAAE,MAAM,CAAA;IACZ,IAAI,CAAC,EAAE,MAAM,CAAA;IACb,KAAK,CAAC,EAAE,MAAM,CAAA;IAEd,QAAQ,CAAC,EAAE,MAAM,CAAA;IACjB,KAAK,CAAC,EAAE,MAAM,CAAA;IACd,KAAK,CAAC,EAAE,MAAM,CAAA;IACd,MAAM,EAAE,MAAM,CAAI;IAClB,MAAM,EAAE,MAAM,CAAI;IAClB,KAAK,CAAC,EAAE,IAAI,CAAA;IACZ,KAAK,CAAC,EAAE,IAAI,CAAA;IACZ,KAAK,CAAC,EAAE,IAAI,CAAA;IAEZ,OAAO,CAAC,EAAE,MAAM,CAAA;IAChB,OAAO,CAAC,EAAE,MAAM,CAAA;gBAGd,IAAI,CAAC,EAAE,MAAM,GAAG,UAAU,EAC1B,GAAG,GAAE,MAAU,EACf,EAAE,CAAC,EAAE,UAAU,EACf,GAAG,CAAC,EAAE,UAAU;IASlB,MAAM,CACJ,GAAG,EAAE,MAAM,EACX,GAAG,EAAE,MAAM,EACX,EAAE,CAAC,EAAE,UAAU,EACf,GAAG,CAAC,EAAE,UAAU;IA+GlB,MAAM,CAAC,GAAG,CAAC,EAAE,MAAM,EAAE,GAAG,GAAE,MAAU;IAwEpC,IAAI,IAAI,IAAI,aAAa,CAKxB;IAED,IAAI,OAAO,IAAI,aAAa,GAAG,aAAa,CAE3C;IAED,IAAI,IAAI,CAAC,IAAI,EAAE,aAAa,GAAG,aAAa,GAAG,aAAa,EAS3D;CACF"}

View File

@@ -0,0 +1,325 @@
"use strict";
// parse a 512-byte header block to a data object, or vice-versa
// encode returns `true` if a pax extended header is needed, because
// the data could not be faithfully encoded in a simple header.
// (Also, check header.needPax to see if it needs a pax header.)
var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
var desc = Object.getOwnPropertyDescriptor(m, k);
if (!desc || ("get" in desc ? !m.__esModule : desc.writable || desc.configurable)) {
desc = { enumerable: true, get: function() { return m[k]; } };
}
Object.defineProperty(o, k2, desc);
}) : (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
o[k2] = m[k];
}));
var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) {
Object.defineProperty(o, "default", { enumerable: true, value: v });
}) : function(o, v) {
o["default"] = v;
});
var __importStar = (this && this.__importStar) || (function () {
var ownKeys = function(o) {
ownKeys = Object.getOwnPropertyNames || function (o) {
var ar = [];
for (var k in o) if (Object.prototype.hasOwnProperty.call(o, k)) ar[ar.length] = k;
return ar;
};
return ownKeys(o);
};
return function (mod) {
if (mod && mod.__esModule) return mod;
var result = {};
if (mod != null) for (var k = ownKeys(mod), i = 0; i < k.length; i++) if (k[i] !== "default") __createBinding(result, mod, k[i]);
__setModuleDefault(result, mod);
return result;
};
})();
Object.defineProperty(exports, "__esModule", { value: true });
exports.Header = void 0;
const node_path_1 = require("node:path");
const large = __importStar(require("./large-numbers.js"));
const types = __importStar(require("./types.js"));
class Header {
cksumValid = false;
needPax = false;
nullBlock = false;
block;
path;
mode;
uid;
gid;
size;
cksum;
#type = 'Unsupported';
linkpath;
uname;
gname;
devmaj = 0;
devmin = 0;
atime;
ctime;
mtime;
charset;
comment;
constructor(data, off = 0, ex, gex) {
if (Buffer.isBuffer(data)) {
this.decode(data, off || 0, ex, gex);
}
else if (data) {
this.#slurp(data);
}
}
decode(buf, off, ex, gex) {
if (!off) {
off = 0;
}
if (!buf || !(buf.length >= off + 512)) {
throw new Error('need 512 bytes for header');
}
this.path = ex?.path ?? decString(buf, off, 100);
this.mode = ex?.mode ?? gex?.mode ?? decNumber(buf, off + 100, 8);
this.uid = ex?.uid ?? gex?.uid ?? decNumber(buf, off + 108, 8);
this.gid = ex?.gid ?? gex?.gid ?? decNumber(buf, off + 116, 8);
this.size = ex?.size ?? gex?.size ?? decNumber(buf, off + 124, 12);
this.mtime =
ex?.mtime ?? gex?.mtime ?? decDate(buf, off + 136, 12);
this.cksum = decNumber(buf, off + 148, 12);
// if we have extended or global extended headers, apply them now
// See https://github.com/npm/node-tar/pull/187
// Apply global before local, so it overrides
if (gex)
this.#slurp(gex, true);
if (ex)
this.#slurp(ex);
// old tar versions marked dirs as a file with a trailing /
const t = decString(buf, off + 156, 1);
if (types.isCode(t)) {
this.#type = t || '0';
}
if (this.#type === '0' && this.path.slice(-1) === '/') {
this.#type = '5';
}
// tar implementations sometimes incorrectly put the stat(dir).size
// as the size in the tarball, even though Directory entries are
// not able to have any body at all. In the very rare chance that
// it actually DOES have a body, we weren't going to do anything with
// it anyway, and it'll just be a warning about an invalid header.
if (this.#type === '5') {
this.size = 0;
}
this.linkpath = decString(buf, off + 157, 100);
if (buf.subarray(off + 257, off + 265).toString() ===
'ustar\u000000') {
/* c8 ignore start */
this.uname =
ex?.uname ?? gex?.uname ?? decString(buf, off + 265, 32);
this.gname =
ex?.gname ?? gex?.gname ?? decString(buf, off + 297, 32);
this.devmaj =
ex?.devmaj ?? gex?.devmaj ?? decNumber(buf, off + 329, 8) ?? 0;
this.devmin =
ex?.devmin ?? gex?.devmin ?? decNumber(buf, off + 337, 8) ?? 0;
/* c8 ignore stop */
if (buf[off + 475] !== 0) {
// definitely a prefix, definitely >130 chars.
const prefix = decString(buf, off + 345, 155);
this.path = prefix + '/' + this.path;
}
else {
const prefix = decString(buf, off + 345, 130);
if (prefix) {
this.path = prefix + '/' + this.path;
}
/* c8 ignore start */
this.atime =
ex?.atime ?? gex?.atime ?? decDate(buf, off + 476, 12);
this.ctime =
ex?.ctime ?? gex?.ctime ?? decDate(buf, off + 488, 12);
/* c8 ignore stop */
}
}
let sum = 8 * 0x20;
for (let i = off; i < off + 148; i++) {
sum += buf[i];
}
for (let i = off + 156; i < off + 512; i++) {
sum += buf[i];
}
this.cksumValid = sum === this.cksum;
if (this.cksum === undefined && sum === 8 * 0x20) {
this.nullBlock = true;
}
}
#slurp(ex, gex = false) {
Object.assign(this, Object.fromEntries(Object.entries(ex).filter(([k, v]) => {
// we slurp in everything except for the path attribute in
// a global extended header, because that's weird. Also, any
// null/undefined values are ignored.
return !(v === null ||
v === undefined ||
(k === 'path' && gex) ||
(k === 'linkpath' && gex) ||
k === 'global');
})));
}
encode(buf, off = 0) {
if (!buf) {
buf = this.block = Buffer.alloc(512);
}
if (this.#type === 'Unsupported') {
this.#type = '0';
}
if (!(buf.length >= off + 512)) {
throw new Error('need 512 bytes for header');
}
const prefixSize = this.ctime || this.atime ? 130 : 155;
const split = splitPrefix(this.path || '', prefixSize);
const path = split[0];
const prefix = split[1];
this.needPax = !!split[2];
this.needPax = encString(buf, off, 100, path) || this.needPax;
this.needPax =
encNumber(buf, off + 100, 8, this.mode) || this.needPax;
this.needPax =
encNumber(buf, off + 108, 8, this.uid) || this.needPax;
this.needPax =
encNumber(buf, off + 116, 8, this.gid) || this.needPax;
this.needPax =
encNumber(buf, off + 124, 12, this.size) || this.needPax;
this.needPax =
encDate(buf, off + 136, 12, this.mtime) || this.needPax;
buf[off + 156] = this.#type.charCodeAt(0);
this.needPax =
encString(buf, off + 157, 100, this.linkpath) || this.needPax;
buf.write('ustar\u000000', off + 257, 8);
this.needPax =
encString(buf, off + 265, 32, this.uname) || this.needPax;
this.needPax =
encString(buf, off + 297, 32, this.gname) || this.needPax;
this.needPax =
encNumber(buf, off + 329, 8, this.devmaj) || this.needPax;
this.needPax =
encNumber(buf, off + 337, 8, this.devmin) || this.needPax;
this.needPax =
encString(buf, off + 345, prefixSize, prefix) || this.needPax;
if (buf[off + 475] !== 0) {
this.needPax =
encString(buf, off + 345, 155, prefix) || this.needPax;
}
else {
this.needPax =
encString(buf, off + 345, 130, prefix) || this.needPax;
this.needPax =
encDate(buf, off + 476, 12, this.atime) || this.needPax;
this.needPax =
encDate(buf, off + 488, 12, this.ctime) || this.needPax;
}
let sum = 8 * 0x20;
for (let i = off; i < off + 148; i++) {
sum += buf[i];
}
for (let i = off + 156; i < off + 512; i++) {
sum += buf[i];
}
this.cksum = sum;
encNumber(buf, off + 148, 8, this.cksum);
this.cksumValid = true;
return this.needPax;
}
get type() {
return (this.#type === 'Unsupported' ?
this.#type
: types.name.get(this.#type));
}
get typeKey() {
return this.#type;
}
set type(type) {
const c = String(types.code.get(type));
if (types.isCode(c) || c === 'Unsupported') {
this.#type = c;
}
else if (types.isCode(type)) {
this.#type = type;
}
else {
throw new TypeError('invalid entry type: ' + type);
}
}
}
exports.Header = Header;
const splitPrefix = (p, prefixSize) => {
const pathSize = 100;
let pp = p;
let prefix = '';
let ret = undefined;
const root = node_path_1.posix.parse(p).root || '.';
if (Buffer.byteLength(pp) < pathSize) {
ret = [pp, prefix, false];
}
else {
// first set prefix to the dir, and path to the base
prefix = node_path_1.posix.dirname(pp);
pp = node_path_1.posix.basename(pp);
do {
if (Buffer.byteLength(pp) <= pathSize &&
Buffer.byteLength(prefix) <= prefixSize) {
// both fit!
ret = [pp, prefix, false];
}
else if (Buffer.byteLength(pp) > pathSize &&
Buffer.byteLength(prefix) <= prefixSize) {
// prefix fits in prefix, but path doesn't fit in path
ret = [pp.slice(0, pathSize - 1), prefix, true];
}
else {
// make path take a bit from prefix
pp = node_path_1.posix.join(node_path_1.posix.basename(prefix), pp);
prefix = node_path_1.posix.dirname(prefix);
}
} while (prefix !== root && ret === undefined);
// at this point, found no resolution, just truncate
if (!ret) {
ret = [p.slice(0, pathSize - 1), '', true];
}
}
return ret;
};
const decString = (buf, off, size) => buf
.subarray(off, off + size)
.toString('utf8')
.replace(/\0.*/, '');
const decDate = (buf, off, size) => numToDate(decNumber(buf, off, size));
const numToDate = (num) => num === undefined ? undefined : new Date(num * 1000);
const decNumber = (buf, off, size) => Number(buf[off]) & 0x80 ?
large.parse(buf.subarray(off, off + size))
: decSmallNumber(buf, off, size);
const nanUndef = (value) => (isNaN(value) ? undefined : value);
const decSmallNumber = (buf, off, size) => nanUndef(parseInt(buf
.subarray(off, off + size)
.toString('utf8')
.replace(/\0.*$/, '')
.trim(), 8));
// the maximum encodable as a null-terminated octal, by field size
const MAXNUM = {
12: 0o77777777777,
8: 0o7777777,
};
const encNumber = (buf, off, size, num) => num === undefined ? false
: num > MAXNUM[size] || num < 0 ?
(large.encode(num, buf.subarray(off, off + size)), true)
: (encSmallNumber(buf, off, size, num), false);
const encSmallNumber = (buf, off, size, num) => buf.write(octalString(num, size), off, size, 'ascii');
const octalString = (num, size) => padOctal(Math.floor(num).toString(8), size);
const padOctal = (str, size) => (str.length === size - 1 ?
str
: new Array(size - str.length - 1).join('0') + str + ' ') + '\0';
const encDate = (buf, off, size, date) => date === undefined ? false : (encNumber(buf, off, size, date.getTime() / 1000));
// enough to fill the longest string we've got
const NULLS = new Array(156).join('\0');
// pad with nulls, return true if it's longer or non-ascii
const encString = (buf, off, size, str) => str === undefined ? false : ((buf.write(str + NULLS, off, size, 'utf8'),
str.length !== Buffer.byteLength(str) || str.length > size));
//# sourceMappingURL=header.js.map
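
A round-trip sketch of this class through the package's public export, based on the signatures in the declaration file above; the field values are illustrative:

```js
const { Header } = require('tar')

const h = new Header({
  path: 'hello.txt',
  type: 'File',
  size: 5,
  mode: 0o644,
  mtime: new Date(),
})
// encode() with no buffer allocates h.block (512 bytes) and returns
// true only when a pax extended header would be needed
const needPax = h.encode()

// decoding the same 512-byte block recovers the fields
const round = new Header(h.block)
console.log(round.path, round.type, round.cksumValid) // hello.txt File true
```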

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,20 @@
export { type TarOptionsWithAliasesAsync, type TarOptionsWithAliasesAsyncFile, type TarOptionsWithAliasesAsyncNoFile, type TarOptionsWithAliasesSyncNoFile, type TarOptionsWithAliases, type TarOptionsWithAliasesFile, type TarOptionsWithAliasesSync, type TarOptionsWithAliasesSyncFile, } from './options.js';
export * from './create.js';
export { create as c } from './create.js';
export * from './extract.js';
export { extract as x } from './extract.js';
export * from './header.js';
export * from './list.js';
export { list as t } from './list.js';
export * from './pack.js';
export * from './parse.js';
export * from './pax.js';
export * from './read-entry.js';
export * from './replace.js';
export { replace as r } from './replace.js';
export * as types from './types.js';
export * from './unpack.js';
export * from './update.js';
export { update as u } from './update.js';
export * from './write-entry.js';
//# sourceMappingURL=index.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"index.d.ts","sourceRoot":"","sources":["../../src/index.ts"],"names":[],"mappings":"AAAA,OAAO,EACL,KAAK,0BAA0B,EAC/B,KAAK,8BAA8B,EACnC,KAAK,gCAAgC,EACrC,KAAK,+BAA+B,EACpC,KAAK,qBAAqB,EAC1B,KAAK,yBAAyB,EAC9B,KAAK,yBAAyB,EAC9B,KAAK,6BAA6B,GACnC,MAAM,cAAc,CAAA;AAErB,cAAc,aAAa,CAAA;AAC3B,OAAO,EAAE,MAAM,IAAI,CAAC,EAAE,MAAM,aAAa,CAAA;AACzC,cAAc,cAAc,CAAA;AAC5B,OAAO,EAAE,OAAO,IAAI,CAAC,EAAE,MAAM,cAAc,CAAA;AAC3C,cAAc,aAAa,CAAA;AAC3B,cAAc,WAAW,CAAA;AACzB,OAAO,EAAE,IAAI,IAAI,CAAC,EAAE,MAAM,WAAW,CAAA;AAErC,cAAc,WAAW,CAAA;AACzB,cAAc,YAAY,CAAA;AAC1B,cAAc,UAAU,CAAA;AACxB,cAAc,iBAAiB,CAAA;AAC/B,cAAc,cAAc,CAAA;AAC5B,OAAO,EAAE,OAAO,IAAI,CAAC,EAAE,MAAM,cAAc,CAAA;AAC3C,OAAO,KAAK,KAAK,MAAM,YAAY,CAAA;AACnC,cAAc,aAAa,CAAA;AAC3B,cAAc,aAAa,CAAA;AAC3B,OAAO,EAAE,MAAM,IAAI,CAAC,EAAE,MAAM,aAAa,CAAA;AACzC,cAAc,kBAAkB,CAAA"}


@@ -0,0 +1,64 @@
"use strict";
var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
var desc = Object.getOwnPropertyDescriptor(m, k);
if (!desc || ("get" in desc ? !m.__esModule : desc.writable || desc.configurable)) {
desc = { enumerable: true, get: function() { return m[k]; } };
}
Object.defineProperty(o, k2, desc);
}) : (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
o[k2] = m[k];
}));
var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) {
Object.defineProperty(o, "default", { enumerable: true, value: v });
}) : function(o, v) {
o["default"] = v;
});
var __exportStar = (this && this.__exportStar) || function(m, exports) {
for (var p in m) if (p !== "default" && !Object.prototype.hasOwnProperty.call(exports, p)) __createBinding(exports, m, p);
};
var __importStar = (this && this.__importStar) || (function () {
var ownKeys = function(o) {
ownKeys = Object.getOwnPropertyNames || function (o) {
var ar = [];
for (var k in o) if (Object.prototype.hasOwnProperty.call(o, k)) ar[ar.length] = k;
return ar;
};
return ownKeys(o);
};
return function (mod) {
if (mod && mod.__esModule) return mod;
var result = {};
if (mod != null) for (var k = ownKeys(mod), i = 0; i < k.length; i++) if (k[i] !== "default") __createBinding(result, mod, k[i]);
__setModuleDefault(result, mod);
return result;
};
})();
Object.defineProperty(exports, "__esModule", { value: true });
exports.u = exports.types = exports.r = exports.t = exports.x = exports.c = void 0;
__exportStar(require("./create.js"), exports);
var create_js_1 = require("./create.js");
Object.defineProperty(exports, "c", { enumerable: true, get: function () { return create_js_1.create; } });
__exportStar(require("./extract.js"), exports);
var extract_js_1 = require("./extract.js");
Object.defineProperty(exports, "x", { enumerable: true, get: function () { return extract_js_1.extract; } });
__exportStar(require("./header.js"), exports);
__exportStar(require("./list.js"), exports);
var list_js_1 = require("./list.js");
Object.defineProperty(exports, "t", { enumerable: true, get: function () { return list_js_1.list; } });
// classes
__exportStar(require("./pack.js"), exports);
__exportStar(require("./parse.js"), exports);
__exportStar(require("./pax.js"), exports);
__exportStar(require("./read-entry.js"), exports);
__exportStar(require("./replace.js"), exports);
var replace_js_1 = require("./replace.js");
Object.defineProperty(exports, "r", { enumerable: true, get: function () { return replace_js_1.replace; } });
exports.types = __importStar(require("./types.js"));
__exportStar(require("./unpack.js"), exports);
__exportStar(require("./update.js"), exports);
var update_js_1 = require("./update.js");
Object.defineProperty(exports, "u", { enumerable: true, get: function () { return update_js_1.update; } });
__exportStar(require("./write-entry.js"), exports);
//# sourceMappingURL=index.js.map
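
For orientation, the single-letter bindings wired up above (c, x, t, r, u) alias create, extract, list, replace, and update, mirroring the classic tar CLI letters. A minimal usage sketch, with placeholder file and directory names:

const tar = require('tar')

async function roundTrip() {
  // tar.c === tar.create, tar.x === tar.extract
  await tar.c({ file: 'demo.tgz', gzip: true }, ['src/']) // pack ./src
  await tar.x({ file: 'demo.tgz', cwd: 'unpacked' })      // assumes ./unpacked exists
}
roundTrip()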


@@ -0,0 +1 @@
{"version":3,"file":"index.js","sourceRoot":"","sources":["../../src/index.ts"],"names":[],"mappings":";;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;AAWA,8CAA2B;AAC3B,yCAAyC;AAAhC,8FAAA,MAAM,OAAK;AACpB,+CAA4B;AAC5B,2CAA2C;AAAlC,+FAAA,OAAO,OAAK;AACrB,8CAA2B;AAC3B,4CAAyB;AACzB,qCAAqC;AAA5B,4FAAA,IAAI,OAAK;AAClB,UAAU;AACV,4CAAyB;AACzB,6CAA0B;AAC1B,2CAAwB;AACxB,kDAA+B;AAC/B,+CAA4B;AAC5B,2CAA2C;AAAlC,+FAAA,OAAO,OAAK;AACrB,oDAAmC;AACnC,8CAA2B;AAC3B,8CAA2B;AAC3B,yCAAyC;AAAhC,8FAAA,MAAM,OAAK;AACpB,mDAAgC","sourcesContent":["export {\n type TarOptionsWithAliasesAsync,\n type TarOptionsWithAliasesAsyncFile,\n type TarOptionsWithAliasesAsyncNoFile,\n type TarOptionsWithAliasesSyncNoFile,\n type TarOptionsWithAliases,\n type TarOptionsWithAliasesFile,\n type TarOptionsWithAliasesSync,\n type TarOptionsWithAliasesSyncFile,\n} from './options.js'\n\nexport * from './create.js'\nexport { create as c } from './create.js'\nexport * from './extract.js'\nexport { extract as x } from './extract.js'\nexport * from './header.js'\nexport * from './list.js'\nexport { list as t } from './list.js'\n// classes\nexport * from './pack.js'\nexport * from './parse.js'\nexport * from './pax.js'\nexport * from './read-entry.js'\nexport * from './replace.js'\nexport { replace as r } from './replace.js'\nexport * as types from './types.js'\nexport * from './unpack.js'\nexport * from './update.js'\nexport { update as u } from './update.js'\nexport * from './write-entry.js'\n"]}


@@ -0,0 +1,3 @@
export declare const encode: (num: number, buf: Buffer) => Buffer<ArrayBufferLike>;
export declare const parse: (buf: Buffer) => number;
//# sourceMappingURL=large-numbers.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"large-numbers.d.ts","sourceRoot":"","sources":["../../src/large-numbers.ts"],"names":[],"mappings":"AAGA,eAAO,MAAM,MAAM,GAAI,KAAK,MAAM,EAAE,KAAK,MAAM,4BAa9C,CAAA;AA6BD,eAAO,MAAM,KAAK,GAAI,KAAK,MAAM,WAmBhC,CAAA"}


@@ -0,0 +1,99 @@
"use strict";
// Tar can encode large and negative numbers using a leading byte of
// 0xff for negative, and 0x80 for positive.
Object.defineProperty(exports, "__esModule", { value: true });
exports.parse = exports.encode = void 0;
const encode = (num, buf) => {
if (!Number.isSafeInteger(num)) {
// The number is so large that javascript cannot represent it with integer
// precision.
throw Error('cannot encode number outside of javascript safe integer range');
}
else if (num < 0) {
encodeNegative(num, buf);
}
else {
encodePositive(num, buf);
}
return buf;
};
exports.encode = encode;
const encodePositive = (num, buf) => {
buf[0] = 0x80;
for (var i = buf.length; i > 1; i--) {
buf[i - 1] = num & 0xff;
num = Math.floor(num / 0x100);
}
};
const encodeNegative = (num, buf) => {
buf[0] = 0xff;
var flipped = false;
num = num * -1;
for (var i = buf.length; i > 1; i--) {
var byte = num & 0xff;
num = Math.floor(num / 0x100);
if (flipped) {
buf[i - 1] = onesComp(byte);
}
else if (byte === 0) {
buf[i - 1] = 0;
}
else {
flipped = true;
buf[i - 1] = twosComp(byte);
}
}
};
const parse = (buf) => {
const pre = buf[0];
const value = pre === 0x80 ? pos(buf.subarray(1, buf.length))
: pre === 0xff ? twos(buf)
: null;
if (value === null) {
throw Error('invalid base256 encoding');
}
if (!Number.isSafeInteger(value)) {
// The number is so large that javascript cannot represent it with integer
// precision.
throw Error('parsed number outside of javascript safe integer range');
}
return value;
};
exports.parse = parse;
const twos = (buf) => {
var len = buf.length;
var sum = 0;
var flipped = false;
for (var i = len - 1; i > -1; i--) {
var byte = Number(buf[i]);
var f;
if (flipped) {
f = onesComp(byte);
}
else if (byte === 0) {
f = byte;
}
else {
flipped = true;
f = twosComp(byte);
}
if (f !== 0) {
sum -= f * Math.pow(256, len - i - 1);
}
}
return sum;
};
const pos = (buf) => {
var len = buf.length;
var sum = 0;
for (var i = len - 1; i > -1; i--) {
var byte = Number(buf[i]);
if (byte !== 0) {
sum += byte * Math.pow(256, len - i - 1);
}
}
return sum;
};
const onesComp = (byte) => (0xff ^ byte) & 0xff;
const twosComp = (byte) => ((0xff ^ byte) + 1) & 0xff;
//# sourceMappingURL=large-numbers.js.map
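
A round-trip sketch of the two exports above; the deep require path into the compiled tree is an assumption here, since the package's export map may not expose it. A 10 GiB size exceeds MAXNUM[12] (0o77777777777) from header.js, so it is exactly the kind of value that takes this base-256 form:

const { encode, parse } = require('tar/dist/commonjs/large-numbers.js')

const buf = Buffer.alloc(12)     // a 12-byte numeric field, e.g. size
encode(10 * 1024 ** 3, buf)      // 10 GiB, too big for null-terminated octal
console.log(buf[0].toString(16)) // '80', the positive base-256 marker
console.log(parse(buf))          // 10737418240

encode(-1, buf)                  // negatives get the 0xff prefix instead
console.log(parse(buf))          // -1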

File diff suppressed because one or more lines are too long


@@ -0,0 +1,7 @@
import { TarOptions } from './options.js';
import { Parser } from './parse.js';
export declare const filesFilter: (opt: TarOptions, files: string[]) => void;
export declare const list: import("./make-command.js").TarCommand<Parser, Parser & {
sync: true;
}>;
//# sourceMappingURL=list.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"list.d.ts","sourceRoot":"","sources":["../../src/list.ts"],"names":[],"mappings":"AAKA,OAAO,EACL,UAAU,EAGX,MAAM,cAAc,CAAA;AACrB,OAAO,EAAE,MAAM,EAAE,MAAM,YAAY,CAAA;AAgBnC,eAAO,MAAM,WAAW,GAAI,KAAK,UAAU,EAAE,OAAO,MAAM,EAAE,SA4B3D,CAAA;AA+DD,eAAO,MAAM,IAAI;UAG4B,IAAI;EAMhD,CAAA"}


@@ -0,0 +1,150 @@
"use strict";
var __createBinding = (this && this.__createBinding) || (Object.create ? (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
var desc = Object.getOwnPropertyDescriptor(m, k);
if (!desc || ("get" in desc ? !m.__esModule : desc.writable || desc.configurable)) {
desc = { enumerable: true, get: function() { return m[k]; } };
}
Object.defineProperty(o, k2, desc);
}) : (function(o, m, k, k2) {
if (k2 === undefined) k2 = k;
o[k2] = m[k];
}));
var __setModuleDefault = (this && this.__setModuleDefault) || (Object.create ? (function(o, v) {
Object.defineProperty(o, "default", { enumerable: true, value: v });
}) : function(o, v) {
o["default"] = v;
});
var __importStar = (this && this.__importStar) || (function () {
var ownKeys = function(o) {
ownKeys = Object.getOwnPropertyNames || function (o) {
var ar = [];
for (var k in o) if (Object.prototype.hasOwnProperty.call(o, k)) ar[ar.length] = k;
return ar;
};
return ownKeys(o);
};
return function (mod) {
if (mod && mod.__esModule) return mod;
var result = {};
if (mod != null) for (var k = ownKeys(mod), i = 0; i < k.length; i++) if (k[i] !== "default") __createBinding(result, mod, k[i]);
__setModuleDefault(result, mod);
return result;
};
})();
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.list = exports.filesFilter = void 0;
// tar -t
const fsm = __importStar(require("@isaacs/fs-minipass"));
const node_fs_1 = __importDefault(require("node:fs"));
const path_1 = require("path");
const make_command_js_1 = require("./make-command.js");
const parse_js_1 = require("./parse.js");
const strip_trailing_slashes_js_1 = require("./strip-trailing-slashes.js");
const onReadEntryFunction = (opt) => {
const onReadEntry = opt.onReadEntry;
opt.onReadEntry =
onReadEntry ?
e => {
onReadEntry(e);
e.resume();
}
: e => e.resume();
};
// construct a filter that limits the file entries listed
// include child entries if a dir is included
const filesFilter = (opt, files) => {
const map = new Map(files.map(f => [(0, strip_trailing_slashes_js_1.stripTrailingSlashes)(f), true]));
const filter = opt.filter;
const mapHas = (file, r = '') => {
const root = r || (0, path_1.parse)(file).root || '.';
let ret;
if (file === root)
ret = false;
else {
const m = map.get(file);
if (m !== undefined) {
ret = m;
}
else {
ret = mapHas((0, path_1.dirname)(file), root);
}
}
map.set(file, ret);
return ret;
};
opt.filter =
filter ?
(file, entry) => filter(file, entry) && mapHas((0, strip_trailing_slashes_js_1.stripTrailingSlashes)(file))
: file => mapHas((0, strip_trailing_slashes_js_1.stripTrailingSlashes)(file));
};
exports.filesFilter = filesFilter;
const listFileSync = (opt) => {
const p = new parse_js_1.Parser(opt);
const file = opt.file;
let fd;
try {
fd = node_fs_1.default.openSync(file, 'r');
const stat = node_fs_1.default.fstatSync(fd);
const readSize = opt.maxReadSize || 16 * 1024 * 1024;
if (stat.size < readSize) {
const buf = Buffer.allocUnsafe(stat.size);
const read = node_fs_1.default.readSync(fd, buf, 0, stat.size, 0);
p.end(read === buf.byteLength ? buf : buf.subarray(0, read));
}
else {
let pos = 0;
const buf = Buffer.allocUnsafe(readSize);
while (pos < stat.size) {
const bytesRead = node_fs_1.default.readSync(fd, buf, 0, readSize, pos);
if (bytesRead === 0)
break;
pos += bytesRead;
p.write(buf.subarray(0, bytesRead));
}
p.end();
}
}
finally {
if (typeof fd === 'number') {
try {
node_fs_1.default.closeSync(fd);
/* c8 ignore next */
}
catch (er) { }
}
}
};
const listFile = (opt, _files) => {
const parse = new parse_js_1.Parser(opt);
const readSize = opt.maxReadSize || 16 * 1024 * 1024;
const file = opt.file;
const p = new Promise((resolve, reject) => {
parse.on('error', reject);
parse.on('end', resolve);
node_fs_1.default.stat(file, (er, stat) => {
if (er) {
reject(er);
}
else {
const stream = new fsm.ReadStream(file, {
readSize: readSize,
size: stat.size,
});
stream.on('error', reject);
stream.pipe(parse);
}
});
});
return p;
};
exports.list = (0, make_command_js_1.makeCommand)(listFileSync, listFile, opt => new parse_js_1.Parser(opt), opt => new parse_js_1.Parser(opt), (opt, files) => {
if (files?.length)
(0, exports.filesFilter)(opt, files);
if (!opt.noResume)
onReadEntryFunction(opt);
});
//# sourceMappingURL=list.js.map
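
A usage sketch of the command assembled above (archive name is a placeholder). With file set and no sync, the async branch returns a promise that settles when the parser ends; per onReadEntryFunction, each entry is resumed automatically unless noResume is set:

const tar = require('tar')

tar.t(
  { file: 'demo.tgz', onReadEntry: e => console.log(e.path, e.size) },
  ['src/'], // filesFilter above: naming a directory also includes its children
).then(() => console.log('listing complete'))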

File diff suppressed because one or more lines are too long


@@ -0,0 +1,49 @@
import { TarOptions, TarOptionsAsyncFile, TarOptionsAsyncNoFile, TarOptionsSyncFile, TarOptionsSyncNoFile, TarOptionsWithAliases, TarOptionsWithAliasesAsync, TarOptionsWithAliasesAsyncFile, TarOptionsWithAliasesAsyncNoFile, TarOptionsWithAliasesFile, TarOptionsWithAliasesNoFile, TarOptionsWithAliasesSync, TarOptionsWithAliasesSyncFile, TarOptionsWithAliasesSyncNoFile } from './options.js';
export type CB = (er?: Error) => any;
export type TarCommand<AsyncClass, SyncClass extends {
sync: true;
}> = {
(): AsyncClass;
(opt: TarOptionsWithAliasesAsyncNoFile): AsyncClass;
(entries: string[]): AsyncClass;
(opt: TarOptionsWithAliasesAsyncNoFile, entries: string[]): AsyncClass;
} & {
(opt: TarOptionsWithAliasesSyncNoFile): SyncClass;
(opt: TarOptionsWithAliasesSyncNoFile, entries: string[]): SyncClass;
} & {
(opt: TarOptionsWithAliasesAsyncFile): Promise<void>;
(opt: TarOptionsWithAliasesAsyncFile, entries: string[]): Promise<void>;
(opt: TarOptionsWithAliasesAsyncFile, cb: CB): Promise<void>;
(opt: TarOptionsWithAliasesAsyncFile, entries: string[], cb: CB): Promise<void>;
} & {
(opt: TarOptionsWithAliasesSyncFile): void;
(opt: TarOptionsWithAliasesSyncFile, entries: string[]): void;
} & {
(opt: TarOptionsWithAliasesSync): typeof opt extends (TarOptionsWithAliasesFile) ? void : typeof opt extends TarOptionsWithAliasesNoFile ? SyncClass : void | SyncClass;
(opt: TarOptionsWithAliasesSync, entries: string[]): typeof opt extends TarOptionsWithAliasesFile ? void : typeof opt extends TarOptionsWithAliasesNoFile ? SyncClass : void | SyncClass;
} & {
(opt: TarOptionsWithAliasesAsync): typeof opt extends (TarOptionsWithAliasesFile) ? Promise<void> : typeof opt extends TarOptionsWithAliasesNoFile ? AsyncClass : Promise<void> | AsyncClass;
(opt: TarOptionsWithAliasesAsync, entries: string[]): typeof opt extends TarOptionsWithAliasesFile ? Promise<void> : typeof opt extends TarOptionsWithAliasesNoFile ? AsyncClass : Promise<void> | AsyncClass;
(opt: TarOptionsWithAliasesAsync, cb: CB): Promise<void>;
(opt: TarOptionsWithAliasesAsync, entries: string[], cb: CB): typeof opt extends TarOptionsWithAliasesFile ? Promise<void> : typeof opt extends TarOptionsWithAliasesNoFile ? never : Promise<void>;
} & {
(opt: TarOptionsWithAliasesFile): Promise<void> | void;
(opt: TarOptionsWithAliasesFile, entries: string[]): typeof opt extends TarOptionsWithAliasesSync ? void : typeof opt extends TarOptionsWithAliasesAsync ? Promise<void> : Promise<void> | void;
(opt: TarOptionsWithAliasesFile, cb: CB): Promise<void>;
(opt: TarOptionsWithAliasesFile, entries: string[], cb: CB): typeof opt extends TarOptionsWithAliasesSync ? never : typeof opt extends TarOptionsWithAliasesAsync ? Promise<void> : Promise<void>;
} & {
(opt: TarOptionsWithAliasesNoFile): typeof opt extends (TarOptionsWithAliasesSync) ? SyncClass : typeof opt extends TarOptionsWithAliasesAsync ? AsyncClass : SyncClass | AsyncClass;
(opt: TarOptionsWithAliasesNoFile, entries: string[]): typeof opt extends TarOptionsWithAliasesSync ? SyncClass : typeof opt extends TarOptionsWithAliasesAsync ? AsyncClass : SyncClass | AsyncClass;
} & {
(opt: TarOptionsWithAliases): typeof opt extends (TarOptionsWithAliasesFile) ? typeof opt extends TarOptionsWithAliasesSync ? void : typeof opt extends TarOptionsWithAliasesAsync ? Promise<void> : void | Promise<void> : typeof opt extends TarOptionsWithAliasesNoFile ? typeof opt extends TarOptionsWithAliasesSync ? SyncClass : typeof opt extends TarOptionsWithAliasesAsync ? AsyncClass : SyncClass | AsyncClass : typeof opt extends TarOptionsWithAliasesSync ? SyncClass | void : typeof opt extends TarOptionsWithAliasesAsync ? AsyncClass | Promise<void> : SyncClass | void | AsyncClass | Promise<void>;
} & {
syncFile: (opt: TarOptionsSyncFile, entries: string[]) => void;
asyncFile: (opt: TarOptionsAsyncFile, entries: string[], cb?: CB) => Promise<void>;
syncNoFile: (opt: TarOptionsSyncNoFile, entries: string[]) => SyncClass;
asyncNoFile: (opt: TarOptionsAsyncNoFile, entries: string[]) => AsyncClass;
validate?: (opt: TarOptions, entries?: string[]) => void;
};
export declare const makeCommand: <AsyncClass, SyncClass extends {
sync: true;
}>(syncFile: (opt: TarOptionsSyncFile, entries: string[]) => void, asyncFile: (opt: TarOptionsAsyncFile, entries: string[], cb?: CB) => Promise<void>, syncNoFile: (opt: TarOptionsSyncNoFile, entries: string[]) => SyncClass, asyncNoFile: (opt: TarOptionsAsyncNoFile, entries: string[]) => AsyncClass, validate?: (opt: TarOptions, entries?: string[]) => void) => TarCommand<AsyncClass, SyncClass>;
//# sourceMappingURL=make-command.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"make-command.d.ts","sourceRoot":"","sources":["../../src/make-command.ts"],"names":[],"mappings":"AAAA,OAAO,EAML,UAAU,EACV,mBAAmB,EACnB,qBAAqB,EACrB,kBAAkB,EAClB,oBAAoB,EACpB,qBAAqB,EACrB,0BAA0B,EAC1B,8BAA8B,EAC9B,gCAAgC,EAChC,yBAAyB,EACzB,2BAA2B,EAC3B,yBAAyB,EACzB,6BAA6B,EAC7B,+BAA+B,EAChC,MAAM,cAAc,CAAA;AAErB,MAAM,MAAM,EAAE,GAAG,CAAC,EAAE,CAAC,EAAE,KAAK,KAAK,GAAG,CAAA;AAEpC,MAAM,MAAM,UAAU,CACpB,UAAU,EACV,SAAS,SAAS;IAAE,IAAI,EAAE,IAAI,CAAA;CAAE,IAC9B;IAEF,IAAI,UAAU,CAAA;IACd,CAAC,GAAG,EAAE,gCAAgC,GAAG,UAAU,CAAA;IACnD,CAAC,OAAO,EAAE,MAAM,EAAE,GAAG,UAAU,CAAA;IAC/B,CACE,GAAG,EAAE,gCAAgC,EACrC,OAAO,EAAE,MAAM,EAAE,GAChB,UAAU,CAAA;CACd,GAAG;IAEF,CAAC,GAAG,EAAE,+BAA+B,GAAG,SAAS,CAAA;IACjD,CAAC,GAAG,EAAE,+BAA+B,EAAE,OAAO,EAAE,MAAM,EAAE,GAAG,SAAS,CAAA;CACrE,GAAG;IAEF,CAAC,GAAG,EAAE,8BAA8B,GAAG,OAAO,CAAC,IAAI,CAAC,CAAA;IACpD,CACE,GAAG,EAAE,8BAA8B,EACnC,OAAO,EAAE,MAAM,EAAE,GAChB,OAAO,CAAC,IAAI,CAAC,CAAA;IAChB,CAAC,GAAG,EAAE,8BAA8B,EAAE,EAAE,EAAE,EAAE,GAAG,OAAO,CAAC,IAAI,CAAC,CAAA;IAC5D,CACE,GAAG,EAAE,8BAA8B,EACnC,OAAO,EAAE,MAAM,EAAE,EACjB,EAAE,EAAE,EAAE,GACL,OAAO,CAAC,IAAI,CAAC,CAAA;CACjB,GAAG;IAEF,CAAC,GAAG,EAAE,6BAA6B,GAAG,IAAI,CAAA;IAC1C,CAAC,GAAG,EAAE,6BAA6B,EAAE,OAAO,EAAE,MAAM,EAAE,GAAG,IAAI,CAAA;CAC9D,GAAG;IAEF,CAAC,GAAG,EAAE,yBAAyB,GAAG,OAAO,GAAG,SAAS,CACnD,yBAAyB,CAC1B,GACC,IAAI,GACJ,OAAO,GAAG,SAAS,2BAA2B,GAAG,SAAS,GAC1D,IAAI,GAAG,SAAS,CAAA;IAClB,CACE,GAAG,EAAE,yBAAyB,EAC9B,OAAO,EAAE,MAAM,EAAE,GAChB,OAAO,GAAG,SAAS,yBAAyB,GAAG,IAAI,GACpD,OAAO,GAAG,SAAS,2BAA2B,GAAG,SAAS,GAC1D,IAAI,GAAG,SAAS,CAAA;CACnB,GAAG;IAEF,CAAC,GAAG,EAAE,0BAA0B,GAAG,OAAO,GAAG,SAAS,CACpD,yBAAyB,CAC1B,GACC,OAAO,CAAC,IAAI,CAAC,GACb,OAAO,GAAG,SAAS,2BAA2B,GAAG,UAAU,GAC3D,OAAO,CAAC,IAAI,CAAC,GAAG,UAAU,CAAA;IAC5B,CACE,GAAG,EAAE,0BAA0B,EAC/B,OAAO,EAAE,MAAM,EAAE,GAChB,OAAO,GAAG,SAAS,yBAAyB,GAAG,OAAO,CAAC,IAAI,CAAC,GAC7D,OAAO,GAAG,SAAS,2BAA2B,GAAG,UAAU,GAC3D,OAAO,CAAC,IAAI,CAAC,GAAG,UAAU,CAAA;IAC5B,CAAC,GAAG,EAAE,0BAA0B,EAAE,EAAE,EAAE,EAAE,GAAG,OAAO,CAAC,IAAI,CAAC,CAAA;IACxD,CACE,GAAG,EAAE,0BAA0B,EAC/B,OAAO,EAAE,MAAM,EAAE,EACjB,EAAE,EAAE,EAAE,GACL,OAAO,GAAG,SAAS,yBAAyB,GAAG,OAAO,CAAC,IAAI,CAAC,GAC7D,OAAO,GAAG,SAAS,2BAA2B,GAAG,KAAK,GACtD,OAAO,CAAC,IAAI,CAAC,CAAA;CAChB,GAAG;IAEF,CAAC,GAAG,EAAE,yBAAyB,GAAG,OAAO,CAAC,IAAI,CAAC,GAAG,IAAI,CAAA;IACtD,CACE,GAAG,EAAE,yBAAyB,EAC9B,OAAO,EAAE,MAAM,EAAE,GAChB,OAAO,GAAG,SAAS,yBAAyB,GAAG,IAAI,GACpD,OAAO,GAAG,SAAS,0BAA0B,GAAG,OAAO,CAAC,IAAI,CAAC,GAC7D,OAAO,CAAC,IAAI,CAAC,GAAG,IAAI,CAAA;IACtB,CAAC,GAAG,EAAE,yBAAyB,EAAE,EAAE,EAAE,EAAE,GAAG,OAAO,CAAC,IAAI,CAAC,CAAA;IACvD,CACE,GAAG,EAAE,yBAAyB,EAC9B,OAAO,EAAE,MAAM,EAAE,EACjB,EAAE,EAAE,EAAE,GACL,OAAO,GAAG,SAAS,yBAAyB,GAAG,KAAK,GACrD,OAAO,GAAG,SAAS,0BAA0B,GAAG,OAAO,CAAC,IAAI,CAAC,GAC7D,OAAO,CAAC,IAAI,CAAC,CAAA;CAChB,GAAG;IAEF,CAAC,GAAG,EAAE,2BAA2B,GAAG,OAAO,GAAG,SAAS,CACrD,yBAAyB,CAC1B,GACC,SAAS,GACT,OAAO,GAAG,SAAS,0BAA0B,GAAG,UAAU,GAC1D,SAAS,GAAG,UAAU,CAAA;IACxB,CACE,GAAG,EAAE,2BAA2B,EAChC,OAAO,EAAE,MAAM,EAAE,GAChB,OAAO,GAAG,SAAS,yBAAyB,GAAG,SAAS,GACzD,OAAO,GAAG,SAAS,0BAA0B,GAAG,UAAU,GAC1D,SAAS,GAAG,UAAU,CAAA;CACzB,GAAG;IAEF,CAAC,GAAG,EAAE,qBAAqB,GAAG,OAAO,GAAG,SAAS,CAC/C,yBAAyB,CAC1B,GACC,OAAO,GAAG,SAAS,yBAAyB,GAAG,IAAI,GACjD,OAAO,GAAG,SAAS,0BAA0B,GAAG,OAAO,CAAC,IAAI,CAAC,GAC7D,IAAI,GAAG,OAAO,CAAC,IAAI,CAAC,GACtB,OAAO,GAAG,SAAS,2BAA2B,GAC9C,OAAO,GAAG,SAAS,yBAAyB,GAAG,SAAS,GACtD,OAAO,GAAG,SAAS,0BAA0B,GAAG,UAAU,GAC1D,SAAS,GAAG,UAAU,GACxB,OAAO,GAAG,SAAS,yBAAyB,GAAG,SAAS,GAAG,IAAI,GAC/D,OAAO,GAAG,SAAS,0BAA0B,GAC7C,UAAU,GAAG,OAAO,CAAC,IAAI,CAAC,GAC1B,SAAS,GAAG,IAAI,GAAG,UAAU,GAAG,OAAO,CAAC,IAAI,CAAC,CAAA;CAChD,GAAG;IAEF,QAAQ,EAAE,CAAC,G
AAG,EAAE,kBAAkB,EAAE,OAAO,EAAE,MAAM,EAAE,KAAK,IAAI,CAAA;IAC9D,SAAS,EAAE,CACT,GAAG,EAAE,mBAAmB,EACxB,OAAO,EAAE,MAAM,EAAE,EACjB,EAAE,CAAC,EAAE,EAAE,KACJ,OAAO,CAAC,IAAI,CAAC,CAAA;IAClB,UAAU,EAAE,CACV,GAAG,EAAE,oBAAoB,EACzB,OAAO,EAAE,MAAM,EAAE,KACd,SAAS,CAAA;IACd,WAAW,EAAE,CACX,GAAG,EAAE,qBAAqB,EAC1B,OAAO,EAAE,MAAM,EAAE,KACd,UAAU,CAAA;IACf,QAAQ,CAAC,EAAE,CAAC,GAAG,EAAE,UAAU,EAAE,OAAO,CAAC,EAAE,MAAM,EAAE,KAAK,IAAI,CAAA;CACzD,CAAA;AAED,eAAO,MAAM,WAAW,GACtB,UAAU,EACV,SAAS,SAAS;IAAE,IAAI,EAAE,IAAI,CAAA;CAAE,EAEhC,UAAU,CAAC,GAAG,EAAE,kBAAkB,EAAE,OAAO,EAAE,MAAM,EAAE,KAAK,IAAI,EAC9D,WAAW,CACT,GAAG,EAAE,mBAAmB,EACxB,OAAO,EAAE,MAAM,EAAE,EACjB,EAAE,CAAC,EAAE,EAAE,KACJ,OAAO,CAAC,IAAI,CAAC,EAClB,YAAY,CACV,GAAG,EAAE,oBAAoB,EACzB,OAAO,EAAE,MAAM,EAAE,KACd,SAAS,EACd,aAAa,CACX,GAAG,EAAE,qBAAqB,EAC1B,OAAO,EAAE,MAAM,EAAE,KACd,UAAU,EACf,WAAW,CAAC,GAAG,EAAE,UAAU,EAAE,OAAO,CAAC,EAAE,MAAM,EAAE,KAAK,IAAI,KACvD,UAAU,CAAC,UAAU,EAAE,SAAS,CAmElC,CAAA"}


@@ -0,0 +1,61 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.makeCommand = void 0;
const options_js_1 = require("./options.js");
const makeCommand = (syncFile, asyncFile, syncNoFile, asyncNoFile, validate) => {
return Object.assign((opt_ = [], entries, cb) => {
if (Array.isArray(opt_)) {
entries = opt_;
opt_ = {};
}
if (typeof entries === 'function') {
cb = entries;
entries = undefined;
}
if (!entries) {
entries = [];
}
else {
entries = Array.from(entries);
}
const opt = (0, options_js_1.dealias)(opt_);
validate?.(opt, entries);
if ((0, options_js_1.isSyncFile)(opt)) {
if (typeof cb === 'function') {
throw new TypeError('callback not supported for sync tar functions');
}
return syncFile(opt, entries);
}
else if ((0, options_js_1.isAsyncFile)(opt)) {
const p = asyncFile(opt, entries);
// weirdness to make TS happy
const c = cb ? cb : undefined;
return c ? p.then(() => c(), c) : p;
}
else if ((0, options_js_1.isSyncNoFile)(opt)) {
if (typeof cb === 'function') {
throw new TypeError('callback not supported for sync tar functions');
}
return syncNoFile(opt, entries);
}
else if ((0, options_js_1.isAsyncNoFile)(opt)) {
if (typeof cb === 'function') {
throw new TypeError('callback only supported with file option');
}
return asyncNoFile(opt, entries);
/* c8 ignore start */
}
else {
throw new Error('impossible options??');
}
/* c8 ignore stop */
}, {
syncFile,
asyncFile,
syncNoFile,
asyncNoFile,
validate,
});
};
exports.makeCommand = makeCommand;
//# sourceMappingURL=make-command.js.map
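
To make the four-way dispatch above concrete, here is how it falls out for a command such as create (paths are placeholders; return shapes follow the TarCommand type):

const fs = require('node:fs')
const tar = require('tar')

tar.c({ file: 'a.tgz' }, ['src/'])             // asyncFile: Promise<void>
tar.c({ file: 'a.tgz', sync: true }, ['src/']) // syncFile: returns void
const pack = tar.c(['src/'])                   // asyncNoFile: a Pack stream
pack.pipe(fs.createWriteStream('b.tgz'))
// Callbacks are rejected on the sync branches and on asyncNoFile, so a
// callback is only ever meaningful together with the file option.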

File diff suppressed because one or more lines are too long


@@ -0,0 +1,25 @@
import { CwdError } from './cwd-error.js';
import { SymlinkError } from './symlink-error.js';
export type MkdirOptions = {
uid?: number;
gid?: number;
processUid?: number;
processGid?: number;
umask?: number;
preserve: boolean;
unlink: boolean;
cwd: string;
mode: number;
};
export type MkdirError = NodeJS.ErrnoException | CwdError | SymlinkError;
/**
* Wrapper around fs/promises.mkdir for tar's needs.
*
* The main purpose is to avoid creating directories if we know that
* they already exist (and track which ones exist for this purpose),
* and prevent entries from being extracted into symlinked folders,
* if `preservePaths` is not set.
*/
export declare const mkdir: (dir: string, opt: MkdirOptions, cb: (er?: null | MkdirError, made?: string) => void) => void | Promise<void>;
export declare const mkdirSync: (dir: string, opt: MkdirOptions) => void | SymlinkError;
//# sourceMappingURL=mkdir.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"mkdir.d.ts","sourceRoot":"","sources":["../../src/mkdir.ts"],"names":[],"mappings":"AAIA,OAAO,EAAE,QAAQ,EAAE,MAAM,gBAAgB,CAAA;AAEzC,OAAO,EAAE,YAAY,EAAE,MAAM,oBAAoB,CAAA;AAEjD,MAAM,MAAM,YAAY,GAAG;IACzB,GAAG,CAAC,EAAE,MAAM,CAAA;IACZ,GAAG,CAAC,EAAE,MAAM,CAAA;IACZ,UAAU,CAAC,EAAE,MAAM,CAAA;IACnB,UAAU,CAAC,EAAE,MAAM,CAAA;IACnB,KAAK,CAAC,EAAE,MAAM,CAAA;IACd,QAAQ,EAAE,OAAO,CAAA;IACjB,MAAM,EAAE,OAAO,CAAA;IACf,GAAG,EAAE,MAAM,CAAA;IACX,IAAI,EAAE,MAAM,CAAA;CACb,CAAA;AAED,MAAM,MAAM,UAAU,GAClB,MAAM,CAAC,cAAc,GACrB,QAAQ,GACR,YAAY,CAAA;AAiBhB;;;;;;;GAOG;AACH,eAAO,MAAM,KAAK,GAChB,KAAK,MAAM,EACX,KAAK,YAAY,EACjB,IAAI,CAAC,EAAE,CAAC,EAAE,IAAI,GAAG,UAAU,EAAE,IAAI,CAAC,EAAE,MAAM,KAAK,IAAI,yBAoDpD,CAAA;AAiFD,eAAO,MAAM,SAAS,GAAI,KAAK,MAAM,EAAE,KAAK,YAAY,wBAqEvD,CAAA"}


@@ -0,0 +1,188 @@
"use strict";
var __importDefault = (this && this.__importDefault) || function (mod) {
return (mod && mod.__esModule) ? mod : { "default": mod };
};
Object.defineProperty(exports, "__esModule", { value: true });
exports.mkdirSync = exports.mkdir = void 0;
const chownr_1 = require("chownr");
const node_fs_1 = __importDefault(require("node:fs"));
const promises_1 = __importDefault(require("node:fs/promises"));
const node_path_1 = __importDefault(require("node:path"));
const cwd_error_js_1 = require("./cwd-error.js");
const normalize_windows_path_js_1 = require("./normalize-windows-path.js");
const symlink_error_js_1 = require("./symlink-error.js");
const checkCwd = (dir, cb) => {
node_fs_1.default.stat(dir, (er, st) => {
if (er || !st.isDirectory()) {
er = new cwd_error_js_1.CwdError(dir, er?.code || 'ENOTDIR');
}
cb(er);
});
};
/**
* Wrapper around fs/promises.mkdir for tar's needs.
*
* The main purpose is to avoid creating directories if we know that
* they already exist (and track which ones exist for this purpose),
* and prevent entries from being extracted into symlinked folders,
* if `preservePaths` is not set.
*/
const mkdir = (dir, opt, cb) => {
dir = (0, normalize_windows_path_js_1.normalizeWindowsPath)(dir);
// if there's any overlap between mask and mode,
// then we'll need an explicit chmod
/* c8 ignore next */
const umask = opt.umask ?? 0o22;
const mode = opt.mode | 0o0700;
const needChmod = (mode & umask) !== 0;
const uid = opt.uid;
const gid = opt.gid;
const doChown = typeof uid === 'number' &&
typeof gid === 'number' &&
(uid !== opt.processUid || gid !== opt.processGid);
const preserve = opt.preserve;
const unlink = opt.unlink;
const cwd = (0, normalize_windows_path_js_1.normalizeWindowsPath)(opt.cwd);
const done = (er, created) => {
if (er) {
cb(er);
}
else {
if (created && doChown) {
(0, chownr_1.chownr)(created, uid, gid, er => done(er));
}
else if (needChmod) {
node_fs_1.default.chmod(dir, mode, cb);
}
else {
cb();
}
}
};
if (dir === cwd) {
return checkCwd(dir, done);
}
if (preserve) {
return promises_1.default.mkdir(dir, { mode, recursive: true }).then(made => done(null, made ?? undefined), // oh, ts
done);
}
const sub = (0, normalize_windows_path_js_1.normalizeWindowsPath)(node_path_1.default.relative(cwd, dir));
const parts = sub.split('/');
mkdir_(cwd, parts, mode, unlink, cwd, undefined, done);
};
exports.mkdir = mkdir;
const mkdir_ = (base, parts, mode, unlink, cwd, created, cb) => {
if (!parts.length) {
return cb(null, created);
}
const p = parts.shift();
const part = (0, normalize_windows_path_js_1.normalizeWindowsPath)(node_path_1.default.resolve(base + '/' + p));
node_fs_1.default.mkdir(part, mode, onmkdir(part, parts, mode, unlink, cwd, created, cb));
};
const onmkdir = (part, parts, mode, unlink, cwd, created, cb) => (er) => {
if (er) {
node_fs_1.default.lstat(part, (statEr, st) => {
if (statEr) {
statEr.path =
statEr.path && (0, normalize_windows_path_js_1.normalizeWindowsPath)(statEr.path);
cb(statEr);
}
else if (st.isDirectory()) {
mkdir_(part, parts, mode, unlink, cwd, created, cb);
}
else if (unlink) {
node_fs_1.default.unlink(part, er => {
if (er) {
return cb(er);
}
node_fs_1.default.mkdir(part, mode, onmkdir(part, parts, mode, unlink, cwd, created, cb));
});
}
else if (st.isSymbolicLink()) {
return cb(new symlink_error_js_1.SymlinkError(part, part + '/' + parts.join('/')));
}
else {
cb(er);
}
});
}
else {
created = created || part;
mkdir_(part, parts, mode, unlink, cwd, created, cb);
}
};
const checkCwdSync = (dir) => {
let ok = false;
let code = undefined;
try {
ok = node_fs_1.default.statSync(dir).isDirectory();
}
catch (er) {
code = er?.code;
}
finally {
if (!ok) {
throw new cwd_error_js_1.CwdError(dir, code ?? 'ENOTDIR');
}
}
};
const mkdirSync = (dir, opt) => {
dir = (0, normalize_windows_path_js_1.normalizeWindowsPath)(dir);
// if there's any overlap between mask and mode,
// then we'll need an explicit chmod
/* c8 ignore next */
const umask = opt.umask ?? 0o22;
const mode = opt.mode | 0o700;
const needChmod = (mode & umask) !== 0;
const uid = opt.uid;
const gid = opt.gid;
const doChown = typeof uid === 'number' &&
typeof gid === 'number' &&
(uid !== opt.processUid || gid !== opt.processGid);
const preserve = opt.preserve;
const unlink = opt.unlink;
const cwd = (0, normalize_windows_path_js_1.normalizeWindowsPath)(opt.cwd);
const done = (created) => {
if (created && doChown) {
(0, chownr_1.chownrSync)(created, uid, gid);
}
if (needChmod) {
node_fs_1.default.chmodSync(dir, mode);
}
};
if (dir === cwd) {
checkCwdSync(cwd);
return done();
}
if (preserve) {
return done(node_fs_1.default.mkdirSync(dir, { mode, recursive: true }) ?? undefined);
}
const sub = (0, normalize_windows_path_js_1.normalizeWindowsPath)(node_path_1.default.relative(cwd, dir));
const parts = sub.split('/');
let created = undefined;
for (let p = parts.shift(), part = cwd; p && (part += '/' + p); p = parts.shift()) {
part = (0, normalize_windows_path_js_1.normalizeWindowsPath)(node_path_1.default.resolve(part));
try {
node_fs_1.default.mkdirSync(part, mode);
created = created || part;
}
catch (er) {
const st = node_fs_1.default.lstatSync(part);
if (st.isDirectory()) {
continue;
}
else if (unlink) {
node_fs_1.default.unlinkSync(part);
node_fs_1.default.mkdirSync(part, mode);
created = created || part;
continue;
}
else if (st.isSymbolicLink()) {
return new symlink_error_js_1.SymlinkError(part, part + '/' + parts.join('/'));
}
}
}
return done(created);
};
exports.mkdirSync = mkdirSync;
//# sourceMappingURL=mkdir.js.map
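
A hedged sketch of the sync variant above. This is an internal helper rather than part of tar's public API, the deep require path is an assumption, and the option values are illustrative. All four of preserve, unlink, cwd, and mode are required by MkdirOptions:

const { mkdirSync } = require('tar/dist/commonjs/mkdir.js')

const err = mkdirSync('/tmp/unpack/a/b/c', {
  cwd: '/tmp/unpack', // assumed to exist; the cwd itself is never created
  mode: 0o755,
  preserve: false,    // walk segment by segment instead of recursive mkdir
  unlink: false,      // do not replace non-directory obstacles
})
if (err) console.error('stopped at a symlinked segment:', err.message)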

File diff suppressed because one or more lines are too long


@@ -0,0 +1,2 @@
export declare const modeFix: (mode: number, isDir: boolean, portable: boolean) => number;
//# sourceMappingURL=mode-fix.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"mode-fix.d.ts","sourceRoot":"","sources":["../../src/mode-fix.ts"],"names":[],"mappings":"AAAA,eAAO,MAAM,OAAO,GAClB,MAAM,MAAM,EACZ,OAAO,OAAO,EACd,UAAU,OAAO,WA0BlB,CAAA"}


@@ -0,0 +1,29 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.modeFix = void 0;
const modeFix = (mode, isDir, portable) => {
mode &= 0o7777;
// in portable mode, use the minimum reasonable umask
// if this system creates files with 0o664 by default
// (as some linux distros do), then we'll write the
// archive with 0o644 instead. Also, don't ever create
// a file that is not readable/writable by the owner.
if (portable) {
mode = (mode | 0o600) & ~0o22;
}
// if dirs are readable, then they should be listable
if (isDir) {
if (mode & 0o400) {
mode |= 0o100;
}
if (mode & 0o40) {
mode |= 0o10;
}
if (mode & 0o4) {
mode |= 0o1;
}
}
return mode;
};
exports.modeFix = modeFix;
//# sourceMappingURL=mode-fix.js.map
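
The transform above, applied to the two situations its comments describe (outputs worked out by hand from the bit logic):

const { modeFix } = require('tar/dist/commonjs/mode-fix.js') // deep path assumed

console.log(modeFix(0o664, false, true).toString(8))
// '644': portable mode forces owner rw and strips group/other write
console.log(modeFix(0o640, true, false).toString(8))
// '750': each readable class of a directory also becomes executable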


@@ -0,0 +1 @@
{"version":3,"file":"mode-fix.js","sourceRoot":"","sources":["../../src/mode-fix.ts"],"names":[],"mappings":";;;AAAO,MAAM,OAAO,GAAG,CACrB,IAAY,EACZ,KAAc,EACd,QAAiB,EACjB,EAAE;IACF,IAAI,IAAI,MAAM,CAAA;IAEd,qDAAqD;IACrD,qDAAqD;IACrD,mDAAmD;IACnD,uDAAuD;IACvD,qDAAqD;IACrD,IAAI,QAAQ,EAAE,CAAC;QACb,IAAI,GAAG,CAAC,IAAI,GAAG,KAAK,CAAC,GAAG,CAAC,IAAI,CAAA;IAC/B,CAAC;IAED,qDAAqD;IACrD,IAAI,KAAK,EAAE,CAAC;QACV,IAAI,IAAI,GAAG,KAAK,EAAE,CAAC;YACjB,IAAI,IAAI,KAAK,CAAA;QACf,CAAC;QACD,IAAI,IAAI,GAAG,IAAI,EAAE,CAAC;YAChB,IAAI,IAAI,IAAI,CAAA;QACd,CAAC;QACD,IAAI,IAAI,GAAG,GAAG,EAAE,CAAC;YACf,IAAI,IAAI,GAAG,CAAA;QACb,CAAC;IACH,CAAC;IACD,OAAO,IAAI,CAAA;AACb,CAAC,CAAA;AA7BY,QAAA,OAAO,WA6BnB","sourcesContent":["export const modeFix = (\n mode: number,\n isDir: boolean,\n portable: boolean,\n) => {\n mode &= 0o7777\n\n // in portable mode, use the minimum reasonable umask\n // if this system creates files with 0o664 by default\n // (as some linux distros do), then we'll write the\n // archive with 0o644 instead. Also, don't ever create\n // a file that is not readable/writable by the owner.\n if (portable) {\n mode = (mode | 0o600) & ~0o22\n }\n\n // if dirs are readable, then they should be listable\n if (isDir) {\n if (mode & 0o400) {\n mode |= 0o100\n }\n if (mode & 0o40) {\n mode |= 0o10\n }\n if (mode & 0o4) {\n mode |= 0o1\n }\n }\n return mode\n}\n"]}


@@ -0,0 +1,2 @@
export declare const normalizeUnicode: (s: string) => string;
//# sourceMappingURL=normalize-unicode.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"normalize-unicode.d.ts","sourceRoot":"","sources":["../../src/normalize-unicode.ts"],"names":[],"mappings":"AASA,eAAO,MAAM,gBAAgB,GAAI,GAAG,MAAM,KAAG,MAyB5C,CAAA"}


@@ -0,0 +1,38 @@
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.normalizeUnicode = void 0;
// warning: extremely hot code path.
// This has been meticulously optimized for use
// within npm install on large package trees.
// Do not edit without careful benchmarking.
const normalizeCache = Object.create(null);
// Limit the size of this. Very low-sophistication LRU cache
const MAX = 10000;
const cache = new Set();
const normalizeUnicode = (s) => {
if (!cache.has(s)) {
// shake out identical accents and ligatures
normalizeCache[s] = s
.normalize('NFD')
.toLocaleLowerCase('en')
.toLocaleUpperCase('en');
}
else {
cache.delete(s);
}
cache.add(s);
const ret = normalizeCache[s];
let i = cache.size - MAX;
// only prune when we're 10% over the max
if (i > MAX / 10) {
for (const s of cache) {
cache.delete(s);
delete normalizeCache[s];
if (--i <= 0)
break;
}
}
return ret;
};
exports.normalizeUnicode = normalizeUnicode;
//# sourceMappingURL=normalize-unicode.js.map
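
A sketch of the equivalence this cache provides (deep require path assumed, as above): composed and decomposed spellings normalize to the same key, so a filename written NFD-style by macOS compares equal to its NFC form:

const { normalizeUnicode } = require('tar/dist/commonjs/normalize-unicode.js')

const nfc = 'caf\u00e9'  // 'é' as a single code point
const nfd = 'cafe\u0301' // 'e' followed by a combining acute accent
console.log(normalizeUnicode(nfc) === normalizeUnicode(nfd)) // true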


@@ -0,0 +1 @@
{"version":3,"file":"normalize-unicode.js","sourceRoot":"","sources":["../../src/normalize-unicode.ts"],"names":[],"mappings":";;;AAAA,oCAAoC;AACpC,+CAA+C;AAC/C,6CAA6C;AAC7C,4CAA4C;AAC5C,MAAM,cAAc,GAA2B,MAAM,CAAC,MAAM,CAAC,IAAI,CAAC,CAAA;AAElE,4DAA4D;AAC5D,MAAM,GAAG,GAAG,KAAK,CAAA;AACjB,MAAM,KAAK,GAAG,IAAI,GAAG,EAAU,CAAA;AACxB,MAAM,gBAAgB,GAAG,CAAC,CAAS,EAAU,EAAE;IACpD,IAAI,CAAC,KAAK,CAAC,GAAG,CAAC,CAAC,CAAC,EAAE,CAAC;QAClB,4CAA4C;QAC5C,cAAc,CAAC,CAAC,CAAC,GAAG,CAAC;aAClB,SAAS,CAAC,KAAK,CAAC;aAChB,iBAAiB,CAAC,IAAI,CAAC;aACvB,iBAAiB,CAAC,IAAI,CAAC,CAAA;IAC5B,CAAC;SAAM,CAAC;QACN,KAAK,CAAC,MAAM,CAAC,CAAC,CAAC,CAAA;IACjB,CAAC;IACD,KAAK,CAAC,GAAG,CAAC,CAAC,CAAC,CAAA;IAEZ,MAAM,GAAG,GAAG,cAAc,CAAC,CAAC,CAAW,CAAA;IAEvC,IAAI,CAAC,GAAG,KAAK,CAAC,IAAI,GAAG,GAAG,CAAA;IACxB,yCAAyC;IACzC,IAAI,CAAC,GAAG,GAAG,GAAG,EAAE,EAAE,CAAC;QACjB,KAAK,MAAM,CAAC,IAAI,KAAK,EAAE,CAAC;YACtB,KAAK,CAAC,MAAM,CAAC,CAAC,CAAC,CAAA;YACf,OAAO,cAAc,CAAC,CAAC,CAAC,CAAA;YACxB,IAAI,EAAE,CAAC,IAAI,CAAC;gBAAE,MAAK;QACrB,CAAC;IACH,CAAC;IAED,OAAO,GAAG,CAAA;AACZ,CAAC,CAAA;AAzBY,QAAA,gBAAgB,oBAyB5B","sourcesContent":["// warning: extremely hot code path.\n// This has been meticulously optimized for use\n// within npm install on large package trees.\n// Do not edit without careful benchmarking.\nconst normalizeCache: Record<string, string> = Object.create(null)\n\n// Limit the size of this. Very low-sophistication LRU cache\nconst MAX = 10000\nconst cache = new Set<string>()\nexport const normalizeUnicode = (s: string): string => {\n if (!cache.has(s)) {\n // shake out identical accents and ligatures\n normalizeCache[s] = s\n .normalize('NFD')\n .toLocaleLowerCase('en')\n .toLocaleUpperCase('en')\n } else {\n cache.delete(s)\n }\n cache.add(s)\n\n const ret = normalizeCache[s] as string\n\n let i = cache.size - MAX\n // only prune when we're 10% over the max\n if (i > MAX / 10) {\n for (const s of cache) {\n cache.delete(s)\n delete normalizeCache[s]\n if (--i <= 0) break\n }\n }\n\n return ret\n}\n"]}


@@ -0,0 +1,2 @@
export declare const normalizeWindowsPath: (p: string) => string;
//# sourceMappingURL=normalize-windows-path.d.ts.map


@@ -0,0 +1 @@
{"version":3,"file":"normalize-windows-path.d.ts","sourceRoot":"","sources":["../../src/normalize-windows-path.ts"],"names":[],"mappings":"AAQA,eAAO,MAAM,oBAAoB,MAEzB,MAAM,WAC+B,CAAA"}


@@ -0,0 +1,12 @@
"use strict";
// on windows, either \ or / are valid directory separators.
// on unix, \ is a valid character in filenames.
// so, on windows, and only on windows, we replace all \ chars with /,
// so that we can use / as our one and only directory separator char.
Object.defineProperty(exports, "__esModule", { value: true });
exports.normalizeWindowsPath = void 0;
const platform = process.env.TESTING_TAR_FAKE_PLATFORM || process.platform;
exports.normalizeWindowsPath = platform !== 'win32' ?
(p) => p
: (p) => p && p.replace(/\\/g, '/');
//# sourceMappingURL=normalize-windows-path.js.map
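
Behavior at a glance (deep require path assumed; the replacement only happens when platform resolves to win32):

const { normalizeWindowsPath } = require('tar/dist/commonjs/normalize-windows-path.js')

// on win32, or with TESTING_TAR_FAKE_PLATFORM=win32:
//   normalizeWindowsPath('a\\b\\c')  -> 'a/b/c'
// anywhere else it is the identity function, because on unix a
// backslash is a legal filename character:
//   normalizeWindowsPath('a\\b\\c')  -> 'a\\b\\c'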


@@ -0,0 +1 @@
{"version":3,"file":"normalize-windows-path.js","sourceRoot":"","sources":["../../src/normalize-windows-path.ts"],"names":[],"mappings":";AAAA,4DAA4D;AAC5D,gDAAgD;AAChD,sEAAsE;AACtE,qEAAqE;;;AAErE,MAAM,QAAQ,GACZ,OAAO,CAAC,GAAG,CAAC,yBAAyB,IAAI,OAAO,CAAC,QAAQ,CAAA;AAE9C,QAAA,oBAAoB,GAC/B,QAAQ,KAAK,OAAO,CAAC,CAAC;IACpB,CAAC,CAAS,EAAE,EAAE,CAAC,CAAC;IAClB,CAAC,CAAC,CAAC,CAAS,EAAE,EAAE,CAAC,CAAC,IAAI,CAAC,CAAC,OAAO,CAAC,KAAK,EAAE,GAAG,CAAC,CAAA","sourcesContent":["// on windows, either \\ or / are valid directory separators.\n// on unix, \\ is a valid character in filenames.\n// so, on windows, and only on windows, we replace all \\ chars with /,\n// so that we can use / as our one and only directory separator char.\n\nconst platform =\n process.env.TESTING_TAR_FAKE_PLATFORM || process.platform\n\nexport const normalizeWindowsPath =\n platform !== 'win32' ?\n (p: string) => p\n : (p: string) => p && p.replace(/\\\\/g, '/')\n"]}

Some files were not shown because too many files have changed in this diff.