Optional blockWriteConcurrency
How many blocks to hash and write to the block store concurrently. For small numbers of large files this should be high (e.g. 50). Default: 50
Optional bufferImporter
This option can be used to override the importer internals.
This function should read Buffers from source and persist them using blockstore.put or similar.
entry is the { path, content } entry, where entry.content is an async generator that yields Buffers.
It should yield functions that return a Promise that resolves to an object with the properties { cid, unixfs, size } where cid is a CID, unixfs is a UnixFS entry and size is a Number that represents the serialized size of the IPLD node that holds the buffer data.
Values will be pulled from this generator in parallel; the amount of parallelisation is controlled by the blockWriteConcurrency option (default: 10).
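For illustration, a minimal bufferImporter sketch that wraps each incoming chunk in a UnixFS file node and persists it with blockstore.put. The name myBufferImporter is hypothetical and the entry/blockstore typings are loosened for brevity; this is a sketch of the contract described above, not the library's built-in implementation.

```ts
import { UnixFS } from 'ipfs-unixfs'
import * as dagPb from '@ipld/dag-pb'
import { sha256 } from 'multiformats/hashes/sha2'
import { CID } from 'multiformats/cid'

// Sketch of a custom bufferImporter: every Buffer yielded by
// entry.content becomes its own UnixFS file node in the blockstore
async function * myBufferImporter (entry: any, blockstore: any): AsyncGenerator<() => Promise<any>> {
  for await (const buffer of entry.content) {
    yield async () => {
      const unixfs = new UnixFS({ type: 'file', data: buffer })
      const bytes = dagPb.encode({ Data: unixfs.marshal(), Links: [] })
      const cid = CID.createV1(dagPb.code, await sha256.digest(bytes))
      await blockstore.put(cid, bytes)
      // size is the serialized size of the IPLD node holding the data
      return { cid, unixfs, size: bytes.byteLength }
    }
  }
}
```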
Optional chunkValidator
This option can be used to override the importer internals.
This function takes input from the content field of imported entries.
It should transform that input into Buffers, throwing an error if it cannot.
It should yield Buffer objects constructed from the source or throw an Error.
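As an illustration, a chunkValidator sketch that accepts Uint8Arrays and strings and rejects anything else; the name myChunkValidator is hypothetical.

```ts
// Sketch of a custom chunkValidator: normalises strings to bytes and
// throws for any chunk type it does not recognise
async function * myChunkValidator (source: AsyncIterable<unknown>): AsyncGenerator<Uint8Array> {
  for await (const chunk of source) {
    if (chunk instanceof Uint8Array) {
      yield chunk
    } else if (typeof chunk === 'string') {
      yield new TextEncoder().encode(chunk)
    } else {
      throw new Error('Content was not a Uint8Array or string')
    }
  }
}
```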
Optional chunker
The chunking strategy. See ./src/chunker/index.ts for available chunkers. Default: fixedSize
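For example, the built-in fixedSize chunker can be configured and passed in. The 1 MiB chunk size below is arbitrary, and MemoryBlockstore from blockstore-core is used only to make the snippet self-contained.

```ts
import { importer } from 'ipfs-unixfs-importer'
import { fixedSize } from 'ipfs-unixfs-importer/chunker'
import { MemoryBlockstore } from 'blockstore-core'

const blockstore = new MemoryBlockstore()

// split file content into fixed 1 MiB chunks (the size is arbitrary)
for await (const entry of importer(
  [{ path: 'file.txt', content: new TextEncoder().encode('hello world') }],
  blockstore,
  { chunker: fixedSize({ chunkSize: 1024 * 1024 }) }
)) {
  console.log(entry.path, entry.cid.toString())
}
```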
Optional cidVersion
The CID version to use when storing the data. Default: 1
Optional dagBuilder
This option can be used to override the importer internals.
This function should read { path, content } entries from source and turn them into DAGs.
It should yield a function that returns a Promise that resolves to { cid, path, unixfs, node } where cid is a CID, path is a string, unixfs is a UnixFS entry and node is a DAGNode.
Values will be pulled from this generator in parallel; the amount of parallelisation is controlled by the fileImportConcurrency option (default: 50).
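A full dagBuilder has to chunk, lay out and persist every file, so only the expected shape is sketched here; buildDagForEntry is a hypothetical helper standing in for that work and the typings are loosened for brevity.

```ts
// buildDagForEntry is a hypothetical helper that would chunk
// entry.content, persist the blocks and return the root
declare function buildDagForEntry (entry: any, blockstore: any): Promise<{ cid: any, unixfs: any, node: any }>

// Skeleton of a custom dagBuilder: one lazily-evaluated DAG build per entry
async function * myDagBuilder (source: any, blockstore: any): AsyncGenerator<() => Promise<any>> {
  for await (const entry of source) {
    yield async () => {
      const { cid, unixfs, node } = await buildDagForEntry(entry, blockstore)
      return { cid, path: entry.path, unixfs, node }
    }
  }
}
```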
Optional fileImportConcurrency
How many files to import concurrently. For large numbers of small files this should be high (e.g. 50). Default: 10
Optional layout
How the DAG that represents files is created. See ./src/layout/index.ts for available layouts. Default: balanced
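The balanced and trickle layouts are exported from the layout subpath. A usage sketch follows; 174 is the conventional maximum-links-per-node value but any positive number works.

```ts
import { importer } from 'ipfs-unixfs-importer'
import { balanced } from 'ipfs-unixfs-importer/layout'
import { MemoryBlockstore } from 'blockstore-core'

// build a balanced DAG with at most 174 links per node
for await (const entry of importer(
  [{ content: new TextEncoder().encode('hello') }],
  new MemoryBlockstore(),
  { layout: balanced({ maxChildrenPerNode: 174 }) }
)) {
  console.log(entry.cid.toString())
}
```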
Optional leafType
What type of UnixFS node leaves should be; can be 'file' or 'raw' (ignored when rawLeaves is true).
This option exists to simulate kubo's trickle dag, which uses a combination of 'raw' UnixFS leaves and reduceSingleLeafToSelf: false (see the sketch below).
For modern code the rawLeaves: true option should be used instead, so leaves are plain Uint8Arrays without a UnixFS/Protobuf wrapper.
Optional onProgress
Optional rawLeaves
When a file would span multiple DAGNodes, if this is true the leaf nodes will not be wrapped in UnixFS protobufs and will instead contain the raw file bytes. Default: true
Optional reduceSingleLeafToSelf
If the file being imported is small enough to fit into one DAGNode, store the file data in the root node along with the UnixFS metadata instead of in a leaf node, which would otherwise require additional I/O to load. Default: true
Optional shardFanoutBits
The number of bits of a hash digest used at each level of sharding to select the child index. 2**shardFanoutBits dictates the maximum number of children for any shard in the HAMT. Default: 8
Optional shardSplitThresholdBytes
If the serialized node is larger than this it might be converted to a HAMT sharded directory. Default: 256KiB
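For example, to make directories shard earlier and with a wider fanout than the defaults; the values below are illustrative only.

```ts
import { importer } from 'ipfs-unixfs-importer'
import { MemoryBlockstore } from 'blockstore-core'

const files = [
  { path: 'dir/a.txt', content: new TextEncoder().encode('a') },
  { path: 'dir/b.txt', content: new TextEncoder().encode('b') }
]

// shard directories once their serialized size exceeds 64 KiB and
// allow up to 2**10 = 1024 children per shard
for await (const entry of importer(files, new MemoryBlockstore(), {
  shardSplitThresholdBytes: 64 * 1024,
  shardFanoutBits: 10
})) {
  console.log(entry.path, entry.cid.toString())
}
```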
Optional signal
Optional treeBuilder
This option can be used to override the importer internals.
This function should read { cid, path, unixfs, node } entries from source and place them in a directory structure.
It should yield an object with the properties { cid, path, unixfs, size } where cid is a CID, path is a string, unixfs is a UnixFS entry and size is a Number.
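As with dagBuilder, only the shape is sketched here; buildTree is a hypothetical helper that would create and persist the intermediate directory nodes.

```ts
// buildTree is a hypothetical helper that would assemble and persist
// the directory nodes for the collected entries
declare function buildTree (entries: any[], blockstore: any): AsyncGenerator<any>

// Skeleton of a custom treeBuilder: collect all imported files, then
// emit { cid, path, unixfs, size } for every file and directory
async function * myTreeBuilder (source: any, blockstore: any): AsyncGenerator<any> {
  const entries: any[] = []

  for await (const entry of source) {
    entries.push(entry)
  }

  yield * buildTree(entries, blockstore)
}
```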
Optional wrapWithDirectory
If true, all imported files and folders will be contained in a directory that will correspond to the CID of the final entry yielded. Default: false
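Putting it together, a small end-to-end sketch; with wrapWithDirectory: true the last entry yielded is the wrapping directory whose CID covers everything imported. MemoryBlockstore is used only to keep the example self-contained.

```ts
import { importer } from 'ipfs-unixfs-importer'
import { MemoryBlockstore } from 'blockstore-core'

const blockstore = new MemoryBlockstore()

// the final entry yielded is the wrapping directory
for await (const entry of importer(
  [{ path: 'hello.txt', content: new TextEncoder().encode('hello world') }],
  blockstore,
  { wrapWithDirectory: true }
)) {
  console.log(entry.path, entry.cid.toString())
}
```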