Optional blockWriteConcurrency
How many blocks to hash and write to the block store concurrently. For small numbers of large files this should be high (e.g. 50). Default: 10

Optional bufferImporter
This option can be used to override the importer internals.
This function should read Buffers from source and persist them using blockstore.put or similar.
entry is the { path, content } entry, where entry.content is an async generator that yields Buffers.
It should yield functions that return a Promise that resolves to an object with the properties { cid, unixfs, size }, where cid is a CID, unixfs is a UnixFS entry and size is a Number representing the serialized size of the IPLD node that holds the buffer data.
Values will be pulled from this generator in parallel - the amount of parallelisation is controlled by the blockWriteConcurrency option (default: 10).
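For illustration, here is a minimal sketch of a bufferImporter override that persists each Buffer as a raw block. The parameter and result shapes follow the prose above and are simplified assumptions rather than the library's exact BufferImporter type; myBufferImporter is a hypothetical name.

```ts
import { CID } from 'multiformats/cid'
import * as raw from 'multiformats/codecs/raw'
import { sha256 } from 'multiformats/hashes/sha2'

async function * myBufferImporter (
  entry: { path?: string, content: AsyncIterable<Uint8Array> },
  blockstore: { put(cid: CID, block: Uint8Array): Promise<unknown> }
): AsyncGenerator<() => Promise<{ cid: CID, unixfs: undefined, size: number }>> {
  for await (const buf of entry.content) {
    yield async () => {
      // Hash the chunk, derive a raw-codec CID and persist the block
      const digest = await sha256.digest(buf)
      const cid = CID.createV1(raw.code, digest)
      await blockstore.put(cid, buf)
      // No UnixFS wrapper for a raw block in this simplified sketch
      return { cid, unixfs: undefined, size: buf.byteLength }
    }
  }
}
```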
Optional chunker
The chunking strategy. See ./src/chunker/index.ts for available chunkers. Default: fixedSize
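A usage sketch, assuming the blockstore-core MemoryBlockstore for storage and an ESM context with top-level await:

```ts
import { importer } from 'ipfs-unixfs-importer'
import { fixedSize } from 'ipfs-unixfs-importer/chunker'
import { MemoryBlockstore } from 'blockstore-core'

const blockstore = new MemoryBlockstore()

// Split file contents into 1 MiB chunks instead of the default chunk size
for await (const entry of importer([{
  path: 'hello.txt',
  content: new TextEncoder().encode('hello world')
}], blockstore, {
  chunker: fixedSize({ chunkSize: 1024 * 1024 })
})) {
  console.log(entry.path, entry.cid.toString())
}
```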
Optional chunkValidator
This option can be used to override the importer internals.
This function takes input from the content field of imported entries and should transform it into Buffers, throwing an error if it cannot.
It should yield Buffer objects constructed from the source or throw an Error.
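A minimal sketch of such a validator. The simplified signature is an assumption; the library defines the exact ChunkValidator type.

```ts
async function * myChunkValidator (source: AsyncIterable<unknown>): AsyncGenerator<Uint8Array> {
  for await (const chunk of source) {
    if (chunk instanceof Uint8Array) {
      yield chunk
    } else if (typeof chunk === 'string') {
      // Normalise strings to bytes; anything else is rejected
      yield new TextEncoder().encode(chunk)
    } else {
      throw new Error('Content was not a Uint8Array or string')
    }
  }
}
```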
Optional cidVersion
The CID version to use when storing the data. Default: 1
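For example, to emit legacy CIDv0 identifiers (a sketch - CIDv0 only supports the dag-pb codec, so raw leaves are typically disabled alongside it):

```ts
const options = {
  cidVersion: 0 as const, // legacy base58 'Qm...' CIDs; the default is 1
  rawLeaves: false // CIDv0 cannot address raw-codec leaf blocks
}
```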
Optional dagBuilder
This option can be used to override the importer internals.
This function should read { path, content } entries from source and turn them into DAGs.
It should yield a function that returns a Promise that resolves to { cid, path, unixfs, node }, where cid is a CID, path is a string, unixfs is a UnixFS entry and node is a DAGNode.
Values will be pulled from this generator in parallel - the amount of parallelisation is controlled by the fileImportConcurrency option (default: 50).
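To illustrate the shape of this contract, below is a hypothetical dagBuilder that only handles small files: it buffers each entry's content into a single dag-pb node instead of chunking and laying out a full DAG. The signatures are simplified assumptions.

```ts
import * as dagPB from '@ipld/dag-pb'
import { UnixFS } from 'ipfs-unixfs'
import { CID } from 'multiformats/cid'
import { sha256 } from 'multiformats/hashes/sha2'

async function * myDagBuilder (
  source: AsyncIterable<{ path?: string, content?: AsyncIterable<Uint8Array> }>,
  blockstore: { put(cid: CID, block: Uint8Array): Promise<unknown> }
) {
  for await (const entry of source) {
    yield async () => {
      // Buffer the whole file - fine for a sketch, not for large files
      const chunks: Uint8Array[] = []
      for await (const chunk of entry.content ?? []) {
        chunks.push(chunk)
      }
      const data = new Uint8Array(chunks.reduce((len, c) => len + c.byteLength, 0))
      let offset = 0
      for (const c of chunks) {
        data.set(c, offset)
        offset += c.byteLength
      }

      // Wrap the bytes in a UnixFS file entry inside a dag-pb node
      const unixfs = new UnixFS({ type: 'file', data })
      const node = { Data: unixfs.marshal(), Links: [] }
      const block = dagPB.encode(node)
      const digest = await sha256.digest(block)
      const cid = CID.createV1(dagPB.code, digest)
      await blockstore.put(cid, block)

      return { cid, path: entry.path, unixfs, node }
    }
  }
}
```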
Optional dirBuilder
This option can be used to override how a directory IPLD node is built.
This function takes a Directory object and returns a Promise that resolves to an InProgressImportResult.

Optional fileBuilder
This option can be used to override how a file IPLD node is built.
This function takes a File object and returns a Promise that resolves to an InProgressImportResult.
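For orientation only, a heavily simplified dirBuilder sketch follows. It serializes an empty directory node and ignores children, metadata and sharding entirely, just to show the call shape; the library's Directory and InProgressImportResult types are richer than the shapes assumed here.

```ts
import * as dagPB from '@ipld/dag-pb'
import { UnixFS } from 'ipfs-unixfs'
import { CID } from 'multiformats/cid'
import { sha256 } from 'multiformats/hashes/sha2'

async function myDirBuilder (
  dir: { path?: string },
  blockstore: { put(cid: CID, block: Uint8Array): Promise<unknown> }
) {
  // An empty UnixFS directory node; a real builder would add a Link
  // for every child entry and apply metadata and sharding rules
  const unixfs = new UnixFS({ type: 'directory' })
  const block = dagPB.encode({ Data: unixfs.marshal(), Links: [] })
  const digest = await sha256.digest(block)
  const cid = CID.createV1(dagPB.code, digest)
  await blockstore.put(cid, block)
  return { cid, path: dir.path, unixfs, size: block.byteLength }
}
```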
Optional fileImportConcurrency
How many files to import concurrently. For large numbers of small files this should be high (e.g. 50). Default: 50
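A sketch of tuning both concurrency options together (the values are illustrative):

```ts
import { importer } from 'ipfs-unixfs-importer'
import { MemoryBlockstore } from 'blockstore-core'

const blockstore = new MemoryBlockstore()

// Many small files: raise fileImportConcurrency. A few large files:
// raise blockWriteConcurrency instead.
for await (const entry of importer([
  { path: 'a.txt', content: new TextEncoder().encode('a') },
  { path: 'b.txt', content: new TextEncoder().encode('b') }
], blockstore, {
  fileImportConcurrency: 50,
  blockWriteConcurrency: 10
})) {
  console.log(entry.path, entry.cid.toString())
}
```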
Optional layout
How the DAG that represents files is created. See ./src/layout/index.ts for available layouts. Default: balanced
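For example, to build trickle DAGs instead of the default balanced layout (a sketch):

```ts
import { trickle } from 'ipfs-unixfs-importer/layout'

const options = {
  // balanced() and flat() are also exported from the same module
  layout: trickle()
}
```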
Optional leafType
What type of UnixFS node leaves should be - can be 'file' or 'raw' (ignored when rawLeaves is true).
This option exists to simulate kubo's trickle dag, which uses a combination of 'raw' UnixFS leaves and reduceSingleLeafToSelf: false.
For modern code the rawLeaves: true option should be used instead, so leaves are plain Uint8Arrays without a UnixFS/Protobuf wrapper.
Optional onProgress

Optional rawLeaves
When a file would span multiple DAGNodes, if this is true the leaf nodes will not be wrapped in UnixFS protobufs and will instead contain the raw file bytes. Default: true
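A sketch contrasting the kubo-compatible combination described above with the modern default (the trickle layout itself is configured separately via the layout option):

```ts
// Simulate kubo's trickle DAG leaves (legacy compatibility)
const kuboTrickleLike = {
  rawLeaves: false,
  leafType: 'raw' as const,
  reduceSingleLeafToSelf: false
}

// Modern default: leaves are plain Uint8Arrays with no UnixFS/Protobuf wrapper
const modern = {
  rawLeaves: true
}
```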
Optional reduceSingleLeafToSelf
If the file being imported is small enough to fit into one DAGNode, store the file data in the root node along with the UnixFS metadata instead of in a leaf node, which would otherwise require additional I/O to load. Default: true
Optional shardFanoutBits
The number of bits of a hash digest used at each level of sharding to derive the child index. 2**shardFanoutBits dictates the maximum number of children for any shard in the HAMT. Default: 8
Optional shardSplitThresholdBytes
If the serialized node is larger than this it might be converted to a HAMT sharded directory. Default: 256KiB
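A sketch showing both sharding options together:

```ts
const options = {
  // Convert a directory to a HAMT shard once its serialized node
  // exceeds this threshold (256 KiB is the default)
  shardSplitThresholdBytes: 256 * 1024,
  // 2**8 = 256 children per shard, the default fanout
  shardFanoutBits: 8
}
```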
Optional signal

Optional treeBuilder
This option can be used to override the importer internals.
This function should read { cid, path, unixfs, node } entries from source and place them in a directory structure.
It should yield an object with the properties { cid, path, unixfs, size }, where cid is a CID, path is a string, unixfs is a UnixFS entry and size is a Number.
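A skeleton treeBuilder that merely forwards entries, enough to show the shape described above; a real implementation would also build and yield the intermediate directory nodes. The types are simplified assumptions and myTreeBuilder is a hypothetical name.

```ts
interface Entry {
  cid: unknown
  path?: string
  unixfs?: unknown
  size?: number
}

async function * myTreeBuilder (
  source: AsyncIterable<Entry>,
  blockstore: unknown
): AsyncGenerator<Entry> {
  for await (const entry of source) {
    // A complete implementation would insert each entry into a directory
    // tree here and emit the directories once their children are known
    yield entry
  }
}
```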
Optional wrapWithDirectory
If true, all imported files and folders will be contained in a directory that will correspond to the CID of the final entry yielded. Default: false
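A usage sketch - with wrapWithDirectory: true the final entry yielded is the wrapping directory, so its CID addresses everything imported:

```ts
import { importer } from 'ipfs-unixfs-importer'
import { MemoryBlockstore } from 'blockstore-core'

const blockstore = new MemoryBlockstore()

let lastCid: string | undefined
for await (const entry of importer([
  { path: 'a.txt', content: new TextEncoder().encode('a') },
  { path: 'b.txt', content: new TextEncoder().encode('b') }
], blockstore, { wrapWithDirectory: true })) {
  lastCid = entry.cid.toString()
}

// The final entry is the wrapping directory
console.log(lastCid)
```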