Helia

    Interface AddOptions

    interface AddOptions {
        blockWriteConcurrency?: number;
        bufferImporter?: BufferImporter;
        chunker?: Chunker;
        chunkValidator?: ChunkValidator;
        cidVersion?: Version;
        dagBuilder?: DAGBuilder;
        fileImportConcurrency?: number;
        layout?: FileLayout;
        leafType?: "file" | "raw";
        onProgress?: (evt: AddEvents) => void;
        rawLeaves?: boolean;
        reduceSingleLeafToSelf?: boolean;
        shardFanoutBits?: number;
        shardSplitThresholdBytes?: number;
        signal?: AbortSignal;
        treeBuilder?: TreeBuilder;
        wrapWithDirectory?: boolean;
    }
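
    As a usage sketch, assuming the @helia/unixfs API where these options are the optional second argument to the add methods:

        import { createHelia } from 'helia'
        import { unixfs } from '@helia/unixfs'

        const helia = await createHelia()
        const fs = unixfs(helia)

        // store a small file, overriding a couple of defaults
        const cid = await fs.addBytes(new TextEncoder().encode('hello world'), {
          rawLeaves: true,
          cidVersion: 1
        })

        console.info(cid.toString())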

    Properties

    blockWriteConcurrency?: number

    How many blocks to hash and write to the block store concurrently. For small numbers of large files this should be high (e.g. 50). Default: 10

    bufferImporter?: BufferImporter

    This option can be used to override the importer internals.

    This function should read Buffers from source and persist them using blockstore.put or similar. entry is the { path, content } entry, where entry.content is an async generator that yields Buffers.

    It should yield functions that return a Promise that resolves to an object with the properties { cid, unixfs, size }, where cid is a CID, unixfs is a UnixFS entry and size is a Number representing the serialized size of the IPLD node that holds the buffer data.

    Values will be pulled from this generator in parallel; the amount of parallelisation is controlled by the blockWriteConcurrency option (default: 10).

    chunker?: Chunker

    The chunking strategy. See ./src/chunker/index.ts for available chunkers. Default: fixedSize
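
    For example, a fixed-size chunker with an explicit chunk size can be passed in. A sketch, assuming the chunkers exported by ipfs-unixfs-importer (whose types this interface reuses) and the fs and bytes values from the example above:

        import { fixedSize } from 'ipfs-unixfs-importer/chunker'

        // split file data into fixed 256 KiB chunks (the default strategy,
        // with the size made explicit)
        const cid = await fs.addBytes(bytes, {
          chunker: fixedSize({ chunkSize: 262144 })
        })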

    chunkValidator?: ChunkValidator

    This option can be used to override the importer internals.

    This function takes input from the content field of imported entries. It should yield Buffer objects constructed from that source, throwing an Error if the input cannot be transformed into Buffers.
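
    A minimal sketch of such a validator. The exact ChunkValidator signature is an assumption here (an async iterable of strings or Uint8Arrays in, Uint8Arrays out) and should be checked against the importer source:

        // hypothetical validator: normalise incoming chunks to Uint8Array
        async function * validateChunks (source: AsyncIterable<Uint8Array | string>): AsyncGenerator<Uint8Array> {
          for await (const chunk of source) {
            if (typeof chunk === 'string') {
              yield new TextEncoder().encode(chunk)
            } else if (chunk instanceof Uint8Array) {
              yield chunk
            } else {
              throw new Error('Cannot convert chunk to Uint8Array')
            }
          }
        }

        const cid = await fs.addBytes(bytes, { chunkValidator: validateChunks })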

    cidVersion?: Version

    The CID version to use when storing the data. Default: 1

    dagBuilder?: DAGBuilder

    This option can be used to override the importer internals.

    This function should read { path, content } entries from source and turn them into DAGs. It should yield a function that returns a Promise that resolves to { cid, path, unixfs, node }, where cid is a CID, path is a string, unixfs is a UnixFS entry and node is a DAGNode.

    Values will be pulled from this generator in parallel; the amount of parallelisation is controlled by the fileImportConcurrency option (default: 50).

    fileImportConcurrency?: number

    How many files to import concurrently. For large numbers of small files this should be high (e.g. 50). Default: 50
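
    Both concurrency options apply when importing multiple files, e.g. via addAll. A sketch, assuming the @helia/unixfs addAll method:

        const source = [
          { path: 'a.txt', content: new TextEncoder().encode('aaa') },
          { path: 'b.txt', content: new TextEncoder().encode('bbb') }
        ]

        // many small files: raise fileImportConcurrency
        for await (const entry of fs.addAll(source, {
          fileImportConcurrency: 50,
          blockWriteConcurrency: 10
        })) {
          console.info(entry.path, entry.cid.toString())
        }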

    layout?: FileLayout

    How the DAGs that represent files are created. See ./src/layout/index.ts for available layouts. Default: balanced
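
    For example, a balanced layout with an explicit maximum width can be passed in. A sketch, assuming the layouts exported by ipfs-unixfs-importer:

        import { balanced } from 'ipfs-unixfs-importer/layout'

        // allow at most 174 links per intermediate file node
        const cid = await fs.addBytes(bytes, {
          layout: balanced({ maxChildrenPerNode: 174 })
        })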

    leafType?: "file" | "raw"

    What type of UnixFS node leaves should be - can be 'file' or 'raw' (ignored when rawLeaves is true).

    This option exists to simulate kubo's trickle dag which uses a combination of 'raw' UnixFS leaves and reduceSingleLeafToSelf: false.

    For modern code the rawLeaves: true option should be used instead so leaves are plain Uint8Arrays without a UnixFS/Protobuf wrapper.

    onProgress?: (evt: AddEvents) => void
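
    A callback invoked with progress events as the import proceeds. A sketch, assuming the events expose type and detail fields as in Helia's progress-events convention:

        await fs.addBytes(bytes, {
          onProgress: (evt) => {
            // e.g. 'unixfs:importer:progress:file:read' events with byte counts
            console.info(evt.type, evt.detail)
          }
        })
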
    rawLeaves?: boolean

    When a file would span multiple DAGNodes, if this is true the leaf nodes will not be wrapped in UnixFS protobufs and will instead contain the raw file bytes. Default: true
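
    To reproduce kubo's historical defaults (CIDv0 with UnixFS-wrapped leaves), turn this off. A sketch, assuming no other options differ between the two implementations:

        // match older `ipfs add` defaults
        const cid = await fs.addBytes(bytes, {
          cidVersion: 0,
          rawLeaves: false
        })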

    reduceSingleLeafToSelf?: boolean

    If the file being imported is small enough to fit into one DAGNode, store the file data in the root node along with the UnixFS metadata instead of in a leaf node, which would then require additional I/O to load. Default: true

    shardFanoutBits?: number

    The number of bits of a hash digest used at each level of sharding to determine the child index. 2**shardFanoutBits dictates the maximum number of children for any shard in the HAMT. Default: 8

    shardSplitThresholdBytes?: number

    If the serialized node is larger than this it might be converted to a HAMT sharded directory. Default: 256KiB
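
    Both sharding options are plain values. A sketch, reusing the source iterable from the addAll example above (with shardFanoutBits: 8, each HAMT shard holds at most 2**8 = 256 children):

        for await (const entry of fs.addAll(source, {
          wrapWithDirectory: true,
          shardFanoutBits: 8,               // max 256 children per shard
          shardSplitThresholdBytes: 65536   // consider sharding above 64 KiB
        })) {
          console.info(entry.path, entry.cid.toString())
        }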

    signal?: AbortSignal
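
    Passed to abort a slow or hung import. For example (AbortSignal.timeout requires a recent Node.js or browser):

        // give up if the import takes longer than 30 seconds
        const cid = await fs.addBytes(bytes, {
          signal: AbortSignal.timeout(30_000)
        })
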
    treeBuilder?: TreeBuilder

    This option can be used to override the importer internals.

    This function should read { cid, path, unixfs, node } entries from source and place them in a directory structure. It should yield an object with the properties { cid, path, unixfs, size }, where cid is a CID, path is a string, unixfs is a UnixFS entry and size is a Number.

    wrapWithDirectory?: boolean

    If true, all imported files and folders will be contained in a directory that will correspond to the CID of the final entry yielded. Default: false
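
    With this enabled, the final entry yielded by an import is the wrapping directory. A sketch using addAll:

        let last
        for await (const entry of fs.addAll([{
          path: 'hello.txt',
          content: new TextEncoder().encode('hi')
        }], {
          wrapWithDirectory: true
        })) {
          last = entry
        }

        // `last` is the wrapping directory entry - its cid resolves to a
        // directory containing hello.txt
        console.info(last?.cid.toString())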