
    Interface LevelBlockstoreInit

    interface LevelBlockstoreInit {
        base?: MultibaseCodec<string>;
        blockRestartInterval?: number;
        blockSize?: number;
        cacheSize?: number;
        compression?: boolean;
        createIfMissing?: boolean;
        errorIfExists?: boolean;
        keyEncoding?: string | PartialEncoding<string, string>;
        maxFileSize?: number;
        maxOpenFiles?: number;
        multithreading?: boolean;
        passive?: boolean;
        prefix?: string;
        valueEncoding?:
            | string
            | PartialEncoding<
                Uint8Array<ArrayBufferLike>,
                Uint8Array<ArrayBufferLike>,
            >;
        version?: string | number;
        writeBufferSize?: number;
    }
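
    As an illustrative sketch, an init object can be built with a subset of these options. The interface is re-declared locally here (with only the numeric and boolean options) so the example is self-contained; the real interface above also includes codec and encoding options:

```typescript
// A local, trimmed-down copy of LevelBlockstoreInit for illustration.
interface LevelBlockstoreInit {
  blockRestartInterval?: number
  blockSize?: number
  cacheSize?: number
  compression?: boolean
  createIfMissing?: boolean
  errorIfExists?: boolean
  writeBufferSize?: number
}

// Example init: mostly defaults, with a larger cache and write buffer.
const init: LevelBlockstoreInit = {
  createIfMissing: true,        // default: true
  compression: true,            // default: true
  cacheSize: 16 * 1024 * 1024,  // double the 8 MiB default
  writeBufferSize: 8 * 1024 * 1024
}
```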

    Properties

    base?: MultibaseCodec<string>

    The multibase codec to use. Note that the codec should be case-insensitive.

    Default: base32upper

    blockRestartInterval?: number

    The number of entries before restarting the "delta encoding" of keys within blocks. Each "restart" point stores the full key for its entry; between restarts, the prefix each key shares with the previous key is omitted. Restarts are similar to keyframes in video encoding and are used to minimise the amount of space required to store keys. This is particularly helpful when using deep namespacing / prefixing in your keys.

    Default: 16
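
    To make the restart mechanism concrete, here is a minimal, hypothetical sketch of delta-encoding keys with restart points. This is not LevelDB's actual on-disk block format, only an illustration of the idea:

```typescript
// Every `restartInterval` entries, the full key is stored (a restart
// point); entries in between store only the suffix remaining after the
// prefix shared with the previous key.
interface Encoded { shared: number; suffix: string }

function deltaEncode (keys: string[], restartInterval = 16): Encoded[] {
  const out: Encoded[] = []
  for (let i = 0; i < keys.length; i++) {
    if (i % restartInterval === 0) {
      out.push({ shared: 0, suffix: keys[i] }) // restart: store full key
      continue
    }
    const prev = keys[i - 1]
    let shared = 0
    while (shared < prev.length && prev[shared] === keys[i][shared]) shared++
    out.push({ shared, suffix: keys[i].slice(shared) })
  }
  return out
}

const encoded = deltaEncode(['/blocks/CIQA', '/blocks/CIQB', '/blocks/CIQC'], 16)
// entry 0 is a restart point holding the full key; later entries omit
// the shared '/blocks/CIQ' prefix and store only the final character
```

    With deeply namespaced keys like these, most of each key is shared with its neighbour, which is why delta encoding saves so much space between restarts.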

    blockSize?: number

    The approximate size of the blocks that make up the table files. The size relates to uncompressed data (hence "approximate"). Blocks are indexed in the table file, and an entry lookup involves reading an entire block and parsing it to find the required entry.

    Default: 4096

    cacheSize?: number

    The size (in bytes) of the in-memory LRU cache that holds frequently used uncompressed block contents.

    Default: 8 * 1024 * 1024
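
    As an illustration of what a byte-budgeted LRU cache does (this is a hypothetical sketch, not LevelDB's implementation), the least-recently-used blocks are evicted once the configured byte budget is exceeded:

```typescript
// Minimal byte-budgeted LRU cache, relying on Map's insertion order
// to track recency: the first entry in the Map is always the
// least-recently-used one.
class LRUCache {
  private entries = new Map<string, Uint8Array>()
  private used = 0
  constructor (private readonly capacity: number) {}

  get (key: string): Uint8Array | undefined {
    const value = this.entries.get(key)
    if (value !== undefined) {
      // refresh recency by re-inserting at the end
      this.entries.delete(key)
      this.entries.set(key, value)
    }
    return value
  }

  set (key: string, value: Uint8Array): void {
    if (this.entries.has(key)) {
      this.used -= this.entries.get(key)!.byteLength
      this.entries.delete(key)
    }
    this.entries.set(key, value)
    this.used += value.byteLength
    // evict least-recently-used entries until back under budget
    while (this.used > this.capacity) {
      const [oldestKey, oldestValue] = this.entries.entries().next().value!
      this.entries.delete(oldestKey)
      this.used -= oldestValue.byteLength
    }
  }
}

const cache = new LRUCache(10)
cache.set('a', new Uint8Array(5))
cache.set('b', new Uint8Array(5))
cache.get('a')                     // touch 'a' so 'b' becomes the LRU entry
cache.set('c', new Uint8Array(5))  // over budget: evicts 'b'
```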

    compression?: boolean

    Unless set to false, all compressible data will be run through the Snappy compression algorithm before being stored. Snappy is very fast so leave this on unless you have good reason to turn it off.

    Default: true

    createIfMissing?: boolean

    If true, create an empty database if one doesn't already exist. If false and the database doesn't exist, opening will fail.

    Default: true

    errorIfExists?: boolean

    If true and the database already exists, opening will fail.

    Default: false

    keyEncoding?: string | PartialEncoding<string, string>

    Encoding to use for keys.

    Default: 'utf8'

    maxFileSize?: number

    The maximum number of bytes to write to a file before switching to a new one. From the LevelDB documentation:

    If your filesystem is more efficient with larger files, you could consider increasing the value. The downside will be longer compactions and hence longer latency / performance hiccups. Another reason to increase this parameter might be when you are initially populating a large database.

    Default: 2 * 1024 * 1024

    maxOpenFiles?: number

    The maximum number of files that LevelDB is allowed to have open at a time. If your database is likely to have a large working set, you may increase this value to prevent file descriptor churn. To calculate the number of files required for your working set, divide your total data size by maxFileSize.

    Default: 1000
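
    The suggested calculation can be sketched directly (the 10 GiB working set below is a hypothetical figure):

```typescript
// Estimate the number of table files in the working set:
// total data size divided by maxFileSize, rounded up.
function filesForWorkingSet (totalBytes: number, maxFileSize = 2 * 1024 * 1024): number {
  return Math.ceil(totalBytes / maxFileSize)
}

// e.g. a 10 GiB working set with the default 2 MiB maxFileSize
const files = filesForWorkingSet(10 * 1024 ** 3)
// → 5120 files, well above the default maxOpenFiles of 1000
```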
    
    multithreading?: boolean

    Allows multi-threaded access to a single DB instance for sharing a DB across multiple worker threads within the same process.

    Default: false

    passive?: boolean

    Wait for, but do not initiate, opening of the database.

    Default: false

    prefix?: string

    Prefix for the IDBDatabase name. Can be set to an empty string.

    Default: 'level-js-'

    valueEncoding?:
        | string
        | PartialEncoding<Uint8Array<ArrayBufferLike>, Uint8Array<ArrayBufferLike>>

    Encoding to use for values.

    Default: 'utf8'

    version?: string | number

    The version to open the IDBDatabase with.

    Default: 1

    writeBufferSize?: number

    The maximum size (in bytes) of the log, held in memory and stored in the .log file on disk. Once this size is exceeded, LevelDB converts the log data into the first level of sorted table files. From the LevelDB documentation:

    Larger values increase performance, especially during bulk loads. Up to two write buffers may be held in memory at the same time, so you may wish to adjust this parameter to control memory usage. Also, a larger write buffer will result in a longer recovery time the next time the database is opened.

    Default: 4 * 1024 * 1024