Abstract\n\nThis package enables users to store, query, and delete\na large number of key-value pairs on disk.
\n\nThis is especially useful when the data cannot fit into RAM.\nIf you have hundreds of GBs or many TBs of key-value data to store\nand query from, this is the package for you.
\n\nInstallation
\n\nThis package is built for macOS (x86/arm), Windows (64/32-bit), and Linux (x86/arm).\nIt can be installed from PyPI with pip install rocksdict
.
\n\nIntroduction
\n\nBelow is a code example that shows how to do the following:
\n\n\n- Create Rdict
\n- Store something on disk
\n- Close Rdict
\n- Open Rdict again
\n- Check Rdict elements
\n- Iterate from Rdict
\n- Batch get
\n- Delete storage
\n
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict, Options\n\npath = str(\"./test_dict\")\n\n# create a Rdict with default options at `path`\ndb = Rdict(path)\n\n# storing numbers\ndb[1.0] = 1\ndb[1] = 1.0\ndb[\"huge integer\"] = 2343546543243564534233536434567543\ndb[\"good\"] = True\ndb[\"bad\"] = False\ndb[\"bytes\"] = b\"bytes\"\ndb[\"this is a list\"] = [1, 2, 3]\ndb[\"store a dict\"] = {0: 1}\n\n# storing, for example, numpy arrays and pandas tables\nimport numpy as np\nimport pandas as pd\ndb[b\"numpy\"] = np.array([1, 2, 3])\ndb[\"a table\"] = pd.DataFrame({\"a\": [1, 2], \"b\": [2, 1]})\n\n# close Rdict\ndb.close()\n\n# reopen Rdict from disk\ndb = Rdict(path)\nassert db[1.0] == 1\nassert db[1] == 1.0\nassert db[\"huge integer\"] == 2343546543243564534233536434567543\nassert db[\"good\"] == True\nassert db[\"bad\"] == False\nassert db[\"bytes\"] == b\"bytes\"\nassert db[\"this is a list\"] == [1, 2, 3]\nassert db[\"store a dict\"] == {0: 1}\nassert np.all(db[b\"numpy\"] == np.array([1, 2, 3]))\nassert np.all(db[\"a table\"] == pd.DataFrame({\"a\": [1, 2], \"b\": [2, 1]}))\n\n# iterate through all elements\nfor k, v in db.items():\n print(f\"{k} -> {v}\")\n\n# batch get:\nprint(db[[\"good\", \"bad\", 1.0]])\n# [True, False, 1]\n\n# delete the database from disk\ndb.close()\nRdict.destroy(path)\n
\n
\n\nSupported types:
\n\n\n- key:
int, float, bool, str, bytes
\n- value:
int, float, bool, str, bytes
and anything that\nsupports pickle
. \n
\n"}, {"fullname": "rocksdict.Rdict", "modulename": "rocksdict", "qualname": "Rdict", "kind": "class", "doc": "A persistent on-disk dictionary. Supports string, int, float, bytes as key, values.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Rdict\n\ndb = Rdict(\"./test_dir\")\ndb[0] = 1\n\ndb = None\ndb = Rdict(\"./test_dir\")\nassert db[0] == 1\n
\n
\n\nArguments:
\n\n\n- path (str): path to the database
\n- options (Options): Options object
\n- column_families (dict): (name, options) pairs, these
Options
\nmust have the same raw_mode
argument as the main Options
.\nA column family called 'default' is always created. \n- access_type (AccessType): there are four access types:\nReadWrite, ReadOnly, WithTTL, and Secondary; use the\nAccessType class to create them.
\n
\n"}, {"fullname": "rocksdict.Rdict.__init__", "modulename": "rocksdict", "qualname": "Rdict.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.Rdict.set_dumps", "modulename": "rocksdict", "qualname": "Rdict.set_dumps", "kind": "function", "doc": "set custom dumps function
\n", "signature": "(self, /, dumps):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.set_loads", "modulename": "rocksdict", "qualname": "Rdict.set_loads", "kind": "function", "doc": "set custom loads function
\n", "signature": "(self, /, loads):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.set_write_options", "modulename": "rocksdict", "qualname": "Rdict.set_write_options", "kind": "function", "doc": "Optionally disable WAL or sync for this write.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Rdict, Options, WriteBatch, WriteOptions\n\npath = \"_path_for_rocksdb_storageY1\"\ndb = Rdict(path)\n\n# set write options\nwrite_options = WriteOptions()\nwrite_options.set_sync(False)\nwrite_options.disable_wal(True)\ndb.set_write_options(write_options)\n\n# write to db\ndb[\"my key\"] = \"my value\"\ndb[\"key2\"] = \"value2\"\ndb[\"key3\"] = \"value3\"\n\n# remove db\ndel db\nRdict.destroy(path)\n
\n
\n", "signature": "(self, /, write_opt):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.set_read_options", "modulename": "rocksdict", "qualname": "Rdict.set_read_options", "kind": "function", "doc": "Configure Read Options for all the get operations.
\n", "signature": "(self, /, read_opt):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.get", "modulename": "rocksdict", "qualname": "Rdict.get", "kind": "function", "doc": "Get value from key.
\n\nArguments:
\n\n\n- key: the key or list of keys.
\n- default: the default value to return if key not found.
\n- read_opt: override preset read options\n(or use Rdict.set_read_options to preset a read options used by default).
\n
\n\nReturns:
\n\n\n the value for the key if it exists, otherwise None or the provided default value.
\n
\n", "signature": "(self, /, key, default=None, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.put", "modulename": "rocksdict", "qualname": "Rdict.put", "kind": "function", "doc": "Insert key value into database.
\n\nArguments:
\n\n\n- key: the key.
\n- value: the value.
\n- write_opt: override preset write options\n(or use Rdict.set_write_options to preset a write options used by default).
\n
\n", "signature": "(self, /, key, value, write_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.key_may_exist", "modulename": "rocksdict", "qualname": "Rdict.key_may_exist", "kind": "function", "doc": "Check if a key may exist without doing any IO.
\n\nNotes:
\n\n\n If the key definitely does not exist in the database,\n then this method returns False, else True.\n If the caller wants to obtain value when the key is found in memory,\n fetch should be set to True.\n This check is potentially lighter-weight than invoking DB::get().\n One way to make this lighter weight is to avoid doing any IOs.
\n \n The API follows the following principle:
\n \n \n - True, and value found => the key must exist.
\n - True => the key may or may not exist.
\n - False => the key definitely does not exist.
\n
\n \n Flip it around:
\n \n \n - key exists => must return True, but value may or may not be found.
\n - key doesn't exist => might still return True.
\n
\n
\n\nArguments:
\n\n\n- key: Key to check
\n- read_opt: ReadOptions
\n
\n\nReturns:
\n\n\n if fetch = False
,\n returning True implies that the key may exist.\n returning False implies that the key definitely does not exist.\n if fetch = True
,\n returning (True, value) implies that the key is found and definitely exists.\n returning (False, None) implies that the key definitely does not exist.\n returning (True, None) implies that the key may exist.
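\n\nExample (a sketch of both fetch modes):
\n\n\n ::
\n\nfrom rocksdict import Rdict\n\ndb = Rdict(\"./kme_example_dict\")\ndb[\"key\"] = \"value\"\n\n# fetch=False: a single bool that may report false positives\nmay_exist = db.key_may_exist(\"key\")\n\n# fetch=True: a (bool, value-or-None) tuple\nfound, value = db.key_may_exist(\"key\", fetch=True)\nif found and value is not None:\n # the key definitely exists and the value was found in memory\n print(value)\n\ndb.close()\nRdict.destroy(\"./kme_example_dict\")\n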
\n
\n", "signature": "(self, /, key, fetch=False, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.delete", "modulename": "rocksdict", "qualname": "Rdict.delete", "kind": "function", "doc": "Delete entry from the database.
\n\nArguments:
\n\n\n- key: the key.
\n- write_opt: override preset write options\n(or use Rdict.set_write_options to preset a write options used by default).
\n
\n", "signature": "(self, /, key, write_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.iter", "modulename": "rocksdict", "qualname": "Rdict.iter", "kind": "function", "doc": "Reversible for iterating over keys and values.
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict, Options, ReadOptions\n\npath = \"_path_for_rocksdb_storage5\"\ndb = Rdict(path)\n\nfor i in range(50):\n db[i] = i ** 2\n\niter = db.iter()\n\niter.seek_to_first()\n\nj = 0\nwhile iter.valid():\n assert iter.key() == j\n assert iter.value() == j ** 2\n print(f\"{iter.key()} {iter.value()}\")\n iter.next()\n j += 1\n\niter.seek_to_first();\nassert iter.key() == 0\nassert iter.value() == 0\nprint(f\"{iter.key()} {iter.value()}\")\n\niter.seek(25)\nassert iter.key() == 25\nassert iter.value() == 625\nprint(f\"{iter.key()} {iter.value()}\")\n\ndel iter, db\nRdict.destroy(path)\n
\n
\n\nArguments:
\n\n\n- read_opt: ReadOptions
\n
\n\nReturns: Reversible
\n", "signature": "(self, /, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.items", "modulename": "rocksdict", "qualname": "Rdict.items", "kind": "function", "doc": "Iterate through all keys and values pairs.
\n\nExamples:
\n\n\n ::
\n\nfor k, v in db.items():\n print(f\"{k} -> {v}\")\n
\n
\n\nArguments:
\n\n\n- backwards: iteration direction, forward if
False
. \n- from_key: iterate from key, first seek to this key\nor the nearest next key for iteration\n(depending on iteration direction).
\n- read_opt: ReadOptions
\n
\n", "signature": "(self, /, backwards=False, from_key=None, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.keys", "modulename": "rocksdict", "qualname": "Rdict.keys", "kind": "function", "doc": "Iterate through all keys
\n\nExamples:
\n\n\n ::
\n\nall_keys = [k for k in db.keys()]\n
\n
\n\nArguments:
\n\n\n- backwards: iteration direction, forward if
False
. \n- from_key: iterate from key, first seek to this key\nor the nearest next key for iteration\n(depending on iteration direction).
\n- read_opt: ReadOptions
\n
\n", "signature": "(self, /, backwards=False, from_key=None, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.values", "modulename": "rocksdict", "qualname": "Rdict.values", "kind": "function", "doc": "Iterate through all values.
\n\nExamples:
\n\n\n ::
\n\nall_values = [v for v in db.values()]\n
\n
\n\nArguments:
\n\n\n- backwards: iteration direction, forward if
False
. \n- from_key: iterate from key, first seek to this key\nor the nearest next key for iteration\n(depending on iteration direction).
\n- read_opt: ReadOptions, must have the same
raw_mode
argument. \n
\n", "signature": "(self, /, backwards=False, from_key=None, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.flush", "modulename": "rocksdict", "qualname": "Rdict.flush", "kind": "function", "doc": "Manually flush the current column family.
\n\nNotes:
\n\n\n Manually call mem-table flush.\n It is recommended to call flush() or close() before\n stopping the Python program, to ensure that all written\n key-value pairs have been flushed to the disk.
\n
\n\nArguments:
\n\n\n- wait (bool): whether to wait for the flush to finish.
\n
\n", "signature": "(self, /, wait=True):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.flush_wal", "modulename": "rocksdict", "qualname": "Rdict.flush_wal", "kind": "function", "doc": "Flushes the WAL buffer. If sync
is set to True
, also syncs\nthe data to disk.
\n", "signature": "(self, /, sync=True):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.create_column_family", "modulename": "rocksdict", "qualname": "Rdict.create_column_family", "kind": "function", "doc": "Creates column family with given name and options.
\n\nArguments:
\n\n\n- name: name of this column family
\n- options: Rdict Options for this column family
\n
\n\nReturn:
\n\n\n the newly created column family
\n
\n", "signature": "(self, /, name, options=Ellipsis):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.drop_column_family", "modulename": "rocksdict", "qualname": "Rdict.drop_column_family", "kind": "function", "doc": "Drops the column family with the given name
\n", "signature": "(self, /, name):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.get_column_family", "modulename": "rocksdict", "qualname": "Rdict.get_column_family", "kind": "function", "doc": "Get a column family Rdict
\n\nArguments:
\n\n\n- name: name of this column family
\n- options: Rdict Options for this column family
\n
\n\nReturn:
\n\n\n the column family Rdict of this name
\n
\n", "signature": "(self, /, name):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.get_column_family_handle", "modulename": "rocksdict", "qualname": "Rdict.get_column_family_handle", "kind": "function", "doc": "Use this method to obtain a ColumnFamily instance, which can be used in WriteBatch.
\n\nExample:
\n\n\n ::
\n\nwb = WriteBatch()\nfor i in range(100):\n wb.put(i, i**2, db.get_column_family_handle(cf_name_1))\ndb.write(wb)\n\nwb = WriteBatch()\nwb.set_default_column_family(db.get_column_family_handle(cf_name_2))\nfor i in range(100, 200):\n wb[i] = i**2\ndb.write(wb)\n
\n
\n", "signature": "(self, /, name):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.snapshot", "modulename": "rocksdict", "qualname": "Rdict.snapshot", "kind": "function", "doc": "A snapshot of the current column family.
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict\n\ndb = Rdict(\"tmp\")\nfor i in range(100):\n db[i] = i\n\n# take a snapshot\nsnapshot = db.snapshot()\n\nfor i in range(90):\n del db[i]\n\n# 0-89 are no longer in db\nfor k, v in db.items():\n print(f\"{k} -> {v}\")\n\n# but they are still in the snapshot\nfor i in range(100):\n assert snapshot[i] == i\n\n# drop the snapshot\ndel snapshot, db\n\nRdict.destroy(\"tmp\")\n
\n
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.ingest_external_file", "modulename": "rocksdict", "qualname": "Rdict.ingest_external_file", "kind": "function", "doc": "Loads a list of external SST files created with SstFileWriter\ninto the current column family.
\n\nArguments:
\n\n\n- paths: a list of paths
\n- opts: IngestExternalFileOptionsPy instance
\n
\n", "signature": "(self, /, paths, opts=Ellipsis):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.try_catch_up_with_primary", "modulename": "rocksdict", "qualname": "Rdict.try_catch_up_with_primary", "kind": "function", "doc": "Tries to catch up with the primary by reading as much as possible from the\nlog files.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.cancel_all_background", "modulename": "rocksdict", "qualname": "Rdict.cancel_all_background", "kind": "function", "doc": "Request stopping background work, if wait is true wait until it's done.
\n", "signature": "(self, /, wait):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.write", "modulename": "rocksdict", "qualname": "Rdict.write", "kind": "function", "doc": "WriteBatch
\n\nNotes:
\n\n\n This WriteBatch does not write to the current column family.
\n
\n\nArguments:
\n\n\n- write_batch: WriteBatch instance. This instance will be consumed.
\n- write_opt: use default value if not provided.
\n
\n", "signature": "(self, /, write_batch, write_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.delete_range", "modulename": "rocksdict", "qualname": "Rdict.delete_range", "kind": "function", "doc": "Removes the database entries in the range [\"from\", \"to\")
of the current column family.
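\n\nExample (a short sketch):
\n\n\n ::
\n\nfrom rocksdict import Rdict\n\ndb = Rdict(\"./delete_range_example_dict\")\nfor i in range(10):\n db[i] = i\n\n# removes keys 2, 3 and 4: begin is included, end is excluded\ndb.delete_range(2, 5)\nassert db.get(2) is None\nassert db.get(5) == 5\n\ndb.close()\nRdict.destroy(\"./delete_range_example_dict\")\n
\n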
\n\nArguments:
\n\n\n- begin: included
\n- end: excluded
\n- write_opt: WriteOptions
\n
\n", "signature": "(self, /, begin, end, write_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.close", "modulename": "rocksdict", "qualname": "Rdict.close", "kind": "function", "doc": "Flush memory to disk, and drop the current column family.
\n\nNotes:
\n\n\n Calling db.close()
is nearly equivalent to first calling\n db.flush()
and then del db
. However, db.close()
does\n not guarantee the underlying RocksDB to be actually closed.\n Other Column Family Rdict
instances, ColumnFamily
\n (cf handle) instances, iterator instances such as RdictIter
,\n RdictItems
, RdictKeys
, RdictValues
can all keep RocksDB\n alive. del
all associated instances mentioned above\n to actually shut down RocksDB.
\n
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.path", "modulename": "rocksdict", "qualname": "Rdict.path", "kind": "function", "doc": "Return current database path.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.compact_range", "modulename": "rocksdict", "qualname": "Rdict.compact_range", "kind": "function", "doc": "Runs a manual compaction on the Range of keys given for the current Column Family.
\n", "signature": "(self, /, begin, end, compact_opt=Ellipsis):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.set_options", "modulename": "rocksdict", "qualname": "Rdict.set_options", "kind": "function", "doc": "Set options for the current column family.
\n", "signature": "(self, /, options):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.property_value", "modulename": "rocksdict", "qualname": "Rdict.property_value", "kind": "function", "doc": "Retrieves a RocksDB property by name, for the current column family.
\n", "signature": "(self, /, name):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.property_int_value", "modulename": "rocksdict", "qualname": "Rdict.property_int_value", "kind": "function", "doc": "Retrieves a RocksDB property and casts it to an integer\n(for the current column family).
\n\nThe full list of properties that return int values can be found\nhere.
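\n\nExample (\"rocksdb.estimate-num-keys\" is a standard RocksDB property):
\n\n\n ::
\n\nfrom rocksdict import Rdict\n\ndb = Rdict(\"./property_example_dict\")\nfor i in range(100):\n db[i] = i\n\n# approximate number of keys in the current column family\nprint(db.property_int_value(\"rocksdb.estimate-num-keys\"))\n\ndb.close()\nRdict.destroy(\"./property_example_dict\")\n
\n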
\n", "signature": "(self, /, name):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.latest_sequence_number", "modulename": "rocksdict", "qualname": "Rdict.latest_sequence_number", "kind": "function", "doc": "The sequence number of the most recent transaction.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.live_files", "modulename": "rocksdict", "qualname": "Rdict.live_files", "kind": "function", "doc": "Returns a list of all table files with their level, start key and end key
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.destroy", "modulename": "rocksdict", "qualname": "Rdict.destroy", "kind": "function", "doc": "Delete the database.
\n\nArguments:
\n\n\n- path (str): path to this database
\n- options (rocksdict.Options): Rocksdb options object
\n
\n", "signature": "(path, options=Ellipsis):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.repair", "modulename": "rocksdict", "qualname": "Rdict.repair", "kind": "function", "doc": "Repair the database.
\n\nArguments:
\n\n\n- path (str): path to this database
\n- options (rocksdict.Options): Rocksdb options object
\n
\n", "signature": "(path, options=Ellipsis):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.list_cf", "modulename": "rocksdict", "qualname": "Rdict.list_cf", "kind": "function", "doc": "\n", "signature": "(path, options=Ellipsis):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch", "modulename": "rocksdict", "qualname": "WriteBatch", "kind": "class", "doc": "WriteBatch class. Use db.write() to ingest WriteBatch.
\n\nNotes:
\n\n\n A WriteBatch instance can only be ingested once,\n otherwise an Exception will be raised.
\n
\n\nArguments:
\n\n\n- raw_mode (bool): make sure that this is consistent with the Rdict.
\n
\n"}, {"fullname": "rocksdict.WriteBatch.__init__", "modulename": "rocksdict", "qualname": "WriteBatch.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.WriteBatch.set_dumps", "modulename": "rocksdict", "qualname": "WriteBatch.set_dumps", "kind": "function", "doc": "change to a custom dumps function
\n", "signature": "(self, /, dumps):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch.set_default_column_family", "modulename": "rocksdict", "qualname": "WriteBatch.set_default_column_family", "kind": "function", "doc": "Set the default item for a[i] = j
and del a[i]
syntax.
\n\nYou can also use put(key, value, column_family)
to explicitly choose a column family.
\n\nArguments:
\n\n\n- column_family (ColumnFamily | None): column family descriptor or None (for the default column family).
\n
\n", "signature": "(self, /, column_family=None):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch.len", "modulename": "rocksdict", "qualname": "WriteBatch.len", "kind": "function", "doc": "length of the batch
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch.size_in_bytes", "modulename": "rocksdict", "qualname": "WriteBatch.size_in_bytes", "kind": "function", "doc": "Return WriteBatch serialized size (in bytes).
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch.is_empty", "modulename": "rocksdict", "qualname": "WriteBatch.is_empty", "kind": "function", "doc": "Check whether the batch is empty.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch.put", "modulename": "rocksdict", "qualname": "WriteBatch.put", "kind": "function", "doc": "Insert a value into the database under the given key.
\n\nArguments:
\n\n\n- column_family: override the default column family set by set_default_column_family
\n
\n", "signature": "(self, /, key, value, column_family=None):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch.delete", "modulename": "rocksdict", "qualname": "WriteBatch.delete", "kind": "function", "doc": "Removes the database entry for key. Does nothing if the key was not found.
\n\nArguments:
\n\n\n- column_family: override the default column family set by set_default_column_family
\n
\n", "signature": "(self, /, key, column_family=None):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch.delete_range", "modulename": "rocksdict", "qualname": "WriteBatch.delete_range", "kind": "function", "doc": "Remove database entries in column family from start key to end key.
\n\nNotes:
\n\n\n Removes the database entries in the range [\"begin_key\", \"end_key\"), i.e.,\n including \"begin_key\" and excluding \"end_key\". It is not an error if no\n keys exist in the range [\"begin_key\", \"end_key\").
\n
\n\nArguments:
\n\n\n- begin: begin key
\n- end: end key
\n- column_family: override the default column family set by set_default_column_family
\n
\n", "signature": "(self, /, begin, end, column_family=None):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch.clear", "modulename": "rocksdict", "qualname": "WriteBatch.clear", "kind": "function", "doc": "Clear all updates buffered in this batch.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.SstFileWriter", "modulename": "rocksdict", "qualname": "SstFileWriter", "kind": "class", "doc": "SstFileWriter is used to create sst files that can be added to database later\nAll keys in files generated by SstFileWriter will have sequence number = 0.
\n\nArguments:
\n\n\n- options: this Options object must have the same
raw_mode
as the Rdict DB. \n
\n"}, {"fullname": "rocksdict.SstFileWriter.__init__", "modulename": "rocksdict", "qualname": "SstFileWriter.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.SstFileWriter.set_dumps", "modulename": "rocksdict", "qualname": "SstFileWriter.set_dumps", "kind": "function", "doc": "set custom dumps function
\n", "signature": "(self, /, dumps):", "funcdef": "def"}, {"fullname": "rocksdict.SstFileWriter.open", "modulename": "rocksdict", "qualname": "SstFileWriter.open", "kind": "function", "doc": "Prepare SstFileWriter to write into file located at \"file_path\".
\n", "signature": "(self, /, path):", "funcdef": "def"}, {"fullname": "rocksdict.SstFileWriter.finish", "modulename": "rocksdict", "qualname": "SstFileWriter.finish", "kind": "function", "doc": "Finalize writing to sst file and close file.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.SstFileWriter.file_size", "modulename": "rocksdict", "qualname": "SstFileWriter.file_size", "kind": "function", "doc": "returns the current file size
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.AccessType", "modulename": "rocksdict", "qualname": "AccessType", "kind": "class", "doc": "Define DB Access Types.
\n\nNotes:
\n\n\n There are four access types:
\n \n \n - ReadWrite: default value
\n - ReadOnly
\n - WithTTL
\n - Secondary
\n
\n
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict, AccessType\n\n# open with 24 hours ttl\ndb = Rdict(\"./main_path\", access_type = AccessType.with_ttl(24 * 3600))\n\n# open as read_only\ndb = Rdict(\"./main_path\", access_type = AccessType.read_only())\n\n# open as secondary\ndb = Rdict(\"./main_path\", access_type = AccessType.secondary(\"./secondary_path\"))\n
\n
\n"}, {"fullname": "rocksdict.AccessType.__init__", "modulename": "rocksdict", "qualname": "AccessType.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.AccessType.read_write", "modulename": "rocksdict", "qualname": "AccessType.read_write", "kind": "function", "doc": "Define DB Access Types.
\n\nNotes:
\n\n\n There are four access types:
\n \n \n - ReadWrite: default value
\n - ReadOnly
\n - WithTTL
\n - Secondary
\n
\n
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict, AccessType\n\n# open with 24 hours ttl\ndb = Rdict(\"./main_path\", access_type = AccessType.with_ttl(24 * 3600))\n\n# open as read_only\ndb = Rdict(\"./main_path\", access_type = AccessType.read_only())\n\n# open as secondary\ndb = Rdict(\"./main_path\", access_type = AccessType.secondary(\"./secondary_path\"))\n
\n
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.AccessType.read_only", "modulename": "rocksdict", "qualname": "AccessType.read_only", "kind": "function", "doc": "Define DB Access Types.
\n\nNotes:
\n\n\n There are four access types:
\n \n \n - ReadWrite: default value
\n - ReadOnly
\n - WithTTL
\n - Secondary
\n
\n
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict, AccessType\n\n# open with 24 hours ttl\ndb = Rdict(\"./main_path\", access_type = AccessType.with_ttl(24 * 3600))\n\n# open as read_only\ndb = Rdict(\"./main_path\", access_type = AccessType.read_only())\n\n# open as secondary\ndb = Rdict(\"./main_path\", access_type = AccessType.secondary(\"./secondary_path\"))\n
\n
\n", "signature": "(error_if_log_file_exist=False):", "funcdef": "def"}, {"fullname": "rocksdict.AccessType.secondary", "modulename": "rocksdict", "qualname": "AccessType.secondary", "kind": "function", "doc": "Define DB Access Types.
\n\nNotes:
\n\n\n There are four access types:
\n \n \n - ReadWrite: default value
\n - ReadOnly
\n - WithTTL
\n - Secondary
\n
\n
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict, AccessType\n\n# open with 24 hours ttl\ndb = Rdict(\"./main_path\", access_type = AccessType.with_ttl(24 * 3600))\n\n# open as read_only\ndb = Rdict(\"./main_path\", access_type = AccessType.read_only())\n\n# open as secondary\ndb = Rdict(\"./main_path\", access_type = AccessType.secondary(\"./secondary_path\"))\n
\n
\n", "signature": "(secondary_path):", "funcdef": "def"}, {"fullname": "rocksdict.AccessType.with_ttl", "modulename": "rocksdict", "qualname": "AccessType.with_ttl", "kind": "function", "doc": "Define DB Access Types.
\n\nNotes:
\n\n\n There are four access types:
\n \n \n - ReadWrite: default value
\n - ReadOnly
\n - WithTTL
\n - Secondary
\n
\n
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict, AccessType\n\n# open with 24 hours ttl\ndb = Rdict(\"./main_path\", access_type = AccessType.with_ttl(24 * 3600))\n\n# open as read_only\ndb = Rdict(\"./main_path\", access_type = AccessType.read_only())\n\n# open as secondary\ndb = Rdict(\"./main_path\", access_type = AccessType.secondary(\"./secondary_path\"))\n
\n
\n", "signature": "(duration):", "funcdef": "def"}, {"fullname": "rocksdict.WriteOptions", "modulename": "rocksdict", "qualname": "WriteOptions", "kind": "class", "doc": "Optionally disable WAL or sync for this write.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Rdict, Options, WriteBatch, WriteOptions\n\npath = \"_path_for_rocksdb_storageY1\"\ndb = Rdict(path, Options())\n\n# set write options\nwrite_options = WriteOptions()\nwrite_options.set_sync(false)\nwrite_options.disable_wal(true)\ndb.set_write_options(write_options)\n\n# write to db\ndb[\"my key\"] = \"my value\"\ndb[\"key2\"] = \"value2\"\ndb[\"key3\"] = \"value3\"\n\n# remove db\ndel db\nRdict.destroy(path, Options())\n
\n
\n"}, {"fullname": "rocksdict.WriteOptions.__init__", "modulename": "rocksdict", "qualname": "WriteOptions.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.WriteOptions.sync", "modulename": "rocksdict", "qualname": "WriteOptions.sync", "kind": "variable", "doc": "Sets the sync mode. If true, the write will be flushed\nfrom the operating system buffer cache before the write is considered complete.\nIf this flag is true, writes will be slower.
\n\nDefault: false
\n"}, {"fullname": "rocksdict.WriteOptions.disable_wal", "modulename": "rocksdict", "qualname": "WriteOptions.disable_wal", "kind": "variable", "doc": "Sets whether WAL should be active or not.\nIf true, writes will not first go to the write ahead log,\nand the write may got lost after a crash.
\n\nDefault: false
\n"}, {"fullname": "rocksdict.WriteOptions.ignore_missing_column_families", "modulename": "rocksdict", "qualname": "WriteOptions.ignore_missing_column_families", "kind": "variable", "doc": "If true and if user is trying to write to column families that don't exist (they were dropped),\nignore the write (don't return an error). If there are multiple writes in a WriteBatch,\nother writes will succeed.
\n\nDefault: false
\n"}, {"fullname": "rocksdict.WriteOptions.low_pri", "modulename": "rocksdict", "qualname": "WriteOptions.low_pri", "kind": "variable", "doc": "If true, this write request is of lower priority if compaction is\nbehind. In this case, no_slowdown = true, the request will be cancelled\nimmediately with Status::Incomplete() returned. Otherwise, it will be\nslowed down. The slowdown value is determined by RocksDB to guarantee\nit introduces minimum impacts to high priority writes.
\n\nDefault: false
\n"}, {"fullname": "rocksdict.WriteOptions.no_slowdown", "modulename": "rocksdict", "qualname": "WriteOptions.no_slowdown", "kind": "variable", "doc": "If true and we need to wait or sleep for the write request, fails\nimmediately with Status::Incomplete().
\n\nDefault: false
\n"}, {"fullname": "rocksdict.WriteOptions.memtable_insert_hint_per_batch", "modulename": "rocksdict", "qualname": "WriteOptions.memtable_insert_hint_per_batch", "kind": "variable", "doc": "If true, writebatch will maintain the last insert positions of each\nmemtable as hints in concurrent write. It can improve write performance\nin concurrent writes if keys in one writebatch are sequential. In\nnon-concurrent writes (when concurrent_memtable_writes is false) this\noption will be ignored.
\n\nDefault: false
\n"}, {"fullname": "rocksdict.Snapshot", "modulename": "rocksdict", "qualname": "Snapshot", "kind": "class", "doc": "A consistent view of the database at the point of creation.
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict\n\ndb = Rdict(\"tmp\")\nfor i in range(100):\n db[i] = i\n\n# take a snapshot\nsnapshot = db.snapshot()\n\nfor i in range(90):\n del db[i]\n\n# 0-89 are no longer in db\nfor k, v in db.items():\n print(f\"{k} -> {v}\")\n\n# but they are still in the snapshot\nfor i in range(100):\n assert snapshot[i] == i\n\n# drop the snapshot\ndel snapshot, db\n\nRdict.destroy(\"tmp\")\n
\n
\n"}, {"fullname": "rocksdict.Snapshot.__init__", "modulename": "rocksdict", "qualname": "Snapshot.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.Snapshot.iter", "modulename": "rocksdict", "qualname": "Snapshot.iter", "kind": "function", "doc": "Creates an iterator over the data in this snapshot under the given column family, using\nthe default read options.
\n\nArguments:
\n\n\n- read_opt: ReadOptions, must have the same
raw_mode
argument. \n
\n", "signature": "(self, /, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Snapshot.items", "modulename": "rocksdict", "qualname": "Snapshot.items", "kind": "function", "doc": "Iterate through all keys and values pairs.
\n\nArguments:
\n\n\n- backwards: iteration direction, forward if
False
. \n- from_key: iterate from key, first seek to this key\nor the nearest next key for iteration\n(depending on iteration direction).
\n- read_opt: ReadOptions, must have the same
raw_mode
argument. \n
\n", "signature": "(self, /, backwards=False, from_key=None, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Snapshot.keys", "modulename": "rocksdict", "qualname": "Snapshot.keys", "kind": "function", "doc": "Iterate through all keys.
\n\nArguments:
\n\n\n- backwards: iteration direction, forward if
False
. \n- from_key: iterate from key, first seek to this key\nor the nearest next key for iteration\n(depending on iteration direction).
\n- read_opt: ReadOptions, must have the same
raw_mode
argument. \n
\n", "signature": "(self, /, backwards=False, from_key=None, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Snapshot.values", "modulename": "rocksdict", "qualname": "Snapshot.values", "kind": "function", "doc": "Iterate through all values.
\n\nArguments:
\n\n\n- backwards: iteration direction, forward if
False
. \n- from_key: iterate from key, first seek to this key\nor the nearest next key for iteration\n(depending on iteration direction).
\n- read_opt: ReadOptions, must have the same
raw_mode
argument. \n
\n", "signature": "(self, /, backwards=False, from_key=None, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter", "modulename": "rocksdict", "qualname": "RdictIter", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.RdictIter.__init__", "modulename": "rocksdict", "qualname": "RdictIter.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.RdictIter.valid", "modulename": "rocksdict", "qualname": "RdictIter.valid", "kind": "function", "doc": "Returns true
if the iterator is valid. An iterator is invalidated when\nit reaches the end of its defined range, or when it encounters an error.
\n\nTo check whether the iterator encountered an error after valid
has\nreturned false
, use the status
method. status
will never\nreturn an error when valid
is true
.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.status", "modulename": "rocksdict", "qualname": "RdictIter.status", "kind": "function", "doc": "Returns an error Result
if the iterator has encountered an error\nduring operation. When an error is encountered, the iterator is\ninvalidated and valid
will return false
when called.
\n\nPerforming a seek will discard the current status.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.seek_to_first", "modulename": "rocksdict", "qualname": "RdictIter.seek_to_first", "kind": "function", "doc": "Seeks to the first key in the database.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Rdict, Options, ReadOptions\n\npath = \"_path_for_rocksdb_storage5\"\ndb = Rdict(path, Options())\niter = db.iter(ReadOptions())\n\n# Iterate all keys from the start in lexicographic order\niter.seek_to_first()\n\nwhile iter.valid():\n print(f\"{iter.key()} {iter.value()}\")\n iter.next()\n\n# Read just the first key\niter.seek_to_first();\nprint(f\"{iter.key()} {iter.value()}\")\n\ndel iter, db\nRdict.destroy(path, Options())\n
\n
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.seek_to_last", "modulename": "rocksdict", "qualname": "RdictIter.seek_to_last", "kind": "function", "doc": "Seeks to the last key in the database.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Rdict, Options, ReadOptions\n\npath = \"_path_for_rocksdb_storage6\"\ndb = Rdict(path, Options())\niter = db.iter(ReadOptions())\n\n# Iterate all keys from the start in lexicographic order\niter.seek_to_last()\n\nwhile iter.valid():\n print(f\"{iter.key()} {iter.value()}\")\n iter.prev()\n\n# Read just the last key\niter.seek_to_last();\nprint(f\"{iter.key()} {iter.value()}\")\n\ndel iter, db\nRdict.destroy(path, Options())\n
\n
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.seek", "modulename": "rocksdict", "qualname": "RdictIter.seek", "kind": "function", "doc": "Seeks to the specified key or the first key that lexicographically follows it.
\n\nThis method will attempt to seek to the specified key. If that key does not exist, it will\nfind and seek to the key that lexicographically follows it instead.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Rdict, Options, ReadOptions\n\npath = \"_path_for_rocksdb_storage6\"\ndb = Rdict(path, Options())\niter = db.iter(ReadOptions())\n\n# Read the first string key that starts with 'a'\niter.seek(\"a\");\nprint(f\"{iter.key()} {iter.value()}\")\n\ndel iter, db\nRdict.destroy(path, Options())\n
\n
\n", "signature": "(self, /, key):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.seek_for_prev", "modulename": "rocksdict", "qualname": "RdictIter.seek_for_prev", "kind": "function", "doc": "Seeks to the specified key, or the first key that lexicographically precedes it.
\n\nLike .seek()
this method will attempt to seek to the specified key.\nThe difference with .seek()
is that if the specified key do not exist, this method will\nseek to key that lexicographically precedes it instead.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Rdict, Options, ReadOptions\n\npath = \"_path_for_rocksdb_storage6\"\ndb = Rdict(path, Options())\niter = db.iter(ReadOptions())\n\n# Read the last key that starts with 'a'\nseek_for_prev(\"b\")\nprint(f\"{iter.key()} {iter.value()}\")\n\ndel iter, db\nRdict.destroy(path, Options())\n
\n
\n", "signature": "(self, /, key):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.next", "modulename": "rocksdict", "qualname": "RdictIter.next", "kind": "function", "doc": "Seeks to the next key.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.prev", "modulename": "rocksdict", "qualname": "RdictIter.prev", "kind": "function", "doc": "Seeks to the previous key.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.key", "modulename": "rocksdict", "qualname": "RdictIter.key", "kind": "function", "doc": "Returns the current key.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.value", "modulename": "rocksdict", "qualname": "RdictIter.value", "kind": "function", "doc": "Returns the current value.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Options", "modulename": "rocksdict", "qualname": "Options", "kind": "class", "doc": "Database-wide options around performance and behavior.
\n\nPlease read the official tuning guide\nand most importantly, measure performance under realistic workloads with realistic hardware.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, Rdict, DBCompactionStyle\n\ndef badly_tuned_for_somebody_elses_disk():\n\n path = \"path/for/rocksdb/storageX\"\n\n opts = Options()\n opts.create_if_missing(true)\n opts.set_max_open_files(10000)\n opts.set_use_fsync(false)\n opts.set_bytes_per_sync(8388608)\n opts.optimize_for_point_lookup(1024)\n opts.set_table_cache_num_shard_bits(6)\n opts.set_max_write_buffer_number(32)\n opts.set_write_buffer_size(536870912)\n opts.set_target_file_size_base(1073741824)\n opts.set_min_write_buffer_number_to_merge(4)\n opts.set_level_zero_stop_writes_trigger(2000)\n opts.set_level_zero_slowdown_writes_trigger(0)\n opts.set_compaction_style(DBCompactionStyle.universal())\n opts.set_disable_auto_compactions(true)\n\n return Rdict(path, opts)\n
\n
\n\nArguments:
\n\n\n- raw_mode (bool): set this to True to operate in raw mode (i.e.\nit will only allow bytes as key-value pairs, and is compatible\nwith other RockDB database).
\n
\n"}, {"fullname": "rocksdict.Options.__init__", "modulename": "rocksdict", "qualname": "Options.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.Options.load_latest", "modulename": "rocksdict", "qualname": "Options.load_latest", "kind": "function", "doc": "Load latest options from the rocksdb path
\n\nReturns a tuple, where the first item is Options
\nand the second item is a Dict
of column families.
\n", "signature": "(path, env=Ellipsis, ignore_unknown_options=False, cache=Ellipsis):", "funcdef": "def"}, {"fullname": "rocksdict.Options.increase_parallelism", "modulename": "rocksdict", "qualname": "Options.increase_parallelism", "kind": "function", "doc": "By default, RocksDB uses only one background thread for flush and\ncompaction. Calling this function will set it up such that total of\ntotal_threads
is used. Good value for total_threads
is the number of\ncores. You almost definitely want to call this function if your system is\nbottlenecked by RocksDB.
\n", "signature": "(self, /, parallelism):", "funcdef": "def"}, {"fullname": "rocksdict.Options.optimize_level_style_compaction", "modulename": "rocksdict", "qualname": "Options.optimize_level_style_compaction", "kind": "function", "doc": "Optimize level style compaction.
\n\nDefault values for some parameters in Options
are not optimized for heavy\nworkloads and big datasets, which means you might observe write stalls under\nsome conditions.
\n\nThis can be used as one of the starting points for tuning RocksDB options in\nsuch cases.
\n\nInternally, it sets write_buffer_size
, min_write_buffer_number_to_merge
,\nmax_write_buffer_number
, level0_file_num_compaction_trigger
,\ntarget_file_size_base
, max_bytes_for_level_base
, so it can override if those\nparameters were set before.
\n\nIt sets buffer sizes so that memory consumption would be constrained by\nmemtable_memory_budget
.
\n", "signature": "(self, /, memtable_memory_budget):", "funcdef": "def"}, {"fullname": "rocksdict.Options.optimize_universal_style_compaction", "modulename": "rocksdict", "qualname": "Options.optimize_universal_style_compaction", "kind": "function", "doc": "Optimize universal style compaction.
\n\nDefault values for some parameters in Options
are not optimized for heavy\nworkloads and big datasets, which means you might observe write stalls under\nsome conditions.
\n\nThis can be used as one of the starting points for tuning RocksDB options in\nsuch cases.
\n\nInternally, it sets write_buffer_size
, min_write_buffer_number_to_merge
,\nmax_write_buffer_number
, level0_file_num_compaction_trigger
,\ntarget_file_size_base
, max_bytes_for_level_base
, so it can override if those\nparameters were set before.
\n\nIt sets buffer sizes so that memory consumption would be constrained by\nmemtable_memory_budget
.
\n", "signature": "(self, /, memtable_memory_budget):", "funcdef": "def"}, {"fullname": "rocksdict.Options.create_if_missing", "modulename": "rocksdict", "qualname": "Options.create_if_missing", "kind": "function", "doc": "If true, any column families that didn't exist when opening the database\nwill be created.
\n\nDefault: true
\n", "signature": "(self, /, create_if_missing):", "funcdef": "def"}, {"fullname": "rocksdict.Options.create_missing_column_families", "modulename": "rocksdict", "qualname": "Options.create_missing_column_families", "kind": "function", "doc": "If true, any column families that didn't exist when opening the database\nwill be created.
\n\nDefault: false
\n", "signature": "(self, /, create_missing_cfs):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_error_if_exists", "modulename": "rocksdict", "qualname": "Options.set_error_if_exists", "kind": "function", "doc": "Specifies whether an error should be raised if the database already exists.
\n\nDefault: false
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_paranoid_checks", "modulename": "rocksdict", "qualname": "Options.set_paranoid_checks", "kind": "function", "doc": "Enable/disable paranoid checks.
\n\nIf true, the implementation will do aggressive checking of the\ndata it is processing and will stop early if it detects any\nerrors. This may have unforeseen ramifications: for example, a\ncorruption of one DB entry may cause a large number of entries to\nbecome unreadable or for the entire DB to become unopenable.\nIf any of the writes to the database fails (Put, Delete, Merge, Write),\nthe database will switch to read-only mode and fail all other\nWrite operations.
\n\nDefault: false
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_db_paths", "modulename": "rocksdict", "qualname": "Options.set_db_paths", "kind": "function", "doc": "A list of paths where SST files can be put into, with its target size.\nNewer data is placed into paths specified earlier in the vector while\nolder data gradually moves to paths specified later in the vector.
\n\nFor example, you have a flash device with 10GB allocated for the DB,\nas well as a hard drive of 2TB, you should config it to be:\n [{\"/flash_path\", 10GB}, {\"/hard_drive\", 2TB}]
\n\nThe system will try to guarantee data under each path is close to but\nnot larger than the target size. But current and future file sizes used\nby determining where to place a file are based on best-effort estimation,\nwhich means there is a chance that the actual size under the directory\nis slightly more than target size under some workloads. User should give\nsome buffer room for those cases.
\n\nIf none of the paths has sufficient room to place a file, the file will\nbe placed to the last path anyway, despite to the target size.
\n\nPlacing newer data to earlier paths is also best-efforts. User should\nexpect user files to be placed in higher levels in some extreme cases.
\n\nIf left empty, only one path will be used, which is path
passed when\nopening the DB.
\n\nDefault: empty
\n\nfrom rocksdict import Options, DBPath\n\nopt = Options()\nflash_path = DBPath(\"/flash_path\", 10 * 1024 * 1024 * 1024) # 10 GB\nhard_drive = DBPath(\"/hard_drive\", 2 * 1024 * 1024 * 1024 * 1024) # 2 TB\nopt.set_db_paths([flash_path, hard_drive])\n
\n", "signature": "(self, /, paths):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_env", "modulename": "rocksdict", "qualname": "Options.set_env", "kind": "function", "doc": "Use the specified object to interact with the environment,\ne.g. to read/write files, schedule background work, etc. In the near\nfuture, support for doing storage operations such as read/write files\nthrough env will be deprecated in favor of file_system.
\n", "signature": "(self, /, env):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_compression_type", "modulename": "rocksdict", "qualname": "Options.set_compression_type", "kind": "function", "doc": "Sets the compression algorithm that will be used for compressing blocks.
\n\nDefault: DBCompressionType::Snappy
(DBCompressionType::None
if\nsnappy feature is not enabled).
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, DBCompressionType\n\nopts = Options()\nopts.set_compression_type(DBCompressionType.snappy())\n
\n
\n", "signature": "(self, /, t):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_compression_per_level", "modulename": "rocksdict", "qualname": "Options.set_compression_per_level", "kind": "function", "doc": "Different levels can have different compression policies. There\nare cases where most lower levels would like to use quick compression\nalgorithms while the higher levels (which have more data) use\ncompression algorithms that have better compression but could\nbe slower. This array, if non-empty, should have an entry for\neach level of the database; these override the value specified in\nthe previous field 'compression'.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, DBCompressionType\n\nopts = Options()\nopts.set_compression_per_level([\n DBCompressionType.none(),\n DBCompressionType.none(),\n DBCompressionType.snappy(),\n DBCompressionType.snappy(),\n DBCompressionType.snappy()\n])\n
\n
\n", "signature": "(self, /, level_types):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_compression_options", "modulename": "rocksdict", "qualname": "Options.set_compression_options", "kind": "function", "doc": "Maximum size of dictionaries used to prime the compression library.\nEnabling dictionary can improve compression ratios when there are\nrepetitions across data blocks.
\n\nThe dictionary is created by sampling the SST file data. If\nzstd_max_train_bytes
is nonzero, the samples are passed through zstd's\ndictionary generator. Otherwise, the random samples are used directly as\nthe dictionary.
\n\nWhen compression dictionary is disabled, we compress and write each block\nbefore buffering data for the next one. When compression dictionary is\nenabled, we buffer all SST file data in-memory so we can sample it, as data\ncan only be compressed and written after the dictionary has been finalized.\nSo users of this feature may see increased memory usage.
\n\nDefault: 0
\n", "signature": "(self, /, w_bits, level, strategy, max_dict_bytes):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_zstd_max_train_bytes", "modulename": "rocksdict", "qualname": "Options.set_zstd_max_train_bytes", "kind": "function", "doc": "Sets maximum size of training data passed to zstd's dictionary trainer. Using zstd's\ndictionary trainer can achieve even better compression ratio improvements than using\nmax_dict_bytes
alone.
\n\nThe training data will be used to generate a dictionary of max_dict_bytes.
\n\nDefault: 0.
\n", "signature": "(self, /, value):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_compaction_readahead_size", "modulename": "rocksdict", "qualname": "Options.set_compaction_readahead_size", "kind": "function", "doc": "If non-zero, we perform bigger reads when doing compaction. If you're\nrunning RocksDB on spinning disks, you should set this to at least 2MB.\nThat way RocksDB's compaction is doing sequential instead of random reads.
\n\nWhen non-zero, we also force new_table_reader_for_compaction_inputs to\ntrue.
\n\nDefault: 0
\n", "signature": "(self, /, compaction_readahead_size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_level_compaction_dynamic_level_bytes", "modulename": "rocksdict", "qualname": "Options.set_level_compaction_dynamic_level_bytes", "kind": "function", "doc": "Allow RocksDB to pick dynamic base of bytes for levels.\nWith this feature turned on, RocksDB will automatically adjust max bytes for each level.\nThe goal of this feature is to have lower bound on size amplification.
\n\nDefault: false.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_prefix_extractor", "modulename": "rocksdict", "qualname": "Options.set_prefix_extractor", "kind": "function", "doc": "\n", "signature": "(self, /, prefix_extractor):", "funcdef": "def"}, {"fullname": "rocksdict.Options.optimize_for_point_lookup", "modulename": "rocksdict", "qualname": "Options.optimize_for_point_lookup", "kind": "function", "doc": "\n", "signature": "(self, /, cache_size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_optimize_filters_for_hits", "modulename": "rocksdict", "qualname": "Options.set_optimize_filters_for_hits", "kind": "function", "doc": "Sets the optimize_filters_for_hits flag
\n\nDefault: false
\n", "signature": "(self, /, optimize_for_hits):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_delete_obsolete_files_period_micros", "modulename": "rocksdict", "qualname": "Options.set_delete_obsolete_files_period_micros", "kind": "function", "doc": "Sets the periodicity when obsolete files get deleted.
\n\nThe files that get out of scope by compaction\nprocess will still get automatically delete on every compaction,\nregardless of this setting.
\n\nDefault: 6 hours
\n", "signature": "(self, /, micros):", "funcdef": "def"}, {"fullname": "rocksdict.Options.prepare_for_bulk_load", "modulename": "rocksdict", "qualname": "Options.prepare_for_bulk_load", "kind": "function", "doc": "Prepare the DB for bulk loading.
\n\nAll data will be in level 0 without any automatic compaction.\nIt's recommended to manually call CompactRange(NULL, NULL) before reading\nfrom the database, because otherwise the read can be very slow.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_open_files", "modulename": "rocksdict", "qualname": "Options.set_max_open_files", "kind": "function", "doc": "Sets the number of open files that can be used by the DB. You may need to\nincrease this if your database has a large working set. Value -1
means\nfiles opened are always kept open. You can estimate number of files based\non target_file_size_base and target_file_size_multiplier for level-based\ncompaction. For universal-style compaction, you can usually set it to -1
.
\n\nDefault: -1
\n", "signature": "(self, /, nfiles):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_file_opening_threads", "modulename": "rocksdict", "qualname": "Options.set_max_file_opening_threads", "kind": "function", "doc": "If max_open_files is -1, DB will open all files on DB::Open(). You can\nuse this option to increase the number of threads used to open the files.\nDefault: 16
\n", "signature": "(self, /, nthreads):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_use_fsync", "modulename": "rocksdict", "qualname": "Options.set_use_fsync", "kind": "function", "doc": "If true, then every store to stable storage will issue a fsync.\nIf false, then every store to stable storage will issue a fdatasync.\nThis parameter should be set to true while storing data to\nfilesystem like ext3 that can lose files after a reboot.
\n\nDefault: false
\n", "signature": "(self, /, useit):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_db_log_dir", "modulename": "rocksdict", "qualname": "Options.set_db_log_dir", "kind": "function", "doc": "Specifies the absolute info LOG dir.
\n\nIf it is empty, the log files will be in the same dir as data.\nIf it is non empty, the log files will be in the specified dir,\nand the db data dir's absolute path will be used as the log file\nname's prefix.
\n\nDefault: empty
\n", "signature": "(self, /, path):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_bytes_per_sync", "modulename": "rocksdict", "qualname": "Options.set_bytes_per_sync", "kind": "function", "doc": "Allows OS to incrementally sync files to disk while they are being\nwritten, asynchronously, in the background. This operation can be used\nto smooth out write I/Os over time. Users shouldn't rely on it for\npersistency guarantee.\nIssue one request for every bytes_per_sync written. 0
turns it off.
\n\nDefault: 0
\n\nYou may consider using rate_limiter to regulate write rate to device.\nWhen rate limiter is enabled, it automatically enables bytes_per_sync\nto 1MB.
\n\nThis option applies to table files
\n", "signature": "(self, /, nbytes):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_wal_bytes_per_sync", "modulename": "rocksdict", "qualname": "Options.set_wal_bytes_per_sync", "kind": "function", "doc": "Same as bytes_per_sync, but applies to WAL files.
\n\nDefault: 0, turned off
\n\nDynamically changeable through SetDBOptions() API.
\n", "signature": "(self, /, nbytes):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_writable_file_max_buffer_size", "modulename": "rocksdict", "qualname": "Options.set_writable_file_max_buffer_size", "kind": "function", "doc": "Sets the maximum buffer size that is used by WritableFileWriter.
\n\nOn Windows, we need to maintain an aligned buffer for writes.\nWe allow the buffer to grow until it's size hits the limit in buffered\nIO and fix the buffer size when using direct IO to ensure alignment of\nwrite requests if the logical sector size is unusual
\n\nDefault: 1024 * 1024 (1 MB)
\n\nDynamically changeable through SetDBOptions() API.
\n", "signature": "(self, /, nbytes):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_allow_concurrent_memtable_write", "modulename": "rocksdict", "qualname": "Options.set_allow_concurrent_memtable_write", "kind": "function", "doc": "If true, allow multi-writers to update mem tables in parallel.\nOnly some memtable_factory-s support concurrent writes; currently it\nis implemented only for SkipListFactory. Concurrent memtable writes\nare not compatible with inplace_update_support or filter_deletes.\nIt is strongly recommended to set enable_write_thread_adaptive_yield\nif you are going to use this feature.
\n\nDefault: true
\n", "signature": "(self, /, allow):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_enable_write_thread_adaptive_yield", "modulename": "rocksdict", "qualname": "Options.set_enable_write_thread_adaptive_yield", "kind": "function", "doc": "If true, threads synchronizing with the write batch group leader will wait for up to\nwrite_thread_max_yield_usec before blocking on a mutex. This can substantially improve\nthroughput for concurrent workloads, regardless of whether allow_concurrent_memtable_write\nis enabled.
\n\nDefault: true
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_sequential_skip_in_iterations", "modulename": "rocksdict", "qualname": "Options.set_max_sequential_skip_in_iterations", "kind": "function", "doc": "Specifies whether an iteration->Next() sequentially skips over keys with the same user-key or not.
\n\nThis number specifies the number of keys (with the same userkey)\nthat will be sequentially skipped before a reseek is issued.
\n\nDefault: 8
\n", "signature": "(self, /, num):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_use_direct_reads", "modulename": "rocksdict", "qualname": "Options.set_use_direct_reads", "kind": "function", "doc": "Enable direct I/O mode for reading\nthey may or may not improve performance depending on the use case
\n\nFiles will be opened in \"direct I/O\" mode\nwhich means that data read from the disk will not be cached or\nbuffered. The hardware buffer of the devices may however still\nbe used. Memory mapped files are not impacted by these parameters.
\n\nDefault: false
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_use_direct_io_for_flush_and_compaction", "modulename": "rocksdict", "qualname": "Options.set_use_direct_io_for_flush_and_compaction", "kind": "function", "doc": "Enable direct I/O mode for flush and compaction
\n\nFiles will be opened in \"direct I/O\" mode\nwhich means that data written to the disk will not be cached or\nbuffered. The hardware buffer of the devices may however still\nbe used. Memory mapped files are not impacted by these parameters.\nThis may or may not improve performance depending on the use case.
\n\nDefault: false
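\n\nExample (a minimal sketch enabling direct I/O for reads and for flush/compaction writes; whether this helps is workload dependent):
\n\n\n ::
\n\nfrom rocksdict import Options\n\nopts = Options()\n# bypass the OS page cache in both directions\nopts.set_use_direct_reads(True)\nopts.set_use_direct_io_for_flush_and_compaction(True)\n
\n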
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_is_fd_close_on_exec", "modulename": "rocksdict", "qualname": "Options.set_is_fd_close_on_exec", "kind": "function", "doc": "Enable/dsiable child process inherit open files.
\n\nDefault: true
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_table_cache_num_shard_bits", "modulename": "rocksdict", "qualname": "Options.set_table_cache_num_shard_bits", "kind": "function", "doc": "Sets the number of shards used for table cache.
\n\nDefault: 6
\n", "signature": "(self, /, nbits):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_target_file_size_multiplier", "modulename": "rocksdict", "qualname": "Options.set_target_file_size_multiplier", "kind": "function", "doc": "By default target_file_size_multiplier is 1, which means\nby default files in different levels will have similar size.
\n\nDynamically changeable through SetOptions() API
\n", "signature": "(self, /, multiplier):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_min_write_buffer_number", "modulename": "rocksdict", "qualname": "Options.set_min_write_buffer_number", "kind": "function", "doc": "Sets the minimum number of write buffers that will be merged together\nbefore writing to storage. If set to 1
, then\nall write buffers are flushed to L0 as individual files and this increases\nread amplification because a get request has to check all of these\nfiles. Also, an in-memory merge may result in writing less\ndata to storage if there are duplicate records in each of these\nindividual write buffers.
\n\nDefault: 1
\n", "signature": "(self, /, nbuf):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_write_buffer_number", "modulename": "rocksdict", "qualname": "Options.set_max_write_buffer_number", "kind": "function", "doc": "Sets the maximum number of write buffers that are built up in memory.\nThe default and the minimum number is 2, so that when 1 write buffer\nis being flushed to storage, new writes can continue to the other\nwrite buffer.\nIf max_write_buffer_number > 3, writing will be slowed down to\noptions.delayed_write_rate if we are writing to the last write buffer\nallowed.
\n\nDefault: 2
\n", "signature": "(self, /, nbuf):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_write_buffer_size", "modulename": "rocksdict", "qualname": "Options.set_write_buffer_size", "kind": "function", "doc": "Sets the amount of data to build up in memory (backed by an unsorted log\non disk) before converting to a sorted on-disk file.
\n\nLarger values increase performance, especially during bulk loads.\nUp to max_write_buffer_number write buffers may be held in memory\nat the same time,\nso you may wish to adjust this parameter to control memory usage.\nAlso, a larger write buffer will result in a longer recovery time\nthe next time the database is opened.
\n\nNote that write_buffer_size is enforced per column family.\nSee db_write_buffer_size for sharing memory across column families.
\n\nDefault: 0x4000000
(64MiB)
\n\nDynamically changeable through SetOptions() API
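\n\nExample (a minimal sketch; 128 MiB is illustrative):
\n\n\n ::
\n\nfrom rocksdict import Options\n\nopts = Options()\n# let each memtable grow to 128 MiB before it is flushed\nopts.set_write_buffer_size(128 * 1024 * 1024)\n
\n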
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_db_write_buffer_size", "modulename": "rocksdict", "qualname": "Options.set_db_write_buffer_size", "kind": "function", "doc": "Amount of data to build up in memtables across all column\nfamilies before writing to disk.
\n\nThis is distinct from write_buffer_size, which enforces a limit\nfor a single memtable.
\n\nThis feature is disabled by default. Specify a non-zero value\nto enable it.
\n\nDefault: 0 (disabled)
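\n\nExample (a minimal sketch; a 256 MiB cap across all column families is illustrative):
\n\n\n ::
\n\nfrom rocksdict import Options\n\nopts = Options()\n# cap total memtable memory across all column families\nopts.set_db_write_buffer_size(256 * 1024 * 1024)\n
\n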
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_bytes_for_level_base", "modulename": "rocksdict", "qualname": "Options.set_max_bytes_for_level_base", "kind": "function", "doc": "Control maximum total data size for a level.\nmax_bytes_for_level_base is the max total for level-1.\nMaximum number of bytes for level L can be calculated as\n(max_bytes_for_level_base) * (max_bytes_for_level_multiplier ^ (L-1))\nFor example, if max_bytes_for_level_base is 200MB, and if\nmax_bytes_for_level_multiplier is 10, total data size for level-1\nwill be 200MB, total file size for level-2 will be 2GB,\nand total file size for level-3 will be 20GB.
\n\nDefault: 0x10000000
(256MiB).
\n\nDynamically changeable through SetOptions() API
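\n\nExample (a minimal sketch; with these illustrative values level-1 holds up to 512 MiB and level-2 up to 5 GiB):
\n\n\n ::
\n\nfrom rocksdict import Options\n\nopts = Options()\nopts.set_max_bytes_for_level_base(512 * 1024 * 1024)\nopts.set_max_bytes_for_level_multiplier(10.0)\n
\n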
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_bytes_for_level_multiplier", "modulename": "rocksdict", "qualname": "Options.set_max_bytes_for_level_multiplier", "kind": "function", "doc": "Default: 10
\n", "signature": "(self, /, mul):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_manifest_file_size", "modulename": "rocksdict", "qualname": "Options.set_max_manifest_file_size", "kind": "function", "doc": "The manifest file is rolled over on reaching this limit.\nThe older manifest file be deleted.\nThe default value is MAX_INT so that roll-over does not take place.
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_target_file_size_base", "modulename": "rocksdict", "qualname": "Options.set_target_file_size_base", "kind": "function", "doc": "Sets the target file size for compaction.\ntarget_file_size_base is per-file size for level-1.\nTarget file size for level L can be calculated by\ntarget_file_size_base * (target_file_size_multiplier ^ (L-1))\nFor example, if target_file_size_base is 2MB and\ntarget_file_size_multiplier is 10, then each file on level-1 will\nbe 2MB, and each file on level 2 will be 20MB,\nand each file on level-3 will be 200MB.
\n\nDefault: 0x4000000
(64MiB)
\n\nDynamically changeable through SetOptions() API
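\n\nExample (a minimal sketch; 64 MiB mirrors the documented default):
\n\n\n ::
\n\nfrom rocksdict import Options\n\nopts = Options()\n# per-file size target for level-1 SST files\nopts.set_target_file_size_base(64 * 1024 * 1024)\n
\n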
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_min_write_buffer_number_to_merge", "modulename": "rocksdict", "qualname": "Options.set_min_write_buffer_number_to_merge", "kind": "function", "doc": "Sets the minimum number of write buffers that will be merged together\nbefore writing to storage. If set to 1
, then\nall write buffers are flushed to L0 as individual files and this increases\nread amplification because a get request has to check all of these\nfiles. Also, an in-memory merge may result in writing less\ndata to storage if there are duplicate records in each of these\nindividual write buffers.
\n\nDefault: 1
\n", "signature": "(self, /, to_merge):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_level_zero_file_num_compaction_trigger", "modulename": "rocksdict", "qualname": "Options.set_level_zero_file_num_compaction_trigger", "kind": "function", "doc": "Sets the number of files to trigger level-0 compaction. A value < 0
means that\nlevel-0 compaction will not be triggered by number of files at all.
\n\nDefault: 4
\n\nDynamically changeable through SetOptions() API
\n", "signature": "(self, /, n):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_level_zero_slowdown_writes_trigger", "modulename": "rocksdict", "qualname": "Options.set_level_zero_slowdown_writes_trigger", "kind": "function", "doc": "Sets the soft limit on number of level-0 files. We start slowing down writes at this\npoint. A value < 0
means that no writing slow down will be triggered by\nnumber of files in level-0.
\n\nDefault: 20
\n\nDynamically changeable through SetOptions() API
\n", "signature": "(self, /, n):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_level_zero_stop_writes_trigger", "modulename": "rocksdict", "qualname": "Options.set_level_zero_stop_writes_trigger", "kind": "function", "doc": "Sets the maximum number of level-0 files. We stop writes at this point.
\n\nDefault: 24
\n\nDynamically changeable through SetOptions() API
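\n\nExample (a minimal sketch configuring all three level-0 triggers together; the values mirror the documented defaults):
\n\n\n ::
\n\nfrom rocksdict import Options\n\nopts = Options()\nopts.set_level_zero_file_num_compaction_trigger(4)  # start compacting\nopts.set_level_zero_slowdown_writes_trigger(20)  # slow down writes\nopts.set_level_zero_stop_writes_trigger(24)  # stop writes\n
\n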
\n", "signature": "(self, /, n):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_compaction_style", "modulename": "rocksdict", "qualname": "Options.set_compaction_style", "kind": "function", "doc": "Sets the compaction style.
\n\nDefault: DBCompactionStyle.level()
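\n\nExample (a minimal sketch; it assumes DBCompactionStyle can be imported from rocksdict, as the default shown above suggests):
\n\n\n ::
\n\nfrom rocksdict import Options, DBCompactionStyle\n\nopts = Options()\nopts.set_compaction_style(DBCompactionStyle.level())\n
\n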
\n", "signature": "(self, /, style):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_universal_compaction_options", "modulename": "rocksdict", "qualname": "Options.set_universal_compaction_options", "kind": "function", "doc": "Sets the options needed to support Universal Style compactions.
\n", "signature": "(self, /, uco):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_fifo_compaction_options", "modulename": "rocksdict", "qualname": "Options.set_fifo_compaction_options", "kind": "function", "doc": "Sets the options for FIFO compaction style.
\n", "signature": "(self, /, fco):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_unordered_write", "modulename": "rocksdict", "qualname": "Options.set_unordered_write", "kind": "function", "doc": "Sets unordered_write to true trades higher write throughput with\nrelaxing the immutability guarantee of snapshots. This violates the\nrepeatability one expects from ::Get from a snapshot, as well as\n:MultiGet and Iterator's consistent-point-in-time view property.\nIf the application cannot tolerate the relaxed guarantees, it can implement\nits own mechanisms to work around that and yet benefit from the higher\nthroughput. Using TransactionDB with WRITE_PREPARED write policy and\ntwo_write_queues=true is one way to achieve immutable snapshots despite\nunordered_write.
\n\nBy default, i.e., when it is false, rocksdb does not advance the sequence\nnumber for new snapshots unless all the writes with lower sequence numbers\nare already finished. This provides the immutability that we expect from\nsnapshots. Moreover, since Iterator and MultiGet internally depend on\nsnapshots, the snapshot immutability results in Iterator and MultiGet\noffering a consistent-point-in-time view. If set to true, although the\nRead-Your-Own-Write property is still provided, the snapshot immutability\nproperty is relaxed: the writes issued after the snapshot is obtained (with\nlarger sequence numbers) will still not be visible to the reads from that\nsnapshot; however, there still might be pending writes (with lower sequence\nnumbers) that will change the state visible to the snapshot after they\nland in the memtable.
\n\nDefault: false
\n", "signature": "(self, /, unordered):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_subcompactions", "modulename": "rocksdict", "qualname": "Options.set_max_subcompactions", "kind": "function", "doc": "Sets maximum number of threads that will\nconcurrently perform a compaction job by breaking it into multiple,\nsmaller ones that are run simultaneously.
\n\nDefault: 1 (i.e. no subcompactions)
\n", "signature": "(self, /, num):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_background_jobs", "modulename": "rocksdict", "qualname": "Options.set_max_background_jobs", "kind": "function", "doc": "Sets maximum number of concurrent background jobs\n(compactions and flushes).
\n\nDefault: 2
\n\nDynamically changeable through SetDBOptions() API.
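\n\nExample (a minimal sketch; the job counts are illustrative and should be sized to the available cores):
\n\n\n ::
\n\nfrom rocksdict import Options\n\nopts = Options()\nopts.set_max_background_jobs(4)  # flushes and compactions combined\nopts.set_max_subcompactions(2)  # split a compaction across threads\n
\n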
\n", "signature": "(self, /, jobs):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_disable_auto_compactions", "modulename": "rocksdict", "qualname": "Options.set_disable_auto_compactions", "kind": "function", "doc": "Disables automatic compactions. Manual compactions can still\nbe issued on this column family
\n\nDefault: false
\n\nDynamically changeable through SetOptions() API
\n", "signature": "(self, /, disable):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_memtable_huge_page_size", "modulename": "rocksdict", "qualname": "Options.set_memtable_huge_page_size", "kind": "function", "doc": "SetMemtableHugePageSize sets the page size for huge page for\narena used by the memtable.\nIf <=0, it won't allocate from huge page but from malloc.\nUsers are responsible to reserve huge pages for it to be allocated. For\nexample:\n sysctl -w vm.nr_hugepages=20\nSee linux doc Documentation/vm/hugetlbpage.txt\nIf there isn't enough free huge page available, it will fall back to\nmalloc.
\n\nDynamically changeable through SetOptions() API
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_successive_merges", "modulename": "rocksdict", "qualname": "Options.set_max_successive_merges", "kind": "function", "doc": "Sets the maximum number of successive merge operations on a key in the memtable.
\n\nWhen a merge operation is added to the memtable and the maximum number of\nsuccessive merges is reached, the value of the key will be calculated and\ninserted into the memtable instead of the merge operation. This will\nensure that there are never more than max_successive_merges merge\noperations in the memtable.
\n\nDefault: 0 (disabled)
\n", "signature": "(self, /, num):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_bloom_locality", "modulename": "rocksdict", "qualname": "Options.set_bloom_locality", "kind": "function", "doc": "Control locality of bloom filter probes to improve cache miss rate.\nThis option only applies to memtable prefix bloom and plaintable\nprefix bloom. It essentially limits the max number of cache lines each\nbloom filter check can touch.
\n\nThis optimization is turned off when set to 0. The number should never\nbe greater than the number of probes. This option can boost performance\nfor in-memory workloads but should be used with care since it can cause\na higher false positive rate.
\n\nDefault: 0
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_inplace_update_support", "modulename": "rocksdict", "qualname": "Options.set_inplace_update_support", "kind": "function", "doc": "Enable/disable thread-safe inplace updates.
\n\nAn update is performed in place only if:
\n\n\n- key exists in current memtable
\n- sizeof(new_value) <= sizeof(old_value)
\n- the existing value for that key is a put, i.e. kTypeValue
\n
\n\nDefault: false.
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_inplace_update_locks", "modulename": "rocksdict", "qualname": "Options.set_inplace_update_locks", "kind": "function", "doc": "Sets the number of locks used for inplace update.
\n\nDefault: 10000 when inplace_update_support = true, otherwise 0.
\n", "signature": "(self, /, num):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_bytes_for_level_multiplier_additional", "modulename": "rocksdict", "qualname": "Options.set_max_bytes_for_level_multiplier_additional", "kind": "function", "doc": "Different max-size multipliers for different levels.\nThese are multiplied by max_bytes_for_level_multiplier to arrive\nat the max-size of each level.
\n\nDefault: 1
\n\nDynamically changeable through SetOptions() API
\n", "signature": "(self, /, level_values):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_skip_checking_sst_file_sizes_on_db_open", "modulename": "rocksdict", "qualname": "Options.set_skip_checking_sst_file_sizes_on_db_open", "kind": "function", "doc": "If true, then DB::Open() will not fetch and check sizes of all sst files.\nThis may significantly speed up startup if there are many sst files,\nespecially when using non-default Env with expensive GetFileSize().\nWe'll still check that all required sst files exist.\nIf paranoid_checks is false, this option is ignored, and sst files are\nnot checked at all.
\n\nDefault: false
\n", "signature": "(self, /, value):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_write_buffer_size_to_maintain", "modulename": "rocksdict", "qualname": "Options.set_max_write_buffer_size_to_maintain", "kind": "function", "doc": "The total maximum size(bytes) of write buffers to maintain in memory\nincluding copies of buffers that have already been flushed. This parameter\nonly affects trimming of flushed buffers and does not affect flushing.\nThis controls the maximum amount of write history that will be available\nin memory for conflict checking when Transactions are used. The actual\nsize of write history (flushed Memtables) might be higher than this limit\nif further trimming will reduce write history total size below this\nlimit. For example, if max_write_buffer_size_to_maintain is set to 64MB,\nand there are three flushed Memtables, with sizes of 32MB, 20MB, 20MB.\nBecause trimming the next Memtable of size 20MB will reduce total memory\nusage to 52MB which is below the limit, RocksDB will stop trimming.
\n\nWhen using an OptimisticTransactionDB:\nIf this value is too low, some transactions may fail at commit time due\nto not being able to determine whether there were any write conflicts.
\n\nWhen using a TransactionDB:\nIf Transaction::SetSnapshot is used, TransactionDB will read either\nin-memory write buffers or SST files to do write-conflict checking.\nIncreasing this value can reduce the number of reads to SST files\ndone for conflict detection.
\n\nSetting this value to 0 will cause write buffers to be freed immediately\nafter they are flushed. If this value is set to -1,\n'max_write_buffer_number * write_buffer_size' will be used.
\n\nDefault:\nIf using a TransactionDB/OptimisticTransactionDB, the default value will\nbe set to the value of 'max_write_buffer_number * write_buffer_size'\nif it is not explicitly set by the user. Otherwise, the default is 0.
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_enable_pipelined_write", "modulename": "rocksdict", "qualname": "Options.set_enable_pipelined_write", "kind": "function", "doc": "By default, a single write thread queue is maintained. The thread gets\nto the head of the queue becomes write batch group leader and responsible\nfor writing to WAL and memtable for the batch group.
\n\nIf enable_pipelined_write is true, separate write thread queues are\nmaintained for WAL writes and memtable writes. A write thread first enters the\nWAL writer queue and then the memtable writer queue. A pending thread on the\nWAL writer queue thus only has to wait for previous writers to finish their\nWAL writing but not the memtable writing. Enabling the feature may improve\nwrite throughput and reduce latency of the prepare phase of two-phase\ncommit.
\n\nDefault: false
\n", "signature": "(self, /, value):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_memtable_factory", "modulename": "rocksdict", "qualname": "Options.set_memtable_factory", "kind": "function", "doc": "Defines the underlying memtable implementation.\nSee official wiki for more information.\nDefaults to using a skiplist.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, MemtableFactory\nopts = Options()\nfactory = MemtableFactory.hash_skip_list(bucket_count=1_000_000,\n height=4,\n branching_factor=4)\n\n# hash_skip_list does not support concurrent memtable writes\nopts.set_allow_concurrent_memtable_write(False)\nopts.set_memtable_factory(factory)\n
\n
\n", "signature": "(self, /, factory):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_block_based_table_factory", "modulename": "rocksdict", "qualname": "Options.set_block_based_table_factory", "kind": "function", "doc": "\n", "signature": "(self, /, factory):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_cuckoo_table_factory", "modulename": "rocksdict", "qualname": "Options.set_cuckoo_table_factory", "kind": "function", "doc": "Sets the table factory to a CuckooTableFactory (the default table\nfactory is a block-based table factory that provides a default\nimplementation of TableBuilder and TableReader with default\nBlockBasedTableOptions).\nSee official wiki for more information on this table format.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, CuckooTableOptions\n\nopts = Options()\nfactory_opts = CuckooTableOptions()\nfactory_opts.set_hash_ratio(0.8)\nfactory_opts.set_max_search_depth(20)\nfactory_opts.set_cuckoo_block_size(10)\nfactory_opts.set_identity_as_first_hash(True)\nfactory_opts.set_use_module_hash(False)\n\nopts.set_cuckoo_table_factory(factory_opts)\n
\n
\n", "signature": "(self, /, factory):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_plain_table_factory", "modulename": "rocksdict", "qualname": "Options.set_plain_table_factory", "kind": "function", "doc": "This is a factory that provides TableFactory objects.\nDefault: a block-based table factory that provides a default\nimplementation of TableBuilder and TableReader with default\nBlockBasedTableOptions.\nSets the factory as plain table.\nSee official wiki for more\ninformation.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, PlainTableFactoryOptions\n\nopts = Options()\nfactory_opts = PlainTableFactoryOptions()\nfactory_opts.user_key_length = 0\nfactory_opts.bloom_bits_per_key = 20\nfactory_opts.hash_table_ratio = 0.75\nfactory_opts.index_sparseness = 16\n\nopts.set_plain_table_factory(factory_opts)\n
\n
\n", "signature": "(self, /, options):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_min_level_to_compress", "modulename": "rocksdict", "qualname": "Options.set_min_level_to_compress", "kind": "function", "doc": "Sets the start level to use compression.
\n", "signature": "(self, /, lvl):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_report_bg_io_stats", "modulename": "rocksdict", "qualname": "Options.set_report_bg_io_stats", "kind": "function", "doc": "Measure IO stats in compactions and flushes, if true
.
\n\nDefault: false
\n", "signature": "(self, /, enable):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_total_wal_size", "modulename": "rocksdict", "qualname": "Options.set_max_total_wal_size", "kind": "function", "doc": "Once write-ahead logs exceed this size, we will start forcing the flush of\ncolumn families whose memtables are backed by the oldest live WAL file\n(i.e. the ones that are causing all the space amplification).
\n\nDefault: 0
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_wal_recovery_mode", "modulename": "rocksdict", "qualname": "Options.set_wal_recovery_mode", "kind": "function", "doc": "Recovery mode to control the consistency while replaying WAL.
\n\nDefault: DBRecoveryMode::PointInTime
\n", "signature": "(self, /, mode):", "funcdef": "def"}, {"fullname": "rocksdict.Options.enable_statistics", "modulename": "rocksdict", "qualname": "Options.enable_statistics", "kind": "function", "doc": "\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Options.get_statistics", "modulename": "rocksdict", "qualname": "Options.get_statistics", "kind": "function", "doc": "\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_stats_dump_period_sec", "modulename": "rocksdict", "qualname": "Options.set_stats_dump_period_sec", "kind": "function", "doc": "If not zero, dump rocksdb.stats
to LOG every stats_dump_period_sec
.
\n\nDefault: 600
(10 mins)
\n", "signature": "(self, /, period):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_stats_persist_period_sec", "modulename": "rocksdict", "qualname": "Options.set_stats_persist_period_sec", "kind": "function", "doc": "If not zero, dump rocksdb.stats to RocksDB to LOG every stats_persist_period_sec
.
\n\nDefault: 600
(10 mins)
\n", "signature": "(self, /, period):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_advise_random_on_open", "modulename": "rocksdict", "qualname": "Options.set_advise_random_on_open", "kind": "function", "doc": "When set to true, reading SST files will opt out of the filesystem's\nreadahead. Setting this to false may improve sequential iteration\nperformance.
\n\nDefault: true
\n", "signature": "(self, /, advise):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_use_adaptive_mutex", "modulename": "rocksdict", "qualname": "Options.set_use_adaptive_mutex", "kind": "function", "doc": "Enable/disable adaptive mutex, which spins in the user space before resorting to kernel.
\n\nThis could reduce context switch when the mutex is not\nheavily contended. However, if the mutex is hot, we could end up\nwasting spin time.
\n\nDefault: false
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_num_levels", "modulename": "rocksdict", "qualname": "Options.set_num_levels", "kind": "function", "doc": "Sets the number of levels for this database.
\n", "signature": "(self, /, n):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_memtable_prefix_bloom_ratio", "modulename": "rocksdict", "qualname": "Options.set_memtable_prefix_bloom_ratio", "kind": "function", "doc": "When a prefix_extractor
is defined through opts.set_prefix_extractor
this\ncreates a prefix bloom filter for each memtable with the size of\nwrite_buffer_size * memtable_prefix_bloom_ratio
(capped at 0.25).
\n\nDefault: 0
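\n\nExample (a minimal sketch using the prefix extractor referenced above; the 8-byte prefix and 0.1 ratio are illustrative):
\n\n\n ::
\n\nfrom rocksdict import Options, SliceTransform\n\nopts = Options()\nopts.set_prefix_extractor(SliceTransform.create_fixed_prefix(8))\nopts.set_memtable_prefix_bloom_ratio(0.1)\n
\n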
\n", "signature": "(self, /, ratio):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_compaction_bytes", "modulename": "rocksdict", "qualname": "Options.set_max_compaction_bytes", "kind": "function", "doc": "Sets the maximum number of bytes in all compacted files.\nWe try to limit number of bytes in one compaction to be lower than this\nthreshold. But it's not guaranteed.
\n\nValue 0 will be sanitized.
\n\nDefault: target_file_size_base * 25
\n", "signature": "(self, /, nbytes):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_wal_dir", "modulename": "rocksdict", "qualname": "Options.set_wal_dir", "kind": "function", "doc": "Specifies the absolute path of the directory the\nwrite-ahead log (WAL) should be written to.
\n\nDefault: same directory as the database
\n", "signature": "(self, /, path):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_wal_ttl_seconds", "modulename": "rocksdict", "qualname": "Options.set_wal_ttl_seconds", "kind": "function", "doc": "Sets the WAL ttl in seconds.
\n\nThe following two options affect how archived logs will be deleted.
\n\n\n- If both set to 0, logs will be deleted asap and will not get into\nthe archive.
\n- If wal_ttl_seconds is 0 and wal_size_limit_mb is not 0,\nWAL files will be checked every 10 min and if total size is greater\nthan wal_size_limit_mb, they will be deleted starting with the\nearliest until size_limit is met. All empty files will be deleted.
\n- If wal_ttl_seconds is not 0 and wal_size_limit_mb is 0, then\nWAL files will be checked every wal_ttl_seconds / 2 and those that\nare older than wal_ttl_seconds will be deleted.
\n- If both are not 0, WAL files will be checked every 10 min and both\nchecks will be performed with ttl being first.
\n
\n\nDefault: 0
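\n\nExample (a minimal sketch; per the rules above, archived WAL files are kept for at most one hour, subject to a 1024 MB size cap; both values are illustrative):
\n\n\n ::
\n\nfrom rocksdict import Options\n\nopts = Options()\nopts.set_wal_ttl_seconds(3600)\nopts.set_wal_size_limit_mb(1024)\n
\n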
\n", "signature": "(self, /, secs):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_wal_size_limit_mb", "modulename": "rocksdict", "qualname": "Options.set_wal_size_limit_mb", "kind": "function", "doc": "Sets the WAL size limit in MB.
\n\nIf the total size of WAL files is greater than wal_size_limit_mb,\nthey will be deleted starting with the earliest until size_limit is met.
\n\nDefault: 0
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_manifest_preallocation_size", "modulename": "rocksdict", "qualname": "Options.set_manifest_preallocation_size", "kind": "function", "doc": "Sets the number of bytes to preallocate (via fallocate) the manifest files.
\n\nDefault is 4MB, which is reasonable to reduce random IO\nas well as prevent overallocation for mounts that preallocate\nlarge amounts of data (such as xfs's allocsize option).
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_skip_stats_update_on_db_open", "modulename": "rocksdict", "qualname": "Options.set_skip_stats_update_on_db_open", "kind": "function", "doc": "If true, then DB::Open() will not update the statistics used to optimize\ncompaction decision by loading table properties from many files.\nTurning off this feature will improve DBOpen time especially in disk environment.
\n\nDefault: false
\n", "signature": "(self, /, skip):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_keep_log_file_num", "modulename": "rocksdict", "qualname": "Options.set_keep_log_file_num", "kind": "function", "doc": "Specify the maximal number of info log files to be kept.
\n\nDefault: 1000
\n", "signature": "(self, /, nfiles):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_allow_mmap_writes", "modulename": "rocksdict", "qualname": "Options.set_allow_mmap_writes", "kind": "function", "doc": "Allow the OS to mmap file for writing.
\n\nDefault: false
\n", "signature": "(self, /, is_enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_allow_mmap_reads", "modulename": "rocksdict", "qualname": "Options.set_allow_mmap_reads", "kind": "function", "doc": "Allow the OS to mmap file for reading sst tables.
\n\nDefault: false
\n", "signature": "(self, /, is_enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_atomic_flush", "modulename": "rocksdict", "qualname": "Options.set_atomic_flush", "kind": "function", "doc": "Guarantee that all column families are flushed together atomically.\nThis option applies to both manual flushes (db.flush()
) and automatic\nbackground flushes caused when memtables are filled.
\n\nNote that this is only useful when the WAL is disabled. When using the\nWAL, writes are always consistent across column families.
\n\nDefault: false
\n", "signature": "(self, /, atomic_flush):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_row_cache", "modulename": "rocksdict", "qualname": "Options.set_row_cache", "kind": "function", "doc": "Sets global cache for table-level rows. Cache must outlive DB instance which uses it.
\n\nDefault: null (disabled)\nNot supported in ROCKSDB_LITE mode!
\n", "signature": "(self, /, cache):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_ratelimiter", "modulename": "rocksdict", "qualname": "Options.set_ratelimiter", "kind": "function", "doc": "Use to control write rate of flush and compaction. Flush has higher\npriority than compaction.\nIf rate limiter is enabled, bytes_per_sync is set to 1MB by default.
\n\nDefault: disable
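\n\nExample (a minimal sketch; 16 MiB/s with a 100 ms refill period and fairness 10, all illustrative values):
\n\n\n ::
\n\nfrom rocksdict import Options\n\nopts = Options()\nopts.set_ratelimiter(16 * 1024 * 1024,  # rate_bytes_per_sec\n 100_000,  # refill_period_us\n 10)  # fairness\n
\n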
\n", "signature": "(self, /, rate_bytes_per_sec, refill_period_us, fairness):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_log_file_size", "modulename": "rocksdict", "qualname": "Options.set_max_log_file_size", "kind": "function", "doc": "Sets the maximal size of the info log file.
\n\nIf the log file is larger than max_log_file_size
, a new info log file\nwill be created. If max_log_file_size
is equal to zero, all logs will\nbe written to one log file.
\n\nDefault: 0
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options\n\noptions = Options()\noptions.set_max_log_file_size(0)\n
\n
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_log_file_time_to_roll", "modulename": "rocksdict", "qualname": "Options.set_log_file_time_to_roll", "kind": "function", "doc": "Sets the time for the info log file to roll (in seconds).
\n\nIf specified with non-zero value, log file will be rolled\nif it has been active longer than log_file_time_to_roll
.\nDefault: 0 (disabled)
\n", "signature": "(self, /, secs):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_recycle_log_file_num", "modulename": "rocksdict", "qualname": "Options.set_recycle_log_file_num", "kind": "function", "doc": "Controls the recycling of log files.
\n\nIf non-zero, previously written log files will be reused for new logs,\noverwriting the old data. The value indicates how many such files we will\nkeep around at any point in time for later use. This is more efficient\nbecause the blocks are already allocated and fdatasync does not need to\nupdate the inode after each write.
\n\nDefault: 0
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options\n\noptions = Options()\noptions.set_recycle_log_file_num(5)\n
\n
\n", "signature": "(self, /, num):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_soft_pending_compaction_bytes_limit", "modulename": "rocksdict", "qualname": "Options.set_soft_pending_compaction_bytes_limit", "kind": "function", "doc": "Sets the threshold at which all writes will be slowed down to at least delayed_write_rate if estimated\nbytes needed to be compaction exceed this threshold.
\n\nDefault: 64GB
\n", "signature": "(self, /, limit):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_hard_pending_compaction_bytes_limit", "modulename": "rocksdict", "qualname": "Options.set_hard_pending_compaction_bytes_limit", "kind": "function", "doc": "Sets the bytes threshold at which all writes are stopped if estimated bytes needed to be compaction exceed\nthis threshold.
\n\nDefault: 256GB
\n", "signature": "(self, /, limit):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_arena_block_size", "modulename": "rocksdict", "qualname": "Options.set_arena_block_size", "kind": "function", "doc": "Sets the size of one block in arena memory allocation.
\n\nIf <= 0, a proper value is automatically calculated (usually 1/10 of\nwrite_buffer_size).
\n\nDefault: 0
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_dump_malloc_stats", "modulename": "rocksdict", "qualname": "Options.set_dump_malloc_stats", "kind": "function", "doc": "If true, then print malloc stats together with rocksdb.stats when printing to LOG.
\n\nDefault: false
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_memtable_whole_key_filtering", "modulename": "rocksdict", "qualname": "Options.set_memtable_whole_key_filtering", "kind": "function", "doc": "Enable whole key bloom filter in memtable. Note this will only take effect\nif memtable_prefix_bloom_size_ratio is not 0. Enabling whole key filtering\ncan potentially reduce CPU usage for point-look-ups.
\n\nDefault: false (disable)
\n\nDynamically changeable through SetOptions() API
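\n\nExample (a minimal sketch; the ratio below is illustrative and must be non-zero for whole key filtering to take effect):
\n\n\n ::
\n\nfrom rocksdict import Options\n\nopts = Options()\nopts.set_memtable_prefix_bloom_ratio(0.1)\nopts.set_memtable_whole_key_filtering(True)\n
\n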
\n", "signature": "(self, /, whole_key_filter):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions", "modulename": "rocksdict", "qualname": "ReadOptions", "kind": "class", "doc": "ReadOptions allows setting iterator bounds and so on.
\n\nArguments:
\n\n\n- raw_mode (bool): this must be the same as
Options
raw_mode\nargument. \n
\n"}, {"fullname": "rocksdict.ReadOptions.__init__", "modulename": "rocksdict", "qualname": "ReadOptions.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.ReadOptions.fill_cache", "modulename": "rocksdict", "qualname": "ReadOptions.fill_cache", "kind": "function", "doc": "Specify whether the \"data block\"/\"index block\"/\"filter block\"\nread for this iteration should be cached in memory?\nCallers may wish to set this field to false for bulk scans.
\n\nDefault: true
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_iterate_upper_bound", "modulename": "rocksdict", "qualname": "ReadOptions.set_iterate_upper_bound", "kind": "function", "doc": "Sets the upper bound for an iterator.
\n", "signature": "(self, /, key):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_iterate_lower_bound", "modulename": "rocksdict", "qualname": "ReadOptions.set_iterate_lower_bound", "kind": "function", "doc": "Sets the lower bound for an iterator.
\n", "signature": "(self, /, key):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_prefix_same_as_start", "modulename": "rocksdict", "qualname": "ReadOptions.set_prefix_same_as_start", "kind": "function", "doc": "Enforce that the iterator only iterates over the same\nprefix as the seek.\nThis option is effective only for prefix seeks, i.e. prefix_extractor is\nnon-null for the column family and total_order_seek is false. Unlike\niterate_upper_bound, prefix_same_as_start only works within a prefix\nbut in both directions.
\n\nDefault: false
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_total_order_seek", "modulename": "rocksdict", "qualname": "ReadOptions.set_total_order_seek", "kind": "function", "doc": "Enable a total order seek regardless of index format (e.g. hash index)\nused in the table. Some table format (e.g. plain table) may not support\nthis option.
\n\nIf true when calling Get(), we also skip prefix bloom when reading from\nblock based table. It provides a way to read existing data after\nchanging implementation of prefix extractor.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_max_skippable_internal_keys", "modulename": "rocksdict", "qualname": "ReadOptions.set_max_skippable_internal_keys", "kind": "function", "doc": "Sets a threshold for the number of keys that can be skipped\nbefore failing an iterator seek as incomplete. The default value of 0 should be used to\nnever fail a request as incomplete, even on skipping too many keys.
\n\nDefault: 0
\n", "signature": "(self, /, num):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_background_purge_on_iterator_cleanup", "modulename": "rocksdict", "qualname": "ReadOptions.set_background_purge_on_iterator_cleanup", "kind": "function", "doc": "If true, when PurgeObsoleteFile is called in CleanupIteratorState, we schedule a background job\nin the flush job queue and delete obsolete files in background.
\n\nDefault: false
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_ignore_range_deletions", "modulename": "rocksdict", "qualname": "ReadOptions.set_ignore_range_deletions", "kind": "function", "doc": "If true, keys deleted using the DeleteRange() API will be visible to\nreaders until they are naturally deleted during compaction. This improves\nread performance in DBs with many range deletions.
\n\nDefault: false
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_verify_checksums", "modulename": "rocksdict", "qualname": "ReadOptions.set_verify_checksums", "kind": "function", "doc": "If true, all data read from underlying storage will be\nverified against corresponding checksums.
\n\nDefault: true
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_readahead_size", "modulename": "rocksdict", "qualname": "ReadOptions.set_readahead_size", "kind": "function", "doc": "If non-zero, an iterator will create a new table reader which\nperforms reads of the given size. Using a large size (> 2MB) can\nimprove the performance of forward iteration on spinning disks.\nDefault: 0
\n\nfrom rocksdict import ReadOptions
\n\nopts = ReadOptions()\nopts.set_readahead_size(4_194_304) # 4mb
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_tailing", "modulename": "rocksdict", "qualname": "ReadOptions.set_tailing", "kind": "function", "doc": "If true, create a tailing iterator. Note that tailing iterators\nonly support moving in the forward direction. Iterating in reverse\nor seek_to_last are not supported.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_pin_data", "modulename": "rocksdict", "qualname": "ReadOptions.set_pin_data", "kind": "function", "doc": "Specifies the value of \"pin_data\". If true, it keeps the blocks\nloaded by the iterator pinned in memory as long as the iterator is not deleted,\nIf used when reading from tables created with\nBlockBasedTableOptions::use_delta_encoding = false,\nIterator's property \"rocksdb.iterator.is-key-pinned\" is guaranteed to\nreturn 1.
\n\nDefault: false
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_async_io", "modulename": "rocksdict", "qualname": "ReadOptions.set_async_io", "kind": "function", "doc": "Asynchronously prefetch some data.
\n\nUsed for sequential reads and internal automatic prefetching.
\n\nDefault: false
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ColumnFamily", "modulename": "rocksdict", "qualname": "ColumnFamily", "kind": "class", "doc": "Column family handle. This can be used in WriteBatch to specify Column Family.
\n"}, {"fullname": "rocksdict.ColumnFamily.__init__", "modulename": "rocksdict", "qualname": "ColumnFamily.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.IngestExternalFileOptions", "modulename": "rocksdict", "qualname": "IngestExternalFileOptions", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.IngestExternalFileOptions.__init__", "modulename": "rocksdict", "qualname": "IngestExternalFileOptions.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.IngestExternalFileOptions.set_move_files", "modulename": "rocksdict", "qualname": "IngestExternalFileOptions.set_move_files", "kind": "function", "doc": "Can be set to true to move the files instead of copying them.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.IngestExternalFileOptions.set_snapshot_consistency", "modulename": "rocksdict", "qualname": "IngestExternalFileOptions.set_snapshot_consistency", "kind": "function", "doc": "If set to false, an ingested file keys could appear in existing snapshots\nthat where created before the file was ingested.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.IngestExternalFileOptions.set_allow_global_seqno", "modulename": "rocksdict", "qualname": "IngestExternalFileOptions.set_allow_global_seqno", "kind": "function", "doc": "If set to false, IngestExternalFile() will fail if the file key range\noverlaps with existing keys or tombstones in the DB.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.IngestExternalFileOptions.set_allow_blocking_flush", "modulename": "rocksdict", "qualname": "IngestExternalFileOptions.set_allow_blocking_flush", "kind": "function", "doc": "If set to false and the file key range overlaps with the memtable key range\n(memtable flush required), IngestExternalFile will fail.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.IngestExternalFileOptions.set_ingest_behind", "modulename": "rocksdict", "qualname": "IngestExternalFileOptions.set_ingest_behind", "kind": "function", "doc": "Set to true if you would like duplicate keys in the file being ingested\nto be skipped rather than overwriting existing data under that key.\nUsecase: back-fill of some historical data in the database without\nover-writing existing newer version of data.\nThis option could only be used if the DB has been running\nwith allow_ingest_behind=true since the dawn of time.\nAll files will be ingested at the bottommost level with seqno=0.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.DBPath", "modulename": "rocksdict", "qualname": "DBPath", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.DBPath.__init__", "modulename": "rocksdict", "qualname": "DBPath.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.MemtableFactory", "modulename": "rocksdict", "qualname": "MemtableFactory", "kind": "class", "doc": "Defines the underlying memtable implementation.\nSee official wiki for more information.
\n"}, {"fullname": "rocksdict.MemtableFactory.__init__", "modulename": "rocksdict", "qualname": "MemtableFactory.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.MemtableFactory.vector", "modulename": "rocksdict", "qualname": "MemtableFactory.vector", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.MemtableFactory.hash_skip_list", "modulename": "rocksdict", "qualname": "MemtableFactory.hash_skip_list", "kind": "function", "doc": "\n", "signature": "(bucket_count, height, branching_factor):", "funcdef": "def"}, {"fullname": "rocksdict.MemtableFactory.hash_link_list", "modulename": "rocksdict", "qualname": "MemtableFactory.hash_link_list", "kind": "function", "doc": "\n", "signature": "(bucket_count):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions", "modulename": "rocksdict", "qualname": "BlockBasedOptions", "kind": "class", "doc": "For configuring block-based file storage.
\n"}, {"fullname": "rocksdict.BlockBasedOptions.__init__", "modulename": "rocksdict", "qualname": "BlockBasedOptions.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.BlockBasedOptions.set_block_size", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_block_size", "kind": "function", "doc": "Approximate size of user data packed per block. Note that the\nblock size specified here corresponds to uncompressed data. The\nactual size of the unit read from disk may be smaller if\ncompression is enabled. This parameter can be changed dynamically.
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_metadata_block_size", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_metadata_block_size", "kind": "function", "doc": "Block size for partitioned metadata. Currently applied to indexes when\nkTwoLevelIndexSearch is used and to filters when partition_filters is used.\nNote: Since in the current implementation the filters and index partitions\nare aligned, an index/filter block is created when either index or filter\nblock size reaches the specified limit.
\n\nNote: this limit is currently applied to only index blocks; a filter\npartition is cut right after an index block is cut.
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_partition_filters", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_partition_filters", "kind": "function", "doc": "Note: currently this option requires kTwoLevelIndexSearch to be set as\nwell.
\n\nUse partitioned full filters for each SST file. This option is\nincompatible with block-based filters.
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_block_cache", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_block_cache", "kind": "function", "doc": "Sets global cache for blocks (user data is stored in a set of blocks, and\na block is the unit of reading from disk). Cache must outlive DB instance which uses it.
\n\nIf set, use the specified cache for blocks.\nBy default, rocksdb will automatically create and use an 8MB internal cache.
\n", "signature": "(self, /, cache):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.disable_cache", "modulename": "rocksdict", "qualname": "BlockBasedOptions.disable_cache", "kind": "function", "doc": "Disable block cache
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_bloom_filter", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_bloom_filter", "kind": "function", "doc": "Sets the filter policy to reduce disk read
\n", "signature": "(self, /, bits_per_key, block_based):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_cache_index_and_filter_blocks", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_cache_index_and_filter_blocks", "kind": "function", "doc": "\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_index_type", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_index_type", "kind": "function", "doc": "Defines the index type to be used for SS-table lookups.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import BlockBasedOptions, BlockBasedIndexType, Options\n\nopts = Options()\nblock_opts = BlockBasedOptions()\nblock_opts.set_index_type(BlockBasedIndexType.hash_search())\nopts.set_block_based_table_factory(block_opts)\n
\n
\n", "signature": "(self, /, index_type):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_pin_l0_filter_and_index_blocks_in_cache", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_pin_l0_filter_and_index_blocks_in_cache", "kind": "function", "doc": "If cache_index_and_filter_blocks is true and the below is true, then\nfilter and index blocks are stored in the cache, but a reference is\nheld in the \"table reader\" object so the blocks are pinned and only\nevicted from cache when the table reader is freed.
\n\nDefault: false.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_pin_top_level_index_and_filter", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_pin_top_level_index_and_filter", "kind": "function", "doc": "If cache_index_and_filter_blocks is true and the below is true, then\nthe top-level index of partitioned filter and index blocks are stored in\nthe cache, but a reference is held in the \"table reader\" object so the\nblocks are pinned and only evicted from cache when the table reader is\nfreed. This is not limited to l0 in LSM tree.
\n\nDefault: false.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_format_version", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_format_version", "kind": "function", "doc": "Format version, reserved for backward compatibility.
\n\nSee full list\nof the supported versions.
\n\nDefault: 2.
\n", "signature": "(self, /, version):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_block_restart_interval", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_block_restart_interval", "kind": "function", "doc": "Number of keys between restart points for delta encoding of keys.\nThis parameter can be changed dynamically. Most clients should\nleave this parameter alone. The minimum value allowed is 1. Any smaller\nvalue will be silently overwritten with 1.
\n\nDefault: 16.
\n", "signature": "(self, /, interval):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_index_block_restart_interval", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_index_block_restart_interval", "kind": "function", "doc": "Same as block_restart_interval but used for the index block.\nIf you don't plan to run RocksDB before version 5.16 and you are\nusing index_block_restart_interval
> 1, you should\nprobably set the format_version
to >= 4 as it would reduce the index size.
\n\nDefault: 1.
\n", "signature": "(self, /, interval):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_data_block_index_type", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_data_block_index_type", "kind": "function", "doc": "Set the data block index type for point lookups:
\n\n\n DataBlockIndexType::BinarySearch
to use binary search within the data block.\n DataBlockIndexType::BinaryAndHash
to use the data block hash index in combination with\n the normal binary search.
\n
\n\nThe hash table utilization ratio is adjustable using set_data_block_hash_ratio
, which is\nvalid only when using DataBlockIndexType::BinaryAndHash
.
\n\nDefault: BinarySearch
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import BlockBasedOptions, BlockBasedIndexType, Options\n\nopts = Options()\nblock_opts = BlockBasedOptions()\nblock_opts.set_data_block_index_type(DataBlockIndexType.binary_and_hash())\nblock_opts.set_data_block_hash_ratio(0.85)\nopts.set_block_based_table_factory(block_opts)\n
\n
\n", "signature": "(self, /, index_type):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_data_block_hash_ratio", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_data_block_hash_ratio", "kind": "function", "doc": "Set the data block hash index utilization ratio.
\n\nThe smaller the utilization ratio, the less hash collisions happen, and so reduce the risk for a\npoint lookup to fall back to binary search due to the collisions. A small ratio means faster\nlookup at the price of more space overhead.
\n\nDefault: 0.75
\n", "signature": "(self, /, ratio):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_checksum_type", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_checksum_type", "kind": "function", "doc": "Use the specified checksum type.\nNewly created table files will be protected with this checksum type.\nOld table files will still be readable, even though they have different checksum type.
\n", "signature": "(self, /, checksum_type):", "funcdef": "def"}, {"fullname": "rocksdict.PlainTableFactoryOptions", "modulename": "rocksdict", "qualname": "PlainTableFactoryOptions", "kind": "class", "doc": "Used with DBOptions::set_plain_table_factory.\nSee official wiki for more\ninformation.
\n\nDefaults:
\n\n\n user_key_length: 0 (variable length)\n bloom_bits_per_key: 10\n hash_table_ratio: 0.75\n index_sparseness: 16
\n
\n"}, {"fullname": "rocksdict.PlainTableFactoryOptions.__init__", "modulename": "rocksdict", "qualname": "PlainTableFactoryOptions.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.CuckooTableOptions", "modulename": "rocksdict", "qualname": "CuckooTableOptions", "kind": "class", "doc": "Configuration of cuckoo-based storage.
\n"}, {"fullname": "rocksdict.CuckooTableOptions.__init__", "modulename": "rocksdict", "qualname": "CuckooTableOptions.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.CuckooTableOptions.set_hash_ratio", "modulename": "rocksdict", "qualname": "CuckooTableOptions.set_hash_ratio", "kind": "function", "doc": "Determines the utilization of hash tables. Smaller values\nresult in larger hash tables with fewer collisions.\nDefault: 0.9
\n", "signature": "(self, /, ratio):", "funcdef": "def"}, {"fullname": "rocksdict.CuckooTableOptions.set_max_search_depth", "modulename": "rocksdict", "qualname": "CuckooTableOptions.set_max_search_depth", "kind": "function", "doc": "A property used by builder to determine the depth to go to\nto search for a path to displace elements in case of\ncollision. See Builder.MakeSpaceForKey method. Higher\nvalues result in more efficient hash tables with fewer\nlookups but take more time to build.\nDefault: 100
\n", "signature": "(self, /, depth):", "funcdef": "def"}, {"fullname": "rocksdict.CuckooTableOptions.set_cuckoo_block_size", "modulename": "rocksdict", "qualname": "CuckooTableOptions.set_cuckoo_block_size", "kind": "function", "doc": "In case of collision while inserting, the builder\nattempts to insert in the next cuckoo_block_size\nlocations before skipping over to the next Cuckoo hash\nfunction. This makes lookups more cache friendly in case\nof collisions.\nDefault: 5
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.CuckooTableOptions.set_identity_as_first_hash", "modulename": "rocksdict", "qualname": "CuckooTableOptions.set_identity_as_first_hash", "kind": "function", "doc": "If this option is enabled, user key is treated as uint64_t and its value\nis used as hash value directly. This option changes builder's behavior.\nReader ignore this option and behave according to what specified in\ntable property.\nDefault: false
\n", "signature": "(self, /, flag):", "funcdef": "def"}, {"fullname": "rocksdict.CuckooTableOptions.set_use_module_hash", "modulename": "rocksdict", "qualname": "CuckooTableOptions.set_use_module_hash", "kind": "function", "doc": "If this option is set to true, module is used during hash calculation.\nThis often yields better space efficiency at the cost of performance.\nIf this option is set to false, # of entries in table is constrained to\nbe power of two, and bit and is used to calculate hash, which is faster in general.\nDefault: true
\n", "signature": "(self, /, flag):", "funcdef": "def"}, {"fullname": "rocksdict.UniversalCompactOptions", "modulename": "rocksdict", "qualname": "UniversalCompactOptions", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.UniversalCompactOptions.__init__", "modulename": "rocksdict", "qualname": "UniversalCompactOptions.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.UniversalCompactOptions.max_size_amplification_percent", "modulename": "rocksdict", "qualname": "UniversalCompactOptions.max_size_amplification_percent", "kind": "variable", "doc": "sets the size amplification.
\n\nIt is defined as the amount (in percentage) of\nadditional storage needed to store a single byte of data in the database.\nFor example, a size amplification of 2% means that a database that\ncontains 100 bytes of user data may occupy up to 102 bytes of\nphysical storage. By this definition, a fully compacted database has\na size amplification of 0%. RocksDB uses the following heuristic\nto calculate size amplification: it assumes that all files excluding\nthe earliest file contribute to the size amplification.
\n\nDefault: 200, which means that a 100-byte database could require up to 300 bytes of storage.
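\n\nBelow is a hedged sketch of tuning universal compaction; it assumes Options.set_universal_compaction_options is available and that these variables are writable properties, as in the underlying rust-rocksdb API.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, UniversalCompactOptions, DBCompactionStyle\n\nuni = UniversalCompactOptions()\nuni.max_size_amplification_percent = 200  # tolerate up to 3x physical size\nuni.min_merge_width = 2\n\nopt = Options()\nopt.set_compaction_style(DBCompactionStyle.universal())\nopt.set_universal_compaction_options(uni)  # assumed API\n
\n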
\n"}, {"fullname": "rocksdict.UniversalCompactOptions.compression_size_percent", "modulename": "rocksdict", "qualname": "UniversalCompactOptions.compression_size_percent", "kind": "variable", "doc": "Sets the percentage of compression size.
\n\nIf this option is set to -1, all the output files\nwill follow the specified compression type.
\n\nIf this option is not negative, we will try to make sure the compressed\nsize is just above this value. In normal cases, at least this percentage\nof data will be compressed.\nWhen compacting to a new file, the criterion for whether the output\nis compressed is as follows: assuming this is the list of files sorted\nby generation time:\n A1...An B1...Bm C1...Ct\nwhere A1 is the newest and Ct is the oldest, and we are going to compact\nB1...Bm, we calculate the total size of all the files as total_size, as\nwell as the total size of C1...Ct as total_C; the compaction output file\nwill be compressed iff\n total_C / total_size < this percentage
\n\nDefault: -1
\n"}, {"fullname": "rocksdict.UniversalCompactOptions.size_ratio", "modulename": "rocksdict", "qualname": "UniversalCompactOptions.size_ratio", "kind": "variable", "doc": "Sets the percentage flexibility while comparing file size.\nIf the candidate file(s) size is 1% smaller than the next file's size,\nthen include next file into this candidate set.
\n\nDefault: 1
\n"}, {"fullname": "rocksdict.UniversalCompactOptions.stop_style", "modulename": "rocksdict", "qualname": "UniversalCompactOptions.stop_style", "kind": "variable", "doc": "Sets the algorithm used to stop picking files into a single compaction run.
\n\nDefault: UniversalCompactionStopStyle::Total
\n"}, {"fullname": "rocksdict.UniversalCompactOptions.max_merge_width", "modulename": "rocksdict", "qualname": "UniversalCompactOptions.max_merge_width", "kind": "variable", "doc": "Sets the maximum number of files in a single compaction run.
\n\nDefault: UINT_MAX
\n"}, {"fullname": "rocksdict.UniversalCompactOptions.min_merge_width", "modulename": "rocksdict", "qualname": "UniversalCompactOptions.min_merge_width", "kind": "variable", "doc": "Sets the minimum number of files in a single compaction run.
\n\nDefault: 2
\n"}, {"fullname": "rocksdict.UniversalCompactionStopStyle", "modulename": "rocksdict", "qualname": "UniversalCompactionStopStyle", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.UniversalCompactionStopStyle.__init__", "modulename": "rocksdict", "qualname": "UniversalCompactionStopStyle.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.UniversalCompactionStopStyle.similar", "modulename": "rocksdict", "qualname": "UniversalCompactionStopStyle.similar", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.UniversalCompactionStopStyle.total", "modulename": "rocksdict", "qualname": "UniversalCompactionStopStyle.total", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.SliceTransform", "modulename": "rocksdict", "qualname": "SliceTransform", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.SliceTransform.__init__", "modulename": "rocksdict", "qualname": "SliceTransform.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.SliceTransform.create_fixed_prefix", "modulename": "rocksdict", "qualname": "SliceTransform.create_fixed_prefix", "kind": "function", "doc": "\n", "signature": "(len):", "funcdef": "def"}, {"fullname": "rocksdict.SliceTransform.create_max_len_prefix", "modulename": "rocksdict", "qualname": "SliceTransform.create_max_len_prefix", "kind": "function", "doc": "prefix max length at len
. If the key is longer than len
,\nthe prefix will have length len
; if the key is shorter than len
,\nthe prefix will be the whole key
.
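\n\nBelow is a minimal sketch of installing this transform as the prefix extractor via Options.set_prefix_extractor (the setter referenced by BlockBasedIndexType.hash_search).
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, SliceTransform\n\nopt = Options()\n# keys shorter than 8 bytes are used whole; longer keys keep their first 8 bytes\nopt.set_prefix_extractor(SliceTransform.create_max_len_prefix(8))\n
\n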
\n", "signature": "(len):", "funcdef": "def"}, {"fullname": "rocksdict.SliceTransform.create_noop", "modulename": "rocksdict", "qualname": "SliceTransform.create_noop", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DataBlockIndexType", "modulename": "rocksdict", "qualname": "DataBlockIndexType", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.DataBlockIndexType.__init__", "modulename": "rocksdict", "qualname": "DataBlockIndexType.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.DataBlockIndexType.binary_search", "modulename": "rocksdict", "qualname": "DataBlockIndexType.binary_search", "kind": "function", "doc": "Use binary search when performing point lookup for keys in data blocks.\nThis is the default.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DataBlockIndexType.binary_and_hash", "modulename": "rocksdict", "qualname": "DataBlockIndexType.binary_and_hash", "kind": "function", "doc": "Appends a compact hash table to the end of the data block for efficient indexing. Backwards\ncompatible with databases created without this feature. Once turned on, existing data will\nbe gradually converted to the hash index format.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedIndexType", "modulename": "rocksdict", "qualname": "BlockBasedIndexType", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.BlockBasedIndexType.__init__", "modulename": "rocksdict", "qualname": "BlockBasedIndexType.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.BlockBasedIndexType.binary_search", "modulename": "rocksdict", "qualname": "BlockBasedIndexType.binary_search", "kind": "function", "doc": "A space efficient index block that is optimized for\nbinary-search-based index.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedIndexType.hash_search", "modulename": "rocksdict", "qualname": "BlockBasedIndexType.hash_search", "kind": "function", "doc": "The hash index, if enabled, will perform a hash lookup if\na prefix extractor has been provided through Options::set_prefix_extractor.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedIndexType.two_level_index_search", "modulename": "rocksdict", "qualname": "BlockBasedIndexType.two_level_index_search", "kind": "function", "doc": "A two-level index implementation. Both levels are binary search indexes.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.Cache", "modulename": "rocksdict", "qualname": "Cache", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.Cache.__init__", "modulename": "rocksdict", "qualname": "Cache.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.Cache.new_hyper_clock_cache", "modulename": "rocksdict", "qualname": "Cache.new_hyper_clock_cache", "kind": "function", "doc": "Creates a HyperClockCache with capacity in bytes.
\n\nestimated_entry_charge
is an important tuning parameter. The optimal\nchoice at any given time is\n(cache.get_usage() - 64 * cache.get_table_address_count()) /\ncache.get_occupancy_count()
, or approximately cache.get_usage() /\ncache.get_occupancy_count()
.
\n\nHowever, the value cannot be changed dynamically, so as the cache\ncomposition changes at runtime, the following tradeoffs apply:
\n\n\n- If the estimate is substantially too high (e.g., 25% higher),\nthe cache may have to evict entries to prevent load factors that\nwould dramatically affect lookup times.
\n- If the estimate is substantially too low (e.g., less than half),\nthen metadata space overhead is substantially higher.
\n
\n\nThe latter is generally preferable, and picking the larger of the\nblock size and the metadata block size is a reasonable choice that\nerrs towards this side.
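\n\nA minimal sketch of creating such a cache and attaching it to the block-based table factory follows; BlockBasedOptions.set_block_cache is assumed to mirror the underlying rust-rocksdb API.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, BlockBasedOptions, Cache\n\n# 1 GiB capacity; 4 KiB estimated_entry_charge, roughly the data block size\ncache = Cache.new_hyper_clock_cache(1024 * 1024 * 1024, 4096)\n\ntable_opts = BlockBasedOptions()\ntable_opts.set_block_cache(cache)  # assumed API\n\nopt = Options()\nopt.set_block_based_table_factory(table_opts)\n
\n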
\n", "signature": "(capacity, estimated_entry_charge):", "funcdef": "def"}, {"fullname": "rocksdict.Cache.get_usage", "modulename": "rocksdict", "qualname": "Cache.get_usage", "kind": "function", "doc": "Returns the Cache memory usage
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Cache.get_pinned_usage", "modulename": "rocksdict", "qualname": "Cache.get_pinned_usage", "kind": "function", "doc": "Returns pinned memory usage
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Cache.set_capacity", "modulename": "rocksdict", "qualname": "Cache.set_capacity", "kind": "function", "doc": "Sets cache capacity
\n", "signature": "(self, /, capacity):", "funcdef": "def"}, {"fullname": "rocksdict.ChecksumType", "modulename": "rocksdict", "qualname": "ChecksumType", "kind": "class", "doc": "Used by BlockBasedOptions::set_checksum_type.
\n\nCall the corresponding function\nto get one of the following.
\n\n\n- NoChecksum
\n- CRC32c
\n- XXHash
\n- XXHash64
\n- XXH3
\n
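\n\nBelow is an example to set the checksum type to XXH3 (Options.set_block_based_table_factory is assumed to mirror the underlying rust-rocksdb API).
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, BlockBasedOptions, ChecksumType\n\ntable_opts = BlockBasedOptions()\ntable_opts.set_checksum_type(ChecksumType.xxh3())\n\nopt = Options()\nopt.set_block_based_table_factory(table_opts)\n
\n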
\n"}, {"fullname": "rocksdict.ChecksumType.__init__", "modulename": "rocksdict", "qualname": "ChecksumType.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.ChecksumType.no_checksum", "modulename": "rocksdict", "qualname": "ChecksumType.no_checksum", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.ChecksumType.crc32c", "modulename": "rocksdict", "qualname": "ChecksumType.crc32c", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.ChecksumType.xxhash", "modulename": "rocksdict", "qualname": "ChecksumType.xxhash", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.ChecksumType.xxhash64", "modulename": "rocksdict", "qualname": "ChecksumType.xxhash64", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.ChecksumType.xxh3", "modulename": "rocksdict", "qualname": "ChecksumType.xxh3", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompactionStyle", "modulename": "rocksdict", "qualname": "DBCompactionStyle", "kind": "class", "doc": "This is to be treated as an enum.
\n\nCall the corresponding function\nto get one of the following.
\n\n\n- Level
\n- Universal
\n- Fifo
\n
\n\nBelow is an example to set compaction style to Fifo.
\n\nExample:
\n\n\n ::
\n\nopt = Options()\nopt.set_compaction_style(DBCompactionStyle.fifo())\n
\n
\n"}, {"fullname": "rocksdict.DBCompactionStyle.__init__", "modulename": "rocksdict", "qualname": "DBCompactionStyle.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.DBCompactionStyle.level", "modulename": "rocksdict", "qualname": "DBCompactionStyle.level", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompactionStyle.universal", "modulename": "rocksdict", "qualname": "DBCompactionStyle.universal", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompactionStyle.fifo", "modulename": "rocksdict", "qualname": "DBCompactionStyle.fifo", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompressionType", "modulename": "rocksdict", "qualname": "DBCompressionType", "kind": "class", "doc": "This is to be treated as an enum.
\n\nCall the corresponding function\nto get one of the following.
\n\n\n- None
\n- Snappy
\n- Zlib
\n- Bz2
\n- Lz4
\n- Lz4hc
\n- Zstd
\n
\n\nBelow is an example to set compression type to Snappy.
\n\nExample:
\n\n\n ::
\n\nopt = Options()\nopt.set_compression_type(DBCompressionType.snappy())\n
\n
\n"}, {"fullname": "rocksdict.DBCompressionType.__init__", "modulename": "rocksdict", "qualname": "DBCompressionType.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.DBCompressionType.none", "modulename": "rocksdict", "qualname": "DBCompressionType.none", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompressionType.snappy", "modulename": "rocksdict", "qualname": "DBCompressionType.snappy", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompressionType.zlib", "modulename": "rocksdict", "qualname": "DBCompressionType.zlib", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompressionType.bz2", "modulename": "rocksdict", "qualname": "DBCompressionType.bz2", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompressionType.lz4", "modulename": "rocksdict", "qualname": "DBCompressionType.lz4", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompressionType.lz4hc", "modulename": "rocksdict", "qualname": "DBCompressionType.lz4hc", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompressionType.zstd", "modulename": "rocksdict", "qualname": "DBCompressionType.zstd", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBRecoveryMode", "modulename": "rocksdict", "qualname": "DBRecoveryMode", "kind": "class", "doc": "This is to be treated as an enum.
\n\nCall the corresponding function\nto get one of the following.
\n\n\n- TolerateCorruptedTailRecords
\n- AbsoluteConsistency
\n- PointInTime
\n- SkipAnyCorruptedRecord
\n
\n\nBelow is an example to set recovery mode to PointInTime.
\n\nExample:
\n\n\n ::
\n\nopt = Options()\nopt.set_wal_recovery_mode(DBRecoveryMode.point_in_time())\n
\n
\n"}, {"fullname": "rocksdict.DBRecoveryMode.__init__", "modulename": "rocksdict", "qualname": "DBRecoveryMode.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.DBRecoveryMode.tolerate_corrupted_tail_records", "modulename": "rocksdict", "qualname": "DBRecoveryMode.tolerate_corrupted_tail_records", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBRecoveryMode.absolute_consistency", "modulename": "rocksdict", "qualname": "DBRecoveryMode.absolute_consistency", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBRecoveryMode.point_in_time", "modulename": "rocksdict", "qualname": "DBRecoveryMode.point_in_time", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBRecoveryMode.skip_any_corrupted_record", "modulename": "rocksdict", "qualname": "DBRecoveryMode.skip_any_corrupted_record", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.Env", "modulename": "rocksdict", "qualname": "Env", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.Env.__init__", "modulename": "rocksdict", "qualname": "Env.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.Env.mem_env", "modulename": "rocksdict", "qualname": "Env.mem_env", "kind": "function", "doc": "Returns a new environment that stores its data in memory and delegates\nall non-file-storage tasks to base_env.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.Env.set_background_threads", "modulename": "rocksdict", "qualname": "Env.set_background_threads", "kind": "function", "doc": "Sets the number of background worker threads of a specific thread pool for this environment.\nLOW
is the default pool.
\n\nDefault: 1
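\n\nA hedged sketch of sizing the thread pools follows; Options.set_env is assumed to be available, mirroring the underlying rust-rocksdb API.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Rdict, Options, Env\n\nenv = Env()\nenv.set_background_threads(4)  # LOW pool\nenv.set_high_priority_background_threads(2)  # pool used by memtable flushes\n\nopt = Options()\nopt.set_env(env)  # assumed API\ndb = Rdict(\"./env_tuned_db\", opt)\n
\n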
\n", "signature": "(self, /, num_threads):", "funcdef": "def"}, {"fullname": "rocksdict.Env.set_high_priority_background_threads", "modulename": "rocksdict", "qualname": "Env.set_high_priority_background_threads", "kind": "function", "doc": "Sets the size of the high priority thread pool that can be used to\nprevent compactions from stalling memtable flushes.
\n", "signature": "(self, /, n):", "funcdef": "def"}, {"fullname": "rocksdict.Env.set_low_priority_background_threads", "modulename": "rocksdict", "qualname": "Env.set_low_priority_background_threads", "kind": "function", "doc": "Sets the size of the low priority thread pool that can be used to\nprevent compactions from stalling memtable flushes.
\n", "signature": "(self, /, n):", "funcdef": "def"}, {"fullname": "rocksdict.Env.set_bottom_priority_background_threads", "modulename": "rocksdict", "qualname": "Env.set_bottom_priority_background_threads", "kind": "function", "doc": "Sets the size of the bottom priority thread pool that can be used to\nprevent compactions from stalling memtable flushes.
\n", "signature": "(self, /, n):", "funcdef": "def"}, {"fullname": "rocksdict.Env.join_all_threads", "modulename": "rocksdict", "qualname": "Env.join_all_threads", "kind": "function", "doc": "Wait for all threads started by StartThread to terminate.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Env.lower_thread_pool_io_priority", "modulename": "rocksdict", "qualname": "Env.lower_thread_pool_io_priority", "kind": "function", "doc": "Lowering IO priority for threads from the specified pool.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Env.lower_high_priority_thread_pool_io_priority", "modulename": "rocksdict", "qualname": "Env.lower_high_priority_thread_pool_io_priority", "kind": "function", "doc": "Lowering IO priority for high priority thread pool.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Env.lower_thread_pool_cpu_priority", "modulename": "rocksdict", "qualname": "Env.lower_thread_pool_cpu_priority", "kind": "function", "doc": "Lowering CPU priority for threads from the specified pool.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Env.lower_high_priority_thread_pool_cpu_priority", "modulename": "rocksdict", "qualname": "Env.lower_high_priority_thread_pool_cpu_priority", "kind": "function", "doc": "Lowering CPU priority for high priority thread pool.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.FifoCompactOptions", "modulename": "rocksdict", "qualname": "FifoCompactOptions", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.FifoCompactOptions.__init__", "modulename": "rocksdict", "qualname": "FifoCompactOptions.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.FifoCompactOptions.max_table_files_size", "modulename": "rocksdict", "qualname": "FifoCompactOptions.max_table_files_size", "kind": "variable", "doc": "Sets the max table file size.
\n\nOnce the total size of all table files exceeds this limit, the oldest\ntable file will be deleted.
\n\nDefault: 1GB
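\n\nBelow is a hedged sketch of enabling FIFO compaction with a size cap; Options.set_fifo_compaction_options is assumed, mirroring the underlying rust-rocksdb API.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, FifoCompactOptions, DBCompactionStyle\n\nfifo = FifoCompactOptions()\nfifo.max_table_files_size = 20 * 1024 * 1024 * 1024  # assumed writable; 20 GiB cap\n\nopt = Options()\nopt.set_compaction_style(DBCompactionStyle.fifo())\nopt.set_fifo_compaction_options(fifo)  # assumed API\n
\n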
\n"}, {"fullname": "rocksdict.CompactOptions", "modulename": "rocksdict", "qualname": "CompactOptions", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.CompactOptions.__init__", "modulename": "rocksdict", "qualname": "CompactOptions.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.CompactOptions.set_exclusive_manual_compaction", "modulename": "rocksdict", "qualname": "CompactOptions.set_exclusive_manual_compaction", "kind": "function", "doc": "If more than one thread calls manual compaction,\nonly one will actually schedule it while the other threads will simply wait\nfor the scheduled manual compaction to complete. If exclusive_manual_compaction\nis set to true, the call will disable scheduling of automatic compaction jobs\nand wait for existing automatic compaction jobs to finish.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.CompactOptions.set_bottommost_level_compaction", "modulename": "rocksdict", "qualname": "CompactOptions.set_bottommost_level_compaction", "kind": "function", "doc": "Sets bottommost level compaction.
\n", "signature": "(self, /, lvl):", "funcdef": "def"}, {"fullname": "rocksdict.CompactOptions.set_change_level", "modulename": "rocksdict", "qualname": "CompactOptions.set_change_level", "kind": "function", "doc": "If true, compacted files will be moved to the minimum level capable\nof holding the data or given level (specified non-negative target_level).
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.CompactOptions.set_target_level", "modulename": "rocksdict", "qualname": "CompactOptions.set_target_level", "kind": "function", "doc": "If change_level is true and target_level have non-negative value, compacted\nfiles will be moved to target_level.
\n", "signature": "(self, /, lvl):", "funcdef": "def"}, {"fullname": "rocksdict.BottommostLevelCompaction", "modulename": "rocksdict", "qualname": "BottommostLevelCompaction", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.BottommostLevelCompaction.__init__", "modulename": "rocksdict", "qualname": "BottommostLevelCompaction.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.BottommostLevelCompaction.skip", "modulename": "rocksdict", "qualname": "BottommostLevelCompaction.skip", "kind": "function", "doc": "Skip bottommost level compaction
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.BottommostLevelCompaction.if_have_compaction_filter", "modulename": "rocksdict", "qualname": "BottommostLevelCompaction.if_have_compaction_filter", "kind": "function", "doc": "Only compact bottommost level if there is a compaction filter\nThis is the default option
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.BottommostLevelCompaction.force", "modulename": "rocksdict", "qualname": "BottommostLevelCompaction.force", "kind": "function", "doc": "Always compact bottommost level
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.BottommostLevelCompaction.force_optimized", "modulename": "rocksdict", "qualname": "BottommostLevelCompaction.force_optimized", "kind": "function", "doc": "Always compact bottommost level but in bottommost level avoid\ndouble-compacting files created in the same compaction
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.KeyEncodingType", "modulename": "rocksdict", "qualname": "KeyEncodingType", "kind": "class", "doc": "Used in PlainTableFactoryOptions
.
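\n\nBelow is a hedged sketch of choosing a key encoding; the encoding_type field on PlainTableFactoryOptions and Options.set_plain_table_factory are assumptions that mirror the underlying rust-rocksdb API.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, PlainTableFactoryOptions, KeyEncodingType\n\npt_opts = PlainTableFactoryOptions()\npt_opts.encoding_type = KeyEncodingType.prefix()  # assumed field name\n\nopt = Options()\nopt.set_plain_table_factory(pt_opts)  # assumed API\n
\n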
\n"}, {"fullname": "rocksdict.KeyEncodingType.__init__", "modulename": "rocksdict", "qualname": "KeyEncodingType.__init__", "kind": "function", "doc": "\n", "signature": "()"}, {"fullname": "rocksdict.KeyEncodingType.plain", "modulename": "rocksdict", "qualname": "KeyEncodingType.plain", "kind": "function", "doc": "Always write full keys.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.KeyEncodingType.prefix", "modulename": "rocksdict", "qualname": "KeyEncodingType.prefix", "kind": "function", "doc": "Find opportunities to write the same prefix for multiple rows.
\n", "signature": "():", "funcdef": "def"}];
+ /** pdoc search index */const docs = [{"fullname": "rocksdict", "modulename": "rocksdict", "kind": "module", "doc": "Abstract
\n\nThis package enables users to store, query, and delete\na large number of key-value pairs on disk.
\n\nThis is especially useful when the data cannot fit into RAM.\nIf you have hundreds of GBs or many TBs of key-value data to store\nand query from, this is the package for you.
\n\nInstallation
\n\nThis package is built for macOS (x86/arm), Windows 64/32, and Linux x86/arm.\nIt can be installed from pypi with pip install rocksdict
.
\n\nIntroduction
\n\nBelow is a code example that shows how to do the following:
\n\n\n- Create Rdict
\n- Store something on disk
\n- Close Rdict
\n- Open Rdict again
\n- Check Rdict elements
\n- Iterate from Rdict
\n- Batch get
\n- Delete storage
\n
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict, Options\n\npath = str(\"./test_dict\")\n\n# create a Rdict with default options at `path`\ndb = Rdict(path)\n\n# storing numbers\ndb[1.0] = 1\ndb[1] = 1.0\ndb[\"huge integer\"] = 2343546543243564534233536434567543\ndb[\"good\"] = True\ndb[\"bad\"] = False\ndb[\"bytes\"] = b\"bytes\"\ndb[\"this is a list\"] = [1, 2, 3]\ndb[\"store a dict\"] = {0: 1}\n\n# for example numpy array\nimport numpy as np\nimport pandas as pd\ndb[b\"numpy\"] = np.array([1, 2, 3])\ndb[\"a table\"] = pd.DataFrame({\"a\": [1, 2], \"b\": [2, 1]})\n\n# close Rdict\ndb.close()\n\n# reopen Rdict from disk\ndb = Rdict(path)\nassert db[1.0] == 1\nassert db[1] == 1.0\nassert db[\"huge integer\"] == 2343546543243564534233536434567543\nassert db[\"good\"] == True\nassert db[\"bad\"] == False\nassert db[\"bytes\"] == b\"bytes\"\nassert db[\"this is a list\"] == [1, 2, 3]\nassert db[\"store a dict\"] == {0: 1}\nassert np.all(db[b\"numpy\"] == np.array([1, 2, 3]))\nassert np.all(db[\"a table\"] == pd.DataFrame({\"a\": [1, 2], \"b\": [2, 1]}))\n\n# iterate through all elements\nfor k, v in db.items():\n print(f\"{k} -> {v}\")\n\n# batch get:\nprint(db[[\"good\", \"bad\", 1.0]])\n# [True, False, 1]\n\n# delete Rdict from dict\ndb.close()\nRdict.destroy(path)\n
\n
\n\nSupported types:
\n\n\n- key:
int, float, bool, str, bytes
\n- value:
int, float, bool, str, bytes
and anything that\nsupports pickle
. \n
\n"}, {"fullname": "rocksdict.Rdict", "modulename": "rocksdict", "qualname": "Rdict", "kind": "class", "doc": "A persistent on-disk dictionary. Supports string, int, float, bytes as key, values.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Rdict\n\ndb = Rdict(\"./test_dir\")\ndb[0] = 1\n\ndb = None\ndb = Rdict(\"./test_dir\")\nassert(db[0] == 1)\n
\n
\n\nArguments:
\n\n\n- path (str): path to the database
\n- options (Options): Options object
\n- column_families (dict): (name, options) pairs, these
Options
\nmust have the same raw_mode
argument as the main Options
.\nA column family called 'default' is always created. \n- access_type (AccessType): there are four access types:\nReadWrite, ReadOnly, WithTTL, and Secondary, use\nAccessType class to create.
\n
\n"}, {"fullname": "rocksdict.Rdict.set_dumps", "modulename": "rocksdict", "qualname": "Rdict.set_dumps", "kind": "function", "doc": "set custom dumps function
\n", "signature": "(self, /, dumps):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.set_loads", "modulename": "rocksdict", "qualname": "Rdict.set_loads", "kind": "function", "doc": "set custom loads function
\n", "signature": "(self, /, loads):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.set_write_options", "modulename": "rocksdict", "qualname": "Rdict.set_write_options", "kind": "function", "doc": "Optionally disable WAL or sync for this write.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Rdict, Options, WriteBatch, WriteOptions\n\npath = \"_path_for_rocksdb_storageY1\"\ndb = Rdict(path)\n\n# set write options\nwrite_options = WriteOptions()\nwrite_options.set_sync(False)\nwrite_options.disable_wal(True)\ndb.set_write_options(write_options)\n\n# write to db\ndb[\"my key\"] = \"my value\"\ndb[\"key2\"] = \"value2\"\ndb[\"key3\"] = \"value3\"\n\n# remove db\ndel db\nRdict.destroy(path)\n
\n
\n", "signature": "(self, /, write_opt):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.set_read_options", "modulename": "rocksdict", "qualname": "Rdict.set_read_options", "kind": "function", "doc": "Configure Read Options for all the get operations.
\n", "signature": "(self, /, read_opt):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.get", "modulename": "rocksdict", "qualname": "Rdict.get", "kind": "function", "doc": "Get value from key or a list of keys.
\n\nArguments:
\n\n\n- key: a single key or list of keys.
\n- default: the default value to return if key not found.
\n- read_opt: override preset read options\n(or use Rdict.set_read_options to preset a read options used by default).
\n
\n\nReturns:
\n\n\n None or default value if the key does not exist.
\n
\n", "signature": "(self, /, key, default=None, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.put", "modulename": "rocksdict", "qualname": "Rdict.put", "kind": "function", "doc": "Insert key value into database.
\n\nArguments:
\n\n\n- key: the key.
\n- value: the value.
\n- write_opt: override preset write options\n(or use Rdict.set_write_options to preset a write options used by default).
\n
\n", "signature": "(self, /, key, value, write_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.key_may_exist", "modulename": "rocksdict", "qualname": "Rdict.key_may_exist", "kind": "function", "doc": "Check if a key may exist without doing any IO.
\n\nNotes:
\n\n\n If the key definitely does not exist in the database,\n then this method returns False, else True.\n If the caller wants to obtain value when the key is found in memory,\n fetch should be set to True.\n This check is potentially lighter-weight than invoking DB::get().\n One way to make this lighter weight is to avoid doing any IOs.
\n \n The API follows the following principle:
\n \n \n - True, and value found => the key must exist.
\n - True => the key may or may not exist.
\n - False => the key definitely does not exist.
\n
\n \n Flip it around:
\n \n \n - key exists => must return True, but value may or may not be found.
\n - key doesn't exists => might still return True.
\n
\n
\n\nArguments:
\n\n\n- key: Key to check
\n- read_opt: ReadOptions
\n
\n\nReturns:
\n\n\n if fetch = False
,\n returning True implies that the key may exist.\n returning False implies that the key definitely does not exist.\n if fetch = True
,\n returning (True, value) implies that the key is found and definitely exist.\n returning (False, None) implies that the key definitely does not exist.\n returning (True, None) implies that the key may exist.
\n
\n", "signature": "(self, /, key, fetch=False, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.delete", "modulename": "rocksdict", "qualname": "Rdict.delete", "kind": "function", "doc": "Delete entry from the database.
\n\nArguments:
\n\n\n- key: the key.
\n- write_opt: override preset write options\n(or use Rdict.set_write_options to preset a write options used by default).
\n
\n", "signature": "(self, /, key, write_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.iter", "modulename": "rocksdict", "qualname": "Rdict.iter", "kind": "function", "doc": "Reversible for iterating over keys and values.
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict, Options, ReadOptions\n\npath = \"_path_for_rocksdb_storage5\"\ndb = Rdict(path)\n\nfor i in range(50):\n db[i] = i ** 2\n\niter = db.iter()\n\niter.seek_to_first()\n\nj = 0\nwhile iter.valid():\n assert iter.key() == j\n assert iter.value() == j ** 2\n print(f\"{iter.key()} {iter.value()}\")\n iter.next()\n j += 1\n\niter.seek_to_first();\nassert iter.key() == 0\nassert iter.value() == 0\nprint(f\"{iter.key()} {iter.value()}\")\n\niter.seek(25)\nassert iter.key() == 25\nassert iter.value() == 625\nprint(f\"{iter.key()} {iter.value()}\")\n\ndel iter, db\nRdict.destroy(path)\n
\n
\n\nArguments:
\n\n\n- read_opt: ReadOptions
\n
\n\nReturns: Reversible
\n", "signature": "(self, /, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.items", "modulename": "rocksdict", "qualname": "Rdict.items", "kind": "function", "doc": "Iterate through all keys and values pairs.
\n\nExamples:
\n\n\n ::
\n\nfor k, v in db.items():\n print(f\"{k} -> {v}\")\n
\n
\n\nArguments:
\n\n\n- backwards: iteration direction, forward if
False
. \n- from_key: iterate from key, first seek to this key\nor the nearest next key for iteration\n(depending on iteration direction).
\n- read_opt: ReadOptions
\n
\n", "signature": "(self, /, backwards=False, from_key=None, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.keys", "modulename": "rocksdict", "qualname": "Rdict.keys", "kind": "function", "doc": "Iterate through all keys
\n\nExamples:
\n\n\n ::
\n\nall_keys = [k for k in db.keys()]\n
\n
\n\nArguments:
\n\n\n- backwards: iteration direction, forward if
False
. \n- from_key: iterate from key, first seek to this key\nor the nearest next key for iteration\n(depending on iteration direction).
\n- read_opt: ReadOptions
\n
\n", "signature": "(self, /, backwards=False, from_key=None, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.values", "modulename": "rocksdict", "qualname": "Rdict.values", "kind": "function", "doc": "Iterate through all values.
\n\nExamples:
\n\n\n ::
\n\nall_keys = [v for v in db.values()]\n
\n
\n\nArguments:
\n\n\n- backwards: iteration direction, forward if
False
. \n- from_key: iterate from key, first seek to this key\nor the nearest next key for iteration\n(depending on iteration direction).
\n- read_opt: ReadOptions, must have the same
raw_mode
argument. \n
\n", "signature": "(self, /, backwards=False, from_key=None, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.flush", "modulename": "rocksdict", "qualname": "Rdict.flush", "kind": "function", "doc": "Manually flush the current column family.
\n\nNotes:
\n\n\n Manually call mem-table flush.\n It is recommended to call flush() or close() before\n stopping the python program, to ensure that all written\n key-value pairs have been flushed to the disk.
\n
\n\nArguments:
\n\n\n- wait (bool): whether to wait for the flush to finish.
\n
\n", "signature": "(self, /, wait=True):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.flush_wal", "modulename": "rocksdict", "qualname": "Rdict.flush_wal", "kind": "function", "doc": "Flushes the WAL buffer. If sync
is set to true
, also syncs\nthe data to disk.
\n", "signature": "(self, /, sync=True):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.create_column_family", "modulename": "rocksdict", "qualname": "Rdict.create_column_family", "kind": "function", "doc": "Creates column family with given name and options.
\n\nArguments:
\n\n\n- name: name of this column family
\n- options: Rdict Options for this column family
\n
\n\nReturn:
\n\n\n the newly created column family
\n
\n", "signature": "(self, /, name, options=Ellipsis):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.drop_column_family", "modulename": "rocksdict", "qualname": "Rdict.drop_column_family", "kind": "function", "doc": "Drops the column family with the given name
\n", "signature": "(self, /, name):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.get_column_family", "modulename": "rocksdict", "qualname": "Rdict.get_column_family", "kind": "function", "doc": "Get a column family Rdict
\n\nArguments:
\n\n\n- name: name of this column family
\n- options: Rdict Options for this column family
\n
\n\nReturn:
\n\n\n the column family Rdict of this name
\n
\n", "signature": "(self, /, name):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.get_column_family_handle", "modulename": "rocksdict", "qualname": "Rdict.get_column_family_handle", "kind": "function", "doc": "Use this method to obtain a ColumnFamily instance, which can be used in WriteBatch.
\n\nExample:
\n\n\n ::
\n\nwb = WriteBatch()\nfor i in range(100):\n wb.put(i, i**2, db.get_column_family_handle(cf_name_1))\ndb.write(wb)\n\nwb = WriteBatch()\nwb.set_default_column_family(db.get_column_family_handle(cf_name_2))\nfor i in range(100, 200):\n wb[i] = i**2\ndb.write(wb)\n
\n
\n", "signature": "(self, /, name):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.snapshot", "modulename": "rocksdict", "qualname": "Rdict.snapshot", "kind": "function", "doc": "A snapshot of the current column family.
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict\n\ndb = Rdict(\"tmp\")\nfor i in range(100):\n db[i] = i\n\n# take a snapshot\nsnapshot = db.snapshot()\n\nfor i in range(90):\n del db[i]\n\n# 0-89 are no longer in db\nfor k, v in db.items():\n print(f\"{k} -> {v}\")\n\n# but they are still in the snapshot\nfor i in range(100):\n assert snapshot[i] == i\n\n# drop the snapshot\ndel snapshot, db\n\nRdict.destroy(\"tmp\")\n
\n
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.ingest_external_file", "modulename": "rocksdict", "qualname": "Rdict.ingest_external_file", "kind": "function", "doc": "Loads a list of external SST files created with SstFileWriter\ninto the current column family.
\n\nArguments:
\n\n\n- paths: a list a paths
\n- opts: IngestExternalFileOptionsPy instance
\n
\n", "signature": "(self, /, paths, opts=Ellipsis):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.try_catch_up_with_primary", "modulename": "rocksdict", "qualname": "Rdict.try_catch_up_with_primary", "kind": "function", "doc": "Tries to catch up with the primary by reading as much as possible from the\nlog files.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.cancel_all_background", "modulename": "rocksdict", "qualname": "Rdict.cancel_all_background", "kind": "function", "doc": "Request stopping background work, if wait is true wait until it's done.
\n", "signature": "(self, /, wait):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.write", "modulename": "rocksdict", "qualname": "Rdict.write", "kind": "function", "doc": "WriteBatch
\n\nNotes:
\n\n\n This WriteBatch does not write to the current column family.
\n
\n\nArguments:
\n\n\n- write_batch: WriteBatch instance. This instance will be consumed.
\n- write_opt: use default value if not provided.
\n
\n", "signature": "(self, /, write_batch, write_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.delete_range", "modulename": "rocksdict", "qualname": "Rdict.delete_range", "kind": "function", "doc": "Removes the database entries in the range [\"from\", \"to\")
of the current column family.
\n\nArguments:
\n\n\n- begin: included
\n- end: excluded
\n- write_opt: WriteOptions
\n
\n", "signature": "(self, /, begin, end, write_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.close", "modulename": "rocksdict", "qualname": "Rdict.close", "kind": "function", "doc": "Flush memory to disk, and drop the current column family.
\n\nNotes:
\n\n\n Calling db.close()
is nearly equivalent to first calling\n db.flush()
and then del db
. However, db.close()
does\n not guarantee the underlying RocksDB to be actually closed.\n Other Column Family Rdict
instances, ColumnFamily
\n (cf handle) instances, iterator instances such asRdictIter
,\n RdictItems
, RdictKeys
, RdictValues
can all keep RocksDB\n alive. del
or close
all associated instances mentioned\n above to actually shut down RocksDB.
\n
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.path", "modulename": "rocksdict", "qualname": "Rdict.path", "kind": "function", "doc": "Return current database path.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.compact_range", "modulename": "rocksdict", "qualname": "Rdict.compact_range", "kind": "function", "doc": "Runs a manual compaction on the Range of keys given for the current Column Family.
\n", "signature": "(self, /, begin, end, compact_opt=Ellipsis):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.set_options", "modulename": "rocksdict", "qualname": "Rdict.set_options", "kind": "function", "doc": "Set options for the current column family.
\n", "signature": "(self, /, options):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.property_value", "modulename": "rocksdict", "qualname": "Rdict.property_value", "kind": "function", "doc": "Retrieves a RocksDB property by name, for the current column family.
\n", "signature": "(self, /, name):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.property_int_value", "modulename": "rocksdict", "qualname": "Rdict.property_int_value", "kind": "function", "doc": "Retrieves a RocksDB property and casts it to an integer\n(for the current column family).
\n\nFull list of properties that return int values could be find\nhere.
\n", "signature": "(self, /, name):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.latest_sequence_number", "modulename": "rocksdict", "qualname": "Rdict.latest_sequence_number", "kind": "function", "doc": "The sequence number of the most recent transaction.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.live_files", "modulename": "rocksdict", "qualname": "Rdict.live_files", "kind": "function", "doc": "Returns a list of all table files with their level, start key and end key
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.destroy", "modulename": "rocksdict", "qualname": "Rdict.destroy", "kind": "function", "doc": "Delete the database.
\n\nArguments:
\n\n\n- path (str): path to this database
\n- options (rocksdict.Options): Rocksdb options object
\n
\n", "signature": "(path, options=Ellipsis):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.repair", "modulename": "rocksdict", "qualname": "Rdict.repair", "kind": "function", "doc": "Repair the database.
\n\nArguments:
\n\n\n- path (str): path to this database
\n- options (rocksdict.Options): Rocksdb options object
\n
\n", "signature": "(path, options=Ellipsis):", "funcdef": "def"}, {"fullname": "rocksdict.Rdict.list_cf", "modulename": "rocksdict", "qualname": "Rdict.list_cf", "kind": "function", "doc": "\n", "signature": "(path, options=Ellipsis):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch", "modulename": "rocksdict", "qualname": "WriteBatch", "kind": "class", "doc": "WriteBatch class. Use db.write() to ingest WriteBatch.
\n\nNotes:
\n\n\n A WriteBatch instance can only be ingested once,\n otherwise an Exception will be raised.
\n
\n\nArguments:
\n\n\n- raw_mode (bool): make sure that this is consistent with the Rdict.
\n
\n"}, {"fullname": "rocksdict.WriteBatch.set_dumps", "modulename": "rocksdict", "qualname": "WriteBatch.set_dumps", "kind": "function", "doc": "change to a custom dumps function
\n", "signature": "(self, /, dumps):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch.set_default_column_family", "modulename": "rocksdict", "qualname": "WriteBatch.set_default_column_family", "kind": "function", "doc": "Set the default item for a[i] = j
and del a[i]
syntax.
\n\nYou can also use put(key, value, column_family)
to explicitly choose column family.
\n\nArguments:
\n\n\n- - column_family (ColumnFamily | None): column family descriptor or None (for default family).
\n
\n", "signature": "(self, /, column_family=None):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch.len", "modulename": "rocksdict", "qualname": "WriteBatch.len", "kind": "function", "doc": "length of the batch
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch.size_in_bytes", "modulename": "rocksdict", "qualname": "WriteBatch.size_in_bytes", "kind": "function", "doc": "Return WriteBatch serialized size (in bytes).
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch.is_empty", "modulename": "rocksdict", "qualname": "WriteBatch.is_empty", "kind": "function", "doc": "Check whether the batch is empty.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch.put", "modulename": "rocksdict", "qualname": "WriteBatch.put", "kind": "function", "doc": "Insert a value into the database under the given key.
\n\nArguments:
\n\n\n- column_family: override the default column family set by set_default_column_family
\n
\n", "signature": "(self, /, key, value, column_family=None):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch.delete", "modulename": "rocksdict", "qualname": "WriteBatch.delete", "kind": "function", "doc": "Removes the database entry for key. Does nothing if the key was not found.
\n\nArguments:
\n\n\n- column_family: override the default column family set by set_default_column_family
\n
\n", "signature": "(self, /, key, column_family=None):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch.delete_range", "modulename": "rocksdict", "qualname": "WriteBatch.delete_range", "kind": "function", "doc": "Remove database entries in column family from start key to end key.
\n\nNotes:
\n\n\n Removes the database entries in the range [\"begin_key\", \"end_key\"), i.e.,\n including \"begin_key\" and excluding \"end_key\". It is not an error if no\n keys exist in the range [\"begin_key\", \"end_key\").
\n
\n\nArguments:
\n\n\n- begin: begin key
\n- end: end key
\n- column_family: override the default column family set by set_default_column_family
\n
\n", "signature": "(self, /, begin, end, column_family=None):", "funcdef": "def"}, {"fullname": "rocksdict.WriteBatch.clear", "modulename": "rocksdict", "qualname": "WriteBatch.clear", "kind": "function", "doc": "Clear all updates buffered in this batch.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.SstFileWriter", "modulename": "rocksdict", "qualname": "SstFileWriter", "kind": "class", "doc": "SstFileWriter is used to create sst files that can be added to database later\nAll keys in files generated by SstFileWriter will have sequence number = 0.
\n\nArguments:
\n\n\n- options: this options must have the same
raw_mode
as the Rdict DB. \n
\n"}, {"fullname": "rocksdict.SstFileWriter.set_dumps", "modulename": "rocksdict", "qualname": "SstFileWriter.set_dumps", "kind": "function", "doc": "set custom dumps function
\n", "signature": "(self, /, dumps):", "funcdef": "def"}, {"fullname": "rocksdict.SstFileWriter.open", "modulename": "rocksdict", "qualname": "SstFileWriter.open", "kind": "function", "doc": "Prepare SstFileWriter to write into file located at \"file_path\".
\n", "signature": "(self, /, path):", "funcdef": "def"}, {"fullname": "rocksdict.SstFileWriter.finish", "modulename": "rocksdict", "qualname": "SstFileWriter.finish", "kind": "function", "doc": "Finalize writing to sst file and close file.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.SstFileWriter.file_size", "modulename": "rocksdict", "qualname": "SstFileWriter.file_size", "kind": "function", "doc": "returns the current file size
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.AccessType", "modulename": "rocksdict", "qualname": "AccessType", "kind": "class", "doc": "Define DB Access Types.
\n\nNotes:
\n\n\n There are four access types:
\n \n \n - ReadWrite: default value
\n - ReadOnly
\n - WithTTL
\n - Secondary
\n
\n
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict, AccessType\n\n# open with 24 hours ttl\ndb = Rdict(\"./main_path\", access_type = AccessType.with_ttl(24 * 3600))\n\n# open as read_only\ndb = Rdict(\"./main_path\", access_type = AccessType.read_only())\n\n# open as secondary\ndb = Rdict(\"./main_path\", access_type = AccessType.secondary(\"./secondary_path\"))\n
\n
\n"}, {"fullname": "rocksdict.AccessType.read_write", "modulename": "rocksdict", "qualname": "AccessType.read_write", "kind": "function", "doc": "Define DB Access Types.
\n\nNotes:
\n\n\n There are four access types:
\n \n \n - ReadWrite: default value
\n - ReadOnly
\n - WithTTL
\n - Secondary
\n
\n
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict, AccessType\n\n# open with 24 hours ttl\ndb = Rdict(\"./main_path\", access_type = AccessType.with_ttl(24 * 3600))\n\n# open as read_only\ndb = Rdict(\"./main_path\", access_type = AccessType.read_only())\n\n# open as secondary\ndb = Rdict(\"./main_path\", access_type = AccessType.secondary(\"./secondary_path\"))\n
\n
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.AccessType.read_only", "modulename": "rocksdict", "qualname": "AccessType.read_only", "kind": "function", "doc": "Define DB Access Types.
\n\nNotes:
\n\n\n There are four access types:
\n \n \n - ReadWrite: default value
\n - ReadOnly
\n - WithTTL
\n - Secondary
\n
\n
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict, AccessType\n\n# open with 24 hours ttl\ndb = Rdict(\"./main_path\", access_type = AccessType.with_ttl(24 * 3600))\n\n# open as read_only\ndb = Rdict(\"./main_path\", access_type = AccessType.read_only())\n\n# open as secondary\ndb = Rdict(\"./main_path\", access_type = AccessType.secondary(\"./secondary_path\"))\n
\n
\n", "signature": "(error_if_log_file_exist=False):", "funcdef": "def"}, {"fullname": "rocksdict.AccessType.secondary", "modulename": "rocksdict", "qualname": "AccessType.secondary", "kind": "function", "doc": "Define DB Access Types.
\n\nNotes:
\n\n\n There are four access types:
\n \n \n - ReadWrite: default value
\n - ReadOnly
\n - WithTTL
\n - Secondary
\n
\n
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict, AccessType\n\n# open with 24 hours ttl\ndb = Rdict(\"./main_path\", access_type = AccessType.with_ttl(24 * 3600))\n\n# open as read_only\ndb = Rdict(\"./main_path\", access_type = AccessType.read_only())\n\n# open as secondary\ndb = Rdict(\"./main_path\", access_type = AccessType.secondary(\"./secondary_path\"))\n
\n
\n", "signature": "(secondary_path):", "funcdef": "def"}, {"fullname": "rocksdict.AccessType.with_ttl", "modulename": "rocksdict", "qualname": "AccessType.with_ttl", "kind": "function", "doc": "Define DB Access Types.
\n\nNotes:
\n\n\n There are four access types:
\n \n \n - ReadWrite: default value
\n - ReadOnly
\n - WithTTL
\n - Secondary
\n
\n
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict, AccessType\n\n# open with 24 hours ttl\ndb = Rdict(\"./main_path\", access_type = AccessType.with_ttl(24 * 3600))\n\n# open as read_only\ndb = Rdict(\"./main_path\", access_type = AccessType.read_only())\n\n# open as secondary\ndb = Rdict(\"./main_path\", access_type = AccessType.secondary(\"./secondary_path\"))\n
\n
\n", "signature": "(duration):", "funcdef": "def"}, {"fullname": "rocksdict.WriteOptions", "modulename": "rocksdict", "qualname": "WriteOptions", "kind": "class", "doc": "Optionally disable WAL or sync for this write.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Rdict, Options, WriteBatch, WriteOptions\n\npath = \"_path_for_rocksdb_storageY1\"\ndb = Rdict(path, Options())\n\n# set write options\nwrite_options = WriteOptions()\nwrite_options.set_sync(false)\nwrite_options.disable_wal(true)\ndb.set_write_options(write_options)\n\n# write to db\ndb[\"my key\"] = \"my value\"\ndb[\"key2\"] = \"value2\"\ndb[\"key3\"] = \"value3\"\n\n# remove db\ndel db\nRdict.destroy(path, Options())\n
\n
\n"}, {"fullname": "rocksdict.WriteOptions.no_slowdown", "modulename": "rocksdict", "qualname": "WriteOptions.no_slowdown", "kind": "variable", "doc": "If true and we need to wait or sleep for the write request, fails\nimmediately with Status::Incomplete().
\n\nDefault: false
\n"}, {"fullname": "rocksdict.WriteOptions.memtable_insert_hint_per_batch", "modulename": "rocksdict", "qualname": "WriteOptions.memtable_insert_hint_per_batch", "kind": "variable", "doc": "If true, writebatch will maintain the last insert positions of each\nmemtable as hints in concurrent write. It can improve write performance\nin concurrent writes if keys in one writebatch are sequential. In\nnon-concurrent writes (when concurrent_memtable_writes is false) this\noption will be ignored.
\n\nDefault: false
\n"}, {"fullname": "rocksdict.WriteOptions.ignore_missing_column_families", "modulename": "rocksdict", "qualname": "WriteOptions.ignore_missing_column_families", "kind": "variable", "doc": "If true and if user is trying to write to column families that don't exist (they were dropped),\nignore the write (don't return an error). If there are multiple writes in a WriteBatch,\nother writes will succeed.
\n\nDefault: false
\n"}, {"fullname": "rocksdict.WriteOptions.low_pri", "modulename": "rocksdict", "qualname": "WriteOptions.low_pri", "kind": "variable", "doc": "If true, this write request is of lower priority if compaction is\nbehind. In this case, no_slowdown = true, the request will be cancelled\nimmediately with Status::Incomplete() returned. Otherwise, it will be\nslowed down. The slowdown value is determined by RocksDB to guarantee\nit introduces minimum impacts to high priority writes.
\n\nDefault: false
\n"}, {"fullname": "rocksdict.WriteOptions.sync", "modulename": "rocksdict", "qualname": "WriteOptions.sync", "kind": "variable", "doc": "Sets the sync mode. If true, the write will be flushed\nfrom the operating system buffer cache before the write is considered complete.\nIf this flag is true, writes will be slower.
\n\nDefault: false
\n"}, {"fullname": "rocksdict.WriteOptions.disable_wal", "modulename": "rocksdict", "qualname": "WriteOptions.disable_wal", "kind": "variable", "doc": "Sets whether WAL should be active or not.\nIf true, writes will not first go to the write ahead log,\nand the write may got lost after a crash.
\n\nDefault: false
\n"}, {"fullname": "rocksdict.Snapshot", "modulename": "rocksdict", "qualname": "Snapshot", "kind": "class", "doc": "A consistent view of the database at the point of creation.
\n\nExamples:
\n\n\n ::
\n\nfrom rocksdict import Rdict\n\ndb = Rdict(\"tmp\")\nfor i in range(100):\n db[i] = i\n\n# take a snapshot\nsnapshot = db.snapshot()\n\nfor i in range(90):\n del db[i]\n\n# 0-89 are no longer in db\nfor k, v in db.items():\n print(f\"{k} -> {v}\")\n\n# but they are still in the snapshot\nfor i in range(100):\n assert snapshot[i] == i\n\n# drop the snapshot\ndel snapshot, db\n\nRdict.destroy(\"tmp\")\n
\n
\n"}, {"fullname": "rocksdict.Snapshot.iter", "modulename": "rocksdict", "qualname": "Snapshot.iter", "kind": "function", "doc": "Creates an iterator over the data in this snapshot under the given column family, using\nthe default read options.
\n\nArguments:
\n\n\n- read_opt: ReadOptions, must have the same
raw_mode
argument. \n
\n", "signature": "(self, /, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Snapshot.items", "modulename": "rocksdict", "qualname": "Snapshot.items", "kind": "function", "doc": "Iterate through all keys and values pairs.
\n\nArguments:
\n\n\n- backwards: iteration direction, forward if
False
. \n- from_key: iterate from key, first seek to this key\nor the nearest next key for iteration\n(depending on iteration direction).
\n- read_opt: ReadOptions, must have the same
raw_mode
argument. \n
\n", "signature": "(self, /, backwards=False, from_key=None, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Snapshot.keys", "modulename": "rocksdict", "qualname": "Snapshot.keys", "kind": "function", "doc": "Iterate through all keys.
\n\nArguments:
\n\n\n- backwards: iteration direction, forward if
False
. \n- from_key: iterate from key, first seek to this key\nor the nearest next key for iteration\n(depending on iteration direction).
\n- read_opt: ReadOptions, must have the same
raw_mode
argument. \n
\n", "signature": "(self, /, backwards=False, from_key=None, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.Snapshot.values", "modulename": "rocksdict", "qualname": "Snapshot.values", "kind": "function", "doc": "Iterate through all values.
\n\nArguments:
\n\n\n- backwards: iteration direction, forward if
False
. \n- from_key: iterate from key, first seek to this key\nor the nearest next key for iteration\n(depending on iteration direction).
\n- read_opt: ReadOptions, must have the same
raw_mode
argument. \n
\n", "signature": "(self, /, backwards=False, from_key=None, read_opt=None):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter", "modulename": "rocksdict", "qualname": "RdictIter", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.RdictIter.valid", "modulename": "rocksdict", "qualname": "RdictIter.valid", "kind": "function", "doc": "Returns true
if the iterator is valid. An iterator is invalidated when\nit reaches the end of its defined range, or when it encounters an error.
\n\nTo check whether the iterator encountered an error after valid
has\nreturned false
, use the status
method. status
will never\nreturn an error when valid
is true
.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.status", "modulename": "rocksdict", "qualname": "RdictIter.status", "kind": "function", "doc": "Returns an error Result
if the iterator has encountered an error\nduring operation. When an error is encountered, the iterator is\ninvalidated and valid
will return false
when called.
\n\nPerforming a seek will discard the current status.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.seek_to_first", "modulename": "rocksdict", "qualname": "RdictIter.seek_to_first", "kind": "function", "doc": "Seeks to the first key in the database.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Rdict, Options, ReadOptions\n\npath = \"_path_for_rocksdb_storage5\"\ndb = Rdict(path, Options())\niter = db.iter(ReadOptions())\n\n# Iterate all keys from the start in lexicographic order\niter.seek_to_first()\n\nwhile iter.valid():\n print(f\"{iter.key()} {iter.value()}\")\n iter.next()\n\n# Read just the first key\niter.seek_to_first();\nprint(f\"{iter.key()} {iter.value()}\")\n\ndel iter, db\nRdict.destroy(path, Options())\n
\n
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.seek_to_last", "modulename": "rocksdict", "qualname": "RdictIter.seek_to_last", "kind": "function", "doc": "Seeks to the last key in the database.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Rdict, Options, ReadOptions\n\npath = \"_path_for_rocksdb_storage6\"\ndb = Rdict(path, Options())\niter = db.iter(ReadOptions())\n\n# Iterate all keys from the start in lexicographic order\niter.seek_to_last()\n\nwhile iter.valid():\n print(f\"{iter.key()} {iter.value()}\")\n iter.prev()\n\n# Read just the last key\niter.seek_to_last();\nprint(f\"{iter.key()} {iter.value()}\")\n\ndel iter, db\nRdict.destroy(path, Options())\n
\n
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.seek", "modulename": "rocksdict", "qualname": "RdictIter.seek", "kind": "function", "doc": "Seeks to the specified key or the first key that lexicographically follows it.
\n\nThis method will attempt to seek to the specified key. If that key does not exist, it will\nfind and seek to the key that lexicographically follows it instead.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Rdict, Options, ReadOptions\n\npath = \"_path_for_rocksdb_storage6\"\ndb = Rdict(path, Options())\niter = db.iter(ReadOptions())\n\n# Read the first string key that starts with 'a'\niter.seek(\"a\");\nprint(f\"{iter.key()} {iter.value()}\")\n\ndel iter, db\nRdict.destroy(path, Options())\n
\n
\n", "signature": "(self, /, key):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.seek_for_prev", "modulename": "rocksdict", "qualname": "RdictIter.seek_for_prev", "kind": "function", "doc": "Seeks to the specified key, or the first key that lexicographically precedes it.
\n\nLike .seek()
this method will attempt to seek to the specified key.\nThe difference with .seek()
is that if the specified key does not exist, this method will\nseek to the key that lexicographically precedes it instead.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Rdict, Options, ReadOptions\n\npath = \"_path_for_rocksdb_storage6\"\ndb = Rdict(path, Options())\niter = db.iter(ReadOptions())\n\n# Read the last key that starts with 'a'\niter.seek_for_prev(\"b\")\nprint(f\"{iter.key()} {iter.value()}\")\n\ndel iter, db\nRdict.destroy(path, Options())\n
\n
\n", "signature": "(self, /, key):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.next", "modulename": "rocksdict", "qualname": "RdictIter.next", "kind": "function", "doc": "Seeks to the next key.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.prev", "modulename": "rocksdict", "qualname": "RdictIter.prev", "kind": "function", "doc": "Seeks to the previous key.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.key", "modulename": "rocksdict", "qualname": "RdictIter.key", "kind": "function", "doc": "Returns the current key.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.RdictIter.value", "modulename": "rocksdict", "qualname": "RdictIter.value", "kind": "function", "doc": "Returns the current value.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Options", "modulename": "rocksdict", "qualname": "Options", "kind": "class", "doc": "Database-wide options around performance and behavior.
\n\nPlease read the official tuning guide\nand most importantly, measure performance under realistic workloads with realistic hardware.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, Rdict, DBCompactionStyle\n\ndef badly_tuned_for_somebody_elses_disk():\n\n    path = \"path/for/rocksdb/storageX\"\n\n    opts = Options()\n    opts.create_if_missing(True)\n    opts.set_max_open_files(10000)\n    opts.set_use_fsync(False)\n    opts.set_bytes_per_sync(8388608)\n    opts.optimize_for_point_lookup(1024)\n    opts.set_table_cache_num_shard_bits(6)\n    opts.set_max_write_buffer_number(32)\n    opts.set_write_buffer_size(536870912)\n    opts.set_target_file_size_base(1073741824)\n    opts.set_min_write_buffer_number_to_merge(4)\n    opts.set_level_zero_stop_writes_trigger(2000)\n    opts.set_level_zero_slowdown_writes_trigger(0)\n    opts.set_compaction_style(DBCompactionStyle.universal())\n    opts.set_disable_auto_compactions(True)\n\n    return Rdict(path, opts)\n
\n
\n\nArguments:
\n\n\n- raw_mode (bool): set this to True to operate in raw mode (i.e.\nit will only allow bytes as key-value pairs, and is compatible\nwith other RocksDB databases).
\n
\n"}, {"fullname": "rocksdict.Options.load_latest", "modulename": "rocksdict", "qualname": "Options.load_latest", "kind": "function", "doc": "Load latest options from the rocksdb path
\n\nReturns a tuple, where the first item is Options
\nand the second item is a Dict
of column families.
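\n\nExample (a minimal sketch, not from the original docs; the path is hypothetical and must point to an existing database):
\n\n\n ::
\n\nfrom rocksdict import Options, Rdict\n\npath = \"./existing_db\"\nopts, cols = Options.load_latest(path)\ndb = Rdict(path, options=opts, column_families=cols)\n
\n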
\n", "signature": "(path, env=Ellipsis, ignore_unknown_options=False, cache=Ellipsis):", "funcdef": "def"}, {"fullname": "rocksdict.Options.increase_parallelism", "modulename": "rocksdict", "qualname": "Options.increase_parallelism", "kind": "function", "doc": "By default, RocksDB uses only one background thread for flush and\ncompaction. Calling this function will set it up such that total of\ntotal_threads
is used. Good value for total_threads
is the number of\ncores. You almost definitely want to call this function if your system is\nbottlenecked by RocksDB.
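\n\nExample (a minimal sketch, not from the original docs):
\n\n\n ::
\n\nimport os\n\nfrom rocksdict import Options\n\nopts = Options()\n# one background thread per core; fall back to 4 if undetectable\nopts.increase_parallelism(os.cpu_count() or 4)\n
\n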
\n", "signature": "(self, /, parallelism):", "funcdef": "def"}, {"fullname": "rocksdict.Options.optimize_level_style_compaction", "modulename": "rocksdict", "qualname": "Options.optimize_level_style_compaction", "kind": "function", "doc": "Optimize level style compaction.
\n\nDefault values for some parameters in Options
are not optimized for heavy\nworkloads and big datasets, which means you might observe write stalls under\nsome conditions.
\n\nThis can be used as one of the starting points for tuning RocksDB options in\nsuch cases.
\n\nInternally, it sets write_buffer_size
, min_write_buffer_number_to_merge
,\nmax_write_buffer_number
, level0_file_num_compaction_trigger
,\ntarget_file_size_base
, max_bytes_for_level_base
, so it can override those\nparameters if they were set before.
\n\nIt sets buffer sizes so that memory consumption would be constrained by\nmemtable_memory_budget
.
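\n\nExample (a minimal sketch, not from the original docs; the 512 MiB budget is an arbitrary illustration):
\n\n\n ::
\n\nfrom rocksdict import Options\n\nopts = Options()\n# constrain memtable memory to roughly 512 MiB\nopts.optimize_level_style_compaction(512 * 1024 * 1024)\n
\n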
\n", "signature": "(self, /, memtable_memory_budget):", "funcdef": "def"}, {"fullname": "rocksdict.Options.optimize_universal_style_compaction", "modulename": "rocksdict", "qualname": "Options.optimize_universal_style_compaction", "kind": "function", "doc": "Optimize universal style compaction.
\n\nDefault values for some parameters in Options
are not optimized for heavy\nworkloads and big datasets, which means you might observe write stalls under\nsome conditions.
\n\nThis can be used as one of the starting points for tuning RocksDB options in\nsuch cases.
\n\nInternally, it sets write_buffer_size
, min_write_buffer_number_to_merge
,\nmax_write_buffer_number
, level0_file_num_compaction_trigger
,\ntarget_file_size_base
, max_bytes_for_level_base
, so it can override those\nparameters if they were set before.
\n\nIt sets buffer sizes so that memory consumption would be constrained by\nmemtable_memory_budget
.
\n", "signature": "(self, /, memtable_memory_budget):", "funcdef": "def"}, {"fullname": "rocksdict.Options.create_if_missing", "modulename": "rocksdict", "qualname": "Options.create_if_missing", "kind": "function", "doc": "If true, any column families that didn't exist when opening the database\nwill be created.
\n\nDefault: true
\n", "signature": "(self, /, create_if_missing):", "funcdef": "def"}, {"fullname": "rocksdict.Options.create_missing_column_families", "modulename": "rocksdict", "qualname": "Options.create_missing_column_families", "kind": "function", "doc": "If true, any column families that didn't exist when opening the database\nwill be created.
\n\nDefault: false
\n", "signature": "(self, /, create_missing_cfs):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_error_if_exists", "modulename": "rocksdict", "qualname": "Options.set_error_if_exists", "kind": "function", "doc": "Specifies whether an error should be raised if the database already exists.
\n\nDefault: false
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_paranoid_checks", "modulename": "rocksdict", "qualname": "Options.set_paranoid_checks", "kind": "function", "doc": "Enable/disable paranoid checks.
\n\nIf true, the implementation will do aggressive checking of the\ndata it is processing and will stop early if it detects any\nerrors. This may have unforeseen ramifications: for example, a\ncorruption of one DB entry may cause a large number of entries to\nbecome unreadable or for the entire DB to become unopenable.\nIf any of the writes to the database fails (Put, Delete, Merge, Write),\nthe database will switch to read-only mode and fail all other\nWrite operations.
\n\nDefault: false
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_db_paths", "modulename": "rocksdict", "qualname": "Options.set_db_paths", "kind": "function", "doc": "A list of paths where SST files can be put into, with its target size.\nNewer data is placed into paths specified earlier in the vector while\nolder data gradually moves to paths specified later in the vector.
\n\nFor example, if you have a flash device with 10GB allocated for the DB,\nas well as a hard drive of 2TB, you should configure it to be:\n   [{\"/flash_path\", 10GB}, {\"/hard_drive\", 2TB}]
\n\nThe system will try to guarantee data under each path is close to but\nnot larger than the target size. But current and future file sizes used\nin determining where to place a file are based on best-effort estimation,\nwhich means there is a chance that the actual size under the directory\nis slightly more than the target size under some workloads. Users should leave\nsome buffer room for those cases.
\n\nIf none of the paths has sufficient room to place a file, the file will\nbe placed in the last path anyway, regardless of the target size.
\n\nPlacing newer data in earlier paths is also best-effort. Users should\nexpect some files to be placed in higher levels in extreme cases.
\n\nIf left empty, only one path will be used, which is path
passed when\nopening the DB.
\n\nDefault: empty
\n\nfrom rocksdict import Options, DBPath\n\nopt = Options()\nflash_path = DBPath(\"/flash_path\", 10 * 1024 * 1024 * 1024) # 10 GB\nhard_drive = DBPath(\"/hard_drive\", 2 * 1024 * 1024 * 1024 * 1024) # 2 TB\nopt.set_db_paths([flash_path, hard_drive])\n
\n", "signature": "(self, /, paths):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_env", "modulename": "rocksdict", "qualname": "Options.set_env", "kind": "function", "doc": "Use the specified object to interact with the environment,\ne.g. to read/write files, schedule background work, etc. In the near\nfuture, support for doing storage operations such as read/write files\nthrough env will be deprecated in favor of file_system.
\n", "signature": "(self, /, env):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_compression_type", "modulename": "rocksdict", "qualname": "Options.set_compression_type", "kind": "function", "doc": "Sets the compression algorithm that will be used for compressing blocks.
\n\nDefault: DBCompressionType::Snappy
(DBCompressionType::None
if\nsnappy feature is not enabled).
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, DBCompressionType\n\nopts = Options()\nopts.set_compression_type(DBCompressionType.snappy())\n
\n
\n", "signature": "(self, /, t):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_compression_per_level", "modulename": "rocksdict", "qualname": "Options.set_compression_per_level", "kind": "function", "doc": "Different levels can have different compression policies. There\nare cases where most lower levels would like to use quick compression\nalgorithms while the higher levels (which have more data) use\ncompression algorithms that have better compression but could\nbe slower. This array, if non-empty, should have an entry for\neach level of the database; these override the value specified in\nthe previous field 'compression'.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, DBCompressionType\n\nopts = Options()\nopts.set_compression_per_level([\n DBCompressionType.none(),\n DBCompressionType.none(),\n DBCompressionType.snappy(),\n DBCompressionType.snappy(),\n DBCompressionType.snappy()\n])\n
\n
\n", "signature": "(self, /, level_types):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_compression_options", "modulename": "rocksdict", "qualname": "Options.set_compression_options", "kind": "function", "doc": "Maximum size of dictionaries used to prime the compression library.\nEnabling dictionary can improve compression ratios when there are\nrepetitions across data blocks.
\n\nThe dictionary is created by sampling the SST file data. If\nzstd_max_train_bytes
is nonzero, the samples are passed through zstd's\ndictionary generator. Otherwise, the random samples are used directly as\nthe dictionary.
\n\nWhen compression dictionary is disabled, we compress and write each block\nbefore buffering data for the next one. When compression dictionary is\nenabled, we buffer all SST file data in-memory so we can sample it, as data\ncan only be compressed and written after the dictionary has been finalized.\nSo users of this feature may see increased memory usage.
\n\nDefault: 0
\n", "signature": "(self, /, w_bits, level, strategy, max_dict_bytes):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_zstd_max_train_bytes", "modulename": "rocksdict", "qualname": "Options.set_zstd_max_train_bytes", "kind": "function", "doc": "Sets maximum size of training data passed to zstd's dictionary trainer. Using zstd's\ndictionary trainer can achieve even better compression ratio improvements than using\nmax_dict_bytes
alone.
\n\nThe training data will be used to generate a dictionary of max_dict_bytes.
\n\nDefault: 0.
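\n\nExample (an illustrative sketch, not from the original docs; -14, 32767 and 0 are RocksDB's default window bits, level and strategy, and the dictionary sizes are arbitrary):
\n\n\n ::
\n\nfrom rocksdict import Options, DBCompressionType\n\nopts = Options()\nopts.set_compression_type(DBCompressionType.zstd())\n# 16 KiB dictionaries, trained from up to 1 MiB of samples\nopts.set_compression_options(-14, 32767, 0, 16 * 1024)\nopts.set_zstd_max_train_bytes(1024 * 1024)\n
\n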
\n", "signature": "(self, /, value):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_compaction_readahead_size", "modulename": "rocksdict", "qualname": "Options.set_compaction_readahead_size", "kind": "function", "doc": "If non-zero, we perform bigger reads when doing compaction. If you're\nrunning RocksDB on spinning disks, you should set this to at least 2MB.\nThat way RocksDB's compaction is doing sequential instead of random reads.
\n\nWhen non-zero, we also force new_table_reader_for_compaction_inputs to\ntrue.
\n\nDefault: 0
\n", "signature": "(self, /, compaction_readahead_size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_level_compaction_dynamic_level_bytes", "modulename": "rocksdict", "qualname": "Options.set_level_compaction_dynamic_level_bytes", "kind": "function", "doc": "Allow RocksDB to pick dynamic base of bytes for levels.\nWith this feature turned on, RocksDB will automatically adjust max bytes for each level.\nThe goal of this feature is to have lower bound on size amplification.
\n\nDefault: false.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_prefix_extractor", "modulename": "rocksdict", "qualname": "Options.set_prefix_extractor", "kind": "function", "doc": "\n", "signature": "(self, /, prefix_extractor):", "funcdef": "def"}, {"fullname": "rocksdict.Options.optimize_for_point_lookup", "modulename": "rocksdict", "qualname": "Options.optimize_for_point_lookup", "kind": "function", "doc": "\n", "signature": "(self, /, cache_size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_optimize_filters_for_hits", "modulename": "rocksdict", "qualname": "Options.set_optimize_filters_for_hits", "kind": "function", "doc": "Sets the optimize_filters_for_hits flag
\n\nDefault: false
\n", "signature": "(self, /, optimize_for_hits):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_delete_obsolete_files_period_micros", "modulename": "rocksdict", "qualname": "Options.set_delete_obsolete_files_period_micros", "kind": "function", "doc": "Sets the periodicity when obsolete files get deleted.
\n\nFiles that go out of scope through the compaction\nprocess will still be deleted automatically on every compaction,\nregardless of this setting.
\n\nDefault: 6 hours
\n", "signature": "(self, /, micros):", "funcdef": "def"}, {"fullname": "rocksdict.Options.prepare_for_bulk_load", "modulename": "rocksdict", "qualname": "Options.prepare_for_bulk_load", "kind": "function", "doc": "Prepare the DB for bulk loading.
\n\nAll data will be in level 0 without any automatic compaction.\nIt's recommended to manually call CompactRange(NULL, NULL) before reading\nfrom the database, because otherwise the read can be very slow.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_open_files", "modulename": "rocksdict", "qualname": "Options.set_max_open_files", "kind": "function", "doc": "Sets the number of open files that can be used by the DB. You may need to\nincrease this if your database has a large working set. Value -1
means that\nopened files are always kept open. You can estimate the number of files based\non target_file_size_base and target_file_size_multiplier for level-based\ncompaction. For universal-style compaction, you can usually set it to -1
.
\n\nDefault: -1
\n", "signature": "(self, /, nfiles):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_file_opening_threads", "modulename": "rocksdict", "qualname": "Options.set_max_file_opening_threads", "kind": "function", "doc": "If max_open_files is -1, DB will open all files on DB::Open(). You can\nuse this option to increase the number of threads used to open the files.\nDefault: 16
\n", "signature": "(self, /, nthreads):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_use_fsync", "modulename": "rocksdict", "qualname": "Options.set_use_fsync", "kind": "function", "doc": "If true, then every store to stable storage will issue a fsync.\nIf false, then every store to stable storage will issue a fdatasync.\nThis parameter should be set to true while storing data to\nfilesystem like ext3 that can lose files after a reboot.
\n\nDefault: false
\n", "signature": "(self, /, useit):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_db_log_dir", "modulename": "rocksdict", "qualname": "Options.set_db_log_dir", "kind": "function", "doc": "Specifies the absolute info LOG dir.
\n\nIf it is empty, the log files will be in the same dir as data.\nIf it is non empty, the log files will be in the specified dir,\nand the db data dir's absolute path will be used as the log file\nname's prefix.
\n\nDefault: empty
\n", "signature": "(self, /, path):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_bytes_per_sync", "modulename": "rocksdict", "qualname": "Options.set_bytes_per_sync", "kind": "function", "doc": "Allows OS to incrementally sync files to disk while they are being\nwritten, asynchronously, in the background. This operation can be used\nto smooth out write I/Os over time. Users shouldn't rely on it for\npersistency guarantee.\nIssue one request for every bytes_per_sync written. 0
turns it off.
\n\nDefault: 0
\n\nYou may consider using rate_limiter to regulate write rate to device.\nWhen rate limiter is enabled, it automatically enables bytes_per_sync\nto 1MB.
\n\nThis option applies to table files
\n", "signature": "(self, /, nbytes):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_wal_bytes_per_sync", "modulename": "rocksdict", "qualname": "Options.set_wal_bytes_per_sync", "kind": "function", "doc": "Same as bytes_per_sync, but applies to WAL files.
\n\nDefault: 0, turned off
\n\nDynamically changeable through SetDBOptions() API.
\n", "signature": "(self, /, nbytes):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_writable_file_max_buffer_size", "modulename": "rocksdict", "qualname": "Options.set_writable_file_max_buffer_size", "kind": "function", "doc": "Sets the maximum buffer size that is used by WritableFileWriter.
\n\nOn Windows, we need to maintain an aligned buffer for writes.\nWe allow the buffer to grow until its size hits the limit in buffered\nIO and fix the buffer size when using direct IO to ensure alignment of\nwrite requests if the logical sector size is unusual.
\n\nDefault: 1024 * 1024 (1 MB)
\n\nDynamically changeable through SetDBOptions() API.
\n", "signature": "(self, /, nbytes):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_allow_concurrent_memtable_write", "modulename": "rocksdict", "qualname": "Options.set_allow_concurrent_memtable_write", "kind": "function", "doc": "If true, allow multi-writers to update mem tables in parallel.\nOnly some memtable_factory-s support concurrent writes; currently it\nis implemented only for SkipListFactory. Concurrent memtable writes\nare not compatible with inplace_update_support or filter_deletes.\nIt is strongly recommended to set enable_write_thread_adaptive_yield\nif you are going to use this feature.
\n\nDefault: true
\n", "signature": "(self, /, allow):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_enable_write_thread_adaptive_yield", "modulename": "rocksdict", "qualname": "Options.set_enable_write_thread_adaptive_yield", "kind": "function", "doc": "If true, threads synchronizing with the write batch group leader will wait for up to\nwrite_thread_max_yield_usec before blocking on a mutex. This can substantially improve\nthroughput for concurrent workloads, regardless of whether allow_concurrent_memtable_write\nis enabled.
\n\nDefault: true
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_sequential_skip_in_iterations", "modulename": "rocksdict", "qualname": "Options.set_max_sequential_skip_in_iterations", "kind": "function", "doc": "Specifies whether an iteration->Next() sequentially skips over keys with the same user-key or not.
\n\nThis number specifies the number of keys (with the same userkey)\nthat will be sequentially skipped before a reseek is issued.
\n\nDefault: 8
\n", "signature": "(self, /, num):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_use_direct_reads", "modulename": "rocksdict", "qualname": "Options.set_use_direct_reads", "kind": "function", "doc": "Enable direct I/O mode for reading\nthey may or may not improve performance depending on the use case
\n\nFiles will be opened in \"direct I/O\" mode\nwhich means that data read from the disk will not be cached or\nbuffered. The hardware buffer of the devices may however still\nbe used. Memory mapped files are not impacted by these parameters.
\n\nDefault: false
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_use_direct_io_for_flush_and_compaction", "modulename": "rocksdict", "qualname": "Options.set_use_direct_io_for_flush_and_compaction", "kind": "function", "doc": "Enable direct I/O mode for flush and compaction
\n\nFiles will be opened in \"direct I/O\" mode\nwhich means that data written to the disk will not be cached or\nbuffered. The hardware buffer of the devices may however still\nbe used. Memory mapped files are not impacted by these parameters.\nthey may or may not improve performance depending on the use case
\n\nDefault: false
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_is_fd_close_on_exec", "modulename": "rocksdict", "qualname": "Options.set_is_fd_close_on_exec", "kind": "function", "doc": "Enable/dsiable child process inherit open files.
\n\nDefault: true
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_table_cache_num_shard_bits", "modulename": "rocksdict", "qualname": "Options.set_table_cache_num_shard_bits", "kind": "function", "doc": "Sets the number of shards used for table cache.
\n\nDefault: 6
\n", "signature": "(self, /, nbits):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_target_file_size_multiplier", "modulename": "rocksdict", "qualname": "Options.set_target_file_size_multiplier", "kind": "function", "doc": "By default target_file_size_multiplier is 1, which means\nby default files in different levels will have similar size.
\n\nDynamically changeable through SetOptions() API
\n", "signature": "(self, /, multiplier):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_min_write_buffer_number", "modulename": "rocksdict", "qualname": "Options.set_min_write_buffer_number", "kind": "function", "doc": "Sets the minimum number of write buffers that will be merged together\nbefore writing to storage. If set to 1
, then\nall write buffers are flushed to L0 as individual files and this increases\nread amplification because a get request has to check in all of these\nfiles. Also, an in-memory merge may result in writing less\ndata to storage if there are duplicate records in each of these\nindividual write buffers.
\n\nDefault: 1
\n", "signature": "(self, /, nbuf):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_write_buffer_number", "modulename": "rocksdict", "qualname": "Options.set_max_write_buffer_number", "kind": "function", "doc": "Sets the maximum number of write buffers that are built up in memory.\nThe default and the minimum number is 2, so that when 1 write buffer\nis being flushed to storage, new writes can continue to the other\nwrite buffer.\nIf max_write_buffer_number > 3, writing will be slowed down to\noptions.delayed_write_rate if we are writing to the last write buffer\nallowed.
\n\nDefault: 2
\n", "signature": "(self, /, nbuf):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_write_buffer_size", "modulename": "rocksdict", "qualname": "Options.set_write_buffer_size", "kind": "function", "doc": "Sets the amount of data to build up in memory (backed by an unsorted log\non disk) before converting to a sorted on-disk file.
\n\nLarger values increase performance, especially during bulk loads.\nUp to max_write_buffer_number write buffers may be held in memory\nat the same time,\nso you may wish to adjust this parameter to control memory usage.\nAlso, a larger write buffer will result in a longer recovery time\nthe next time the database is opened.
\n\nNote that write_buffer_size is enforced per column family.\nSee db_write_buffer_size for sharing memory across column families.
\n\nDefault: 0x4000000
(64MiB)
\n\nDynamically changeable through SetOptions() API
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_db_write_buffer_size", "modulename": "rocksdict", "qualname": "Options.set_db_write_buffer_size", "kind": "function", "doc": "Amount of data to build up in memtables across all column\nfamilies before writing to disk.
\n\nThis is distinct from write_buffer_size, which enforces a limit\nfor a single memtable.
\n\nThis feature is disabled by default. Specify a non-zero value\nto enable it.
\n\nDefault: 0 (disabled)
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_bytes_for_level_base", "modulename": "rocksdict", "qualname": "Options.set_max_bytes_for_level_base", "kind": "function", "doc": "Control maximum total data size for a level.\nmax_bytes_for_level_base is the max total for level-1.\nMaximum number of bytes for level L can be calculated as\n(max_bytes_for_level_base) * (max_bytes_for_level_multiplier ^ (L-1))\nFor example, if max_bytes_for_level_base is 200MB, and if\nmax_bytes_for_level_multiplier is 10, total data size for level-1\nwill be 200MB, total file size for level-2 will be 2GB,\nand total file size for level-3 will be 20GB.
\n\nDefault: 0x10000000
(256MiB).
\n\nDynamically changeable through SetOptions() API
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_bytes_for_level_multiplier", "modulename": "rocksdict", "qualname": "Options.set_max_bytes_for_level_multiplier", "kind": "function", "doc": "Default: 10
\n", "signature": "(self, /, mul):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_manifest_file_size", "modulename": "rocksdict", "qualname": "Options.set_max_manifest_file_size", "kind": "function", "doc": "The manifest file is rolled over on reaching this limit.\nThe older manifest file be deleted.\nThe default value is MAX_INT so that roll-over does not take place.
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_target_file_size_base", "modulename": "rocksdict", "qualname": "Options.set_target_file_size_base", "kind": "function", "doc": "Sets the target file size for compaction.\ntarget_file_size_base is per-file size for level-1.\nTarget file size for level L can be calculated by\ntarget_file_size_base * (target_file_size_multiplier ^ (L-1))\nFor example, if target_file_size_base is 2MB and\ntarget_file_size_multiplier is 10, then each file on level-1 will\nbe 2MB, and each file on level 2 will be 20MB,\nand each file on level-3 will be 200MB.
\n\nDefault: 0x4000000
(64MiB)
\n\nDynamically changeable through SetOptions() API
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_min_write_buffer_number_to_merge", "modulename": "rocksdict", "qualname": "Options.set_min_write_buffer_number_to_merge", "kind": "function", "doc": "Sets the minimum number of write buffers that will be merged together\nbefore writing to storage. If set to 1
, then\nall write buffers are flushed to L0 as individual files and this increases\nread amplification because a get request has to check in all of these\nfiles. Also, an in-memory merge may result in writing less\ndata to storage if there are duplicate records in each of these\nindividual write buffers.
\n\nDefault: 1
\n", "signature": "(self, /, to_merge):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_level_zero_file_num_compaction_trigger", "modulename": "rocksdict", "qualname": "Options.set_level_zero_file_num_compaction_trigger", "kind": "function", "doc": "Sets the number of files to trigger level-0 compaction. A value < 0
means that\nlevel-0 compaction will not be triggered by the number of files at all.
\n\nDefault: 4
\n\nDynamically changeable through SetOptions() API
\n", "signature": "(self, /, n):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_level_zero_slowdown_writes_trigger", "modulename": "rocksdict", "qualname": "Options.set_level_zero_slowdown_writes_trigger", "kind": "function", "doc": "Sets the soft limit on number of level-0 files. We start slowing down writes at this\npoint. A value < 0
means that no write slowdown will be triggered by\nthe number of files in level-0.
\n\nDefault: 20
\n\nDynamically changeable through SetOptions() API
\n", "signature": "(self, /, n):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_level_zero_stop_writes_trigger", "modulename": "rocksdict", "qualname": "Options.set_level_zero_stop_writes_trigger", "kind": "function", "doc": "Sets the maximum number of level-0 files. We stop writes at this point.
\n\nDefault: 24
\n\nDynamically changeable through SetOptions() API
\n", "signature": "(self, /, n):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_compaction_style", "modulename": "rocksdict", "qualname": "Options.set_compaction_style", "kind": "function", "doc": "Sets the compaction style.
\n\nDefault: DBCompactionStyle.level()
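\n\nExample (a minimal sketch, not from the original docs):
\n\n\n ::
\n\nfrom rocksdict import Options, DBCompactionStyle\n\nopts = Options()\nopts.set_compaction_style(DBCompactionStyle.universal())\n
\n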
\n", "signature": "(self, /, style):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_universal_compaction_options", "modulename": "rocksdict", "qualname": "Options.set_universal_compaction_options", "kind": "function", "doc": "Sets the options needed to support Universal Style compactions.
\n", "signature": "(self, /, uco):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_fifo_compaction_options", "modulename": "rocksdict", "qualname": "Options.set_fifo_compaction_options", "kind": "function", "doc": "Sets the options for FIFO compaction style.
\n", "signature": "(self, /, fco):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_unordered_write", "modulename": "rocksdict", "qualname": "Options.set_unordered_write", "kind": "function", "doc": "Sets unordered_write to true trades higher write throughput with\nrelaxing the immutability guarantee of snapshots. This violates the\nrepeatability one expects from ::Get from a snapshot, as well as\n:MultiGet and Iterator's consistent-point-in-time view property.\nIf the application cannot tolerate the relaxed guarantees, it can implement\nits own mechanisms to work around that and yet benefit from the higher\nthroughput. Using TransactionDB with WRITE_PREPARED write policy and\ntwo_write_queues=true is one way to achieve immutable snapshots despite\nunordered_write.
\n\nBy default, i.e., when it is false, rocksdb does not advance the sequence\nnumber for new snapshots unless all the writes with lower sequence numbers\nare already finished. This provides the immutability that we expect from\nsnapshots. Moreover, since Iterator and MultiGet internally depend on\nsnapshots, the snapshot immutability results in Iterator and MultiGet\noffering a consistent-point-in-time view. If set to true, although the\nRead-Your-Own-Write property is still provided, the snapshot immutability\nproperty is relaxed: writes issued after the snapshot is obtained (with\nlarger sequence numbers) will still not be visible to reads from that\nsnapshot; however, there might still be pending writes (with lower sequence\nnumbers) that will change the state visible to the snapshot after they are\nlanded in the memtable.
\n\nDefault: false
\n", "signature": "(self, /, unordered):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_subcompactions", "modulename": "rocksdict", "qualname": "Options.set_max_subcompactions", "kind": "function", "doc": "Sets maximum number of threads that will\nconcurrently perform a compaction job by breaking it into multiple,\nsmaller ones that are run simultaneously.
\n\nDefault: 1 (i.e. no subcompactions)
\n", "signature": "(self, /, num):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_background_jobs", "modulename": "rocksdict", "qualname": "Options.set_max_background_jobs", "kind": "function", "doc": "Sets maximum number of concurrent background jobs\n(compactions and flushes).
\n\nDefault: 2
\n\nDynamically changeable through SetDBOptions() API.
\n", "signature": "(self, /, jobs):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_disable_auto_compactions", "modulename": "rocksdict", "qualname": "Options.set_disable_auto_compactions", "kind": "function", "doc": "Disables automatic compactions. Manual compactions can still\nbe issued on this column family
\n\nDefault: false
\n\nDynamically changeable through SetOptions() API
\n", "signature": "(self, /, disable):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_memtable_huge_page_size", "modulename": "rocksdict", "qualname": "Options.set_memtable_huge_page_size", "kind": "function", "doc": "SetMemtableHugePageSize sets the page size for huge page for\narena used by the memtable.\nIf <=0, it won't allocate from huge page but from malloc.\nUsers are responsible to reserve huge pages for it to be allocated. For\nexample:\n sysctl -w vm.nr_hugepages=20\nSee linux doc Documentation/vm/hugetlbpage.txt\nIf there isn't enough free huge page available, it will fall back to\nmalloc.
\n\nDynamically changeable through SetOptions() API
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_successive_merges", "modulename": "rocksdict", "qualname": "Options.set_max_successive_merges", "kind": "function", "doc": "Sets the maximum number of successive merge operations on a key in the memtable.
\n\nWhen a merge operation is added to the memtable and the maximum number of\nsuccessive merges is reached, the value of the key will be calculated and\ninserted into the memtable instead of the merge operation. This will\nensure that there are never more than max_successive_merges merge\noperations in the memtable.
\n\nDefault: 0 (disabled)
\n", "signature": "(self, /, num):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_bloom_locality", "modulename": "rocksdict", "qualname": "Options.set_bloom_locality", "kind": "function", "doc": "Control locality of bloom filter probes to improve cache miss rate.\nThis option only applies to memtable prefix bloom and plaintable\nprefix bloom. It essentially limits the max number of cache lines each\nbloom filter check can touch.
\n\nThis optimization is turned off when set to 0. The number should never\nbe greater than number of probes. This option can boost performance\nfor in-memory workload but should use with care since it can cause\nhigher false positive rate.
\n\nDefault: 0
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_inplace_update_support", "modulename": "rocksdict", "qualname": "Options.set_inplace_update_support", "kind": "function", "doc": "Enable/disable thread-safe inplace updates.
\n\nAn update is performed in place if:
\n\n\n- key exists in current memtable
\n- new sizeof(new_value) <= sizeof(old_value)
\n- old_value for that key is a put i.e. kTypeValue
\n
\n\nDefault: false.
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_inplace_update_locks", "modulename": "rocksdict", "qualname": "Options.set_inplace_update_locks", "kind": "function", "doc": "Sets the number of locks used for inplace update.
\n\nDefault: 10000 when inplace_update_support = true, otherwise 0.
\n", "signature": "(self, /, num):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_bytes_for_level_multiplier_additional", "modulename": "rocksdict", "qualname": "Options.set_max_bytes_for_level_multiplier_additional", "kind": "function", "doc": "Different max-size multipliers for different levels.\nThese are multiplied by max_bytes_for_level_multiplier to arrive\nat the max-size of each level.
\n\nDefault: 1
\n\nDynamically changeable through SetOptions() API
\n", "signature": "(self, /, level_values):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_skip_checking_sst_file_sizes_on_db_open", "modulename": "rocksdict", "qualname": "Options.set_skip_checking_sst_file_sizes_on_db_open", "kind": "function", "doc": "If true, then DB::Open() will not fetch and check sizes of all sst files.\nThis may significantly speed up startup if there are many sst files,\nespecially when using non-default Env with expensive GetFileSize().\nWe'll still check that all required sst files exist.\nIf paranoid_checks is false, this option is ignored, and sst files are\nnot checked at all.
\n\nDefault: false
\n", "signature": "(self, /, value):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_write_buffer_size_to_maintain", "modulename": "rocksdict", "qualname": "Options.set_max_write_buffer_size_to_maintain", "kind": "function", "doc": "The total maximum size(bytes) of write buffers to maintain in memory\nincluding copies of buffers that have already been flushed. This parameter\nonly affects trimming of flushed buffers and does not affect flushing.\nThis controls the maximum amount of write history that will be available\nin memory for conflict checking when Transactions are used. The actual\nsize of write history (flushed Memtables) might be higher than this limit\nif further trimming will reduce write history total size below this\nlimit. For example, if max_write_buffer_size_to_maintain is set to 64MB,\nand there are three flushed Memtables, with sizes of 32MB, 20MB, 20MB.\nBecause trimming the next Memtable of size 20MB will reduce total memory\nusage to 52MB which is below the limit, RocksDB will stop trimming.
\n\nWhen using an OptimisticTransactionDB:\nIf this value is too low, some transactions may fail at commit time due\nto not being able to determine whether there were any write conflicts.
\n\nWhen using a TransactionDB:\nIf Transaction::SetSnapshot is used, TransactionDB will read either\nin-memory write buffers or SST files to do write-conflict checking.\nIncreasing this value can reduce the number of reads to SST files\ndone for conflict detection.
\n\nSetting this value to 0 will cause write buffers to be freed immediately\nafter they are flushed. If this value is set to -1,\n'max_write_buffer_number * write_buffer_size' will be used.
\n\nDefault:\nIf using a TransactionDB/OptimisticTransactionDB, the default value will\nbe set to the value of 'max_write_buffer_number * write_buffer_size'\nif it is not explicitly set by the user. Otherwise, the default is 0.
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_enable_pipelined_write", "modulename": "rocksdict", "qualname": "Options.set_enable_pipelined_write", "kind": "function", "doc": "By default, a single write thread queue is maintained. The thread gets\nto the head of the queue becomes write batch group leader and responsible\nfor writing to WAL and memtable for the batch group.
\n\nIf enable_pipelined_write is true, separate write thread queue is\nmaintained for WAL write and memtable write. A write thread first enter WAL\nwriter queue and then memtable writer queue. Pending thread on the WAL\nwriter queue thus only have to wait for previous writers to finish their\nWAL writing but not the memtable writing. Enabling the feature may improve\nwrite throughput and reduce latency of the prepare phase of two-phase\ncommit.
\n\nDefault: false
\n", "signature": "(self, /, value):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_memtable_factory", "modulename": "rocksdict", "qualname": "Options.set_memtable_factory", "kind": "function", "doc": "Defines the underlying memtable implementation.\nSee official wiki for more information.\nDefaults to using a skiplist.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, MemtableFactory\n\nopts = Options()\nfactory = MemtableFactory.hash_skip_list(bucket_count=1_000_000,\n                                         height=4,\n                                         branching_factor=4)\n\nopts.set_allow_concurrent_memtable_write(False)\nopts.set_memtable_factory(factory)\n
\n
\n", "signature": "(self, /, factory):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_block_based_table_factory", "modulename": "rocksdict", "qualname": "Options.set_block_based_table_factory", "kind": "function", "doc": "\n", "signature": "(self, /, factory):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_cuckoo_table_factory", "modulename": "rocksdict", "qualname": "Options.set_cuckoo_table_factory", "kind": "function", "doc": "Sets the table factory to a CuckooTableFactory (the default table\nfactory is a block-based table factory that provides a default\nimplementation of TableBuilder and TableReader with default\nBlockBasedTableOptions).\nSee official wiki for more information on this table format.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, CuckooTableOptions\n\nopts = Options()\nfactory_opts = CuckooTableOptions()\nfactory_opts.set_hash_ratio(0.8)\nfactory_opts.set_max_search_depth(20)\nfactory_opts.set_cuckoo_block_size(10)\nfactory_opts.set_identity_as_first_hash(True)\nfactory_opts.set_use_module_hash(False)\n\nopts.set_cuckoo_table_factory(factory_opts)\n
\n
\n", "signature": "(self, /, factory):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_plain_table_factory", "modulename": "rocksdict", "qualname": "Options.set_plain_table_factory", "kind": "function", "doc": "This is a factory that provides TableFactory objects.\nDefault: a block-based table factory that provides a default\nimplementation of TableBuilder and TableReader with default\nBlockBasedTableOptions.\nSets the factory as plain table.\nSee official wiki for more\ninformation.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options, PlainTableFactoryOptions\n\nopts = Options()\nfactory_opts = PlainTableFactoryOptions()\nfactory_opts.user_key_length = 0\nfactory_opts.bloom_bits_per_key = 20\nfactory_opts.hash_table_ratio = 0.75\nfactory_opts.index_sparseness = 16\n\nopts.set_plain_table_factory(factory_opts)\n
\n
\n", "signature": "(self, /, options):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_min_level_to_compress", "modulename": "rocksdict", "qualname": "Options.set_min_level_to_compress", "kind": "function", "doc": "Sets the start level to use compression.
\n", "signature": "(self, /, lvl):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_report_bg_io_stats", "modulename": "rocksdict", "qualname": "Options.set_report_bg_io_stats", "kind": "function", "doc": "Measure IO stats in compactions and flushes, if true
.
\n\nDefault: false
\n", "signature": "(self, /, enable):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_total_wal_size", "modulename": "rocksdict", "qualname": "Options.set_max_total_wal_size", "kind": "function", "doc": "Once write-ahead logs exceed this size, we will start forcing the flush of\ncolumn families whose memtables are backed by the oldest live WAL file\n(i.e. the ones that are causing all the space amplification).
\n\nDefault: 0
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_wal_recovery_mode", "modulename": "rocksdict", "qualname": "Options.set_wal_recovery_mode", "kind": "function", "doc": "Recovery mode to control the consistency while replaying WAL.
\n\nDefault: DBRecoveryMode::PointInTime
\n", "signature": "(self, /, mode):", "funcdef": "def"}, {"fullname": "rocksdict.Options.enable_statistics", "modulename": "rocksdict", "qualname": "Options.enable_statistics", "kind": "function", "doc": "\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Options.get_statistics", "modulename": "rocksdict", "qualname": "Options.get_statistics", "kind": "function", "doc": "\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_stats_dump_period_sec", "modulename": "rocksdict", "qualname": "Options.set_stats_dump_period_sec", "kind": "function", "doc": "If not zero, dump rocksdb.stats
to LOG every stats_dump_period_sec
.
\n\nDefault: 600
(10 mins)
\n", "signature": "(self, /, period):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_stats_persist_period_sec", "modulename": "rocksdict", "qualname": "Options.set_stats_persist_period_sec", "kind": "function", "doc": "If not zero, dump rocksdb.stats to RocksDB to LOG every stats_persist_period_sec
.
\n\nDefault: 600
(10 mins)
\n", "signature": "(self, /, period):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_advise_random_on_open", "modulename": "rocksdict", "qualname": "Options.set_advise_random_on_open", "kind": "function", "doc": "When set to true, reading SST files will opt out of the filesystem's\nreadahead. Setting this to false may improve sequential iteration\nperformance.
\n\nDefault: true
\n", "signature": "(self, /, advise):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_use_adaptive_mutex", "modulename": "rocksdict", "qualname": "Options.set_use_adaptive_mutex", "kind": "function", "doc": "Enable/disable adaptive mutex, which spins in the user space before resorting to kernel.
\n\nThis could reduce context switch when the mutex is not\nheavily contended. However, if the mutex is hot, we could end up\nwasting spin time.
\n\nDefault: false
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_num_levels", "modulename": "rocksdict", "qualname": "Options.set_num_levels", "kind": "function", "doc": "Sets the number of levels for this database.
\n", "signature": "(self, /, n):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_memtable_prefix_bloom_ratio", "modulename": "rocksdict", "qualname": "Options.set_memtable_prefix_bloom_ratio", "kind": "function", "doc": "When a prefix_extractor
is defined through opts.set_prefix_extractor
this\ncreates a prefix bloom filter for each memtable with the size of\nwrite_buffer_size * memtable_prefix_bloom_ratio
(capped at 0.25).
\n\nDefault: 0
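\n\nExample (an illustrative sketch, not from the original docs; assumes SliceTransform.create_max_len_prefix is available):
\n\n\n ::
\n\nfrom rocksdict import Options, SliceTransform\n\nopts = Options()\n# keys sharing their first 8 bytes share a bloom-filter prefix\nopts.set_prefix_extractor(SliceTransform.create_max_len_prefix(8))\nopts.set_memtable_prefix_bloom_ratio(0.2)\n
\n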
\n", "signature": "(self, /, ratio):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_compaction_bytes", "modulename": "rocksdict", "qualname": "Options.set_max_compaction_bytes", "kind": "function", "doc": "Sets the maximum number of bytes in all compacted files.\nWe try to limit number of bytes in one compaction to be lower than this\nthreshold. But it's not guaranteed.
\n\nValue 0 will be sanitized.
\n\nDefault: target_file_size_base * 25
\n", "signature": "(self, /, nbytes):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_wal_dir", "modulename": "rocksdict", "qualname": "Options.set_wal_dir", "kind": "function", "doc": "Specifies the absolute path of the directory the\nwrite-ahead log (WAL) should be written to.
\n\nDefault: same directory as the database
\n", "signature": "(self, /, path):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_wal_ttl_seconds", "modulename": "rocksdict", "qualname": "Options.set_wal_ttl_seconds", "kind": "function", "doc": "Sets the WAL ttl in seconds.
\n\nThe following two options affect how archived logs will be deleted.
\n\n\n- If both set to 0, logs will be deleted asap and will not get into\nthe archive.
\n- If wal_ttl_seconds is 0 and wal_size_limit_mb is not 0,\nWAL files will be checked every 10 min and if total size is greater\nthan wal_size_limit_mb, they will be deleted starting with the\nearliest until size_limit is met. All empty files will be deleted.
\n- If wal_ttl_seconds is not 0 and wal_size_limit_mb is 0, then\nWAL files will be checked every wal_ttl_seconds / 2 and those that\nare older than wal_ttl_seconds will be deleted.
\n- If both are not 0, WAL files will be checked every 10 min and both\nchecks will be performed with ttl being first.
\n
\n\nDefault: 0
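\n\nExample (a minimal sketch, not from the original docs; the values are arbitrary):
\n\n\n ::
\n\nfrom rocksdict import Options\n\nopts = Options()\n# keep archived WAL files for up to one hour, capped at 1024 MB\nopts.set_wal_ttl_seconds(3600)\nopts.set_wal_size_limit_mb(1024)\n
\n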
\n", "signature": "(self, /, secs):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_wal_size_limit_mb", "modulename": "rocksdict", "qualname": "Options.set_wal_size_limit_mb", "kind": "function", "doc": "Sets the WAL size limit in MB.
\n\nIf the total size of WAL files is greater than wal_size_limit_mb,\nthey will be deleted starting with the earliest until size_limit is met.
\n\nDefault: 0
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_manifest_preallocation_size", "modulename": "rocksdict", "qualname": "Options.set_manifest_preallocation_size", "kind": "function", "doc": "Sets the number of bytes to preallocate (via fallocate) the manifest files.
\n\nDefault is 4MB, which is reasonable to reduce random IO\nas well as prevent overallocation for mounts that preallocate\nlarge amounts of data (such as xfs's allocsize option).
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_skip_stats_update_on_db_open", "modulename": "rocksdict", "qualname": "Options.set_skip_stats_update_on_db_open", "kind": "function", "doc": "If true, then DB::Open() will not update the statistics used to optimize\ncompaction decision by loading table properties from many files.\nTurning off this feature will improve DBOpen time especially in disk environment.
\n\nDefault: false
\n", "signature": "(self, /, skip):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_keep_log_file_num", "modulename": "rocksdict", "qualname": "Options.set_keep_log_file_num", "kind": "function", "doc": "Specify the maximal number of info log files to be kept.
\n\nDefault: 1000
\n", "signature": "(self, /, nfiles):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_allow_mmap_writes", "modulename": "rocksdict", "qualname": "Options.set_allow_mmap_writes", "kind": "function", "doc": "Allow the OS to mmap file for writing.
\n\nDefault: false
\n", "signature": "(self, /, is_enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_allow_mmap_reads", "modulename": "rocksdict", "qualname": "Options.set_allow_mmap_reads", "kind": "function", "doc": "Allow the OS to mmap file for reading sst tables.
\n\nDefault: false
\n", "signature": "(self, /, is_enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_atomic_flush", "modulename": "rocksdict", "qualname": "Options.set_atomic_flush", "kind": "function", "doc": "Guarantee that all column families are flushed together atomically.\nThis option applies to both manual flushes (db.flush()
) and automatic\nbackground flushes caused when memtables are filled.
\n\nNote that this is only useful when the WAL is disabled. When using the\nWAL, writes are always consistent across column families.
\n\nDefault: false
\n", "signature": "(self, /, atomic_flush):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_row_cache", "modulename": "rocksdict", "qualname": "Options.set_row_cache", "kind": "function", "doc": "Sets global cache for table-level rows. Cache must outlive DB instance which uses it.
\n\nDefault: null (disabled)\nNot supported in ROCKSDB_LITE mode!
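\n\nExample (an illustrative sketch, not from the original docs; assumes rocksdict exposes a Cache(capacity) constructor):
\n\n\n ::
\n\nfrom rocksdict import Options, Cache\n\nopts = Options()\n# a 64 MiB row cache; it must outlive the Rdict that uses it\nrow_cache = Cache(64 * 1024 * 1024)\nopts.set_row_cache(row_cache)\n
\n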
\n", "signature": "(self, /, cache):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_ratelimiter", "modulename": "rocksdict", "qualname": "Options.set_ratelimiter", "kind": "function", "doc": "Use to control write rate of flush and compaction. Flush has higher\npriority than compaction.\nIf rate limiter is enabled, bytes_per_sync is set to 1MB by default.
\n\nDefault: disable
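\n\nExample (a minimal sketch, not from the original docs; the values are arbitrary but typical):
\n\n\n ::
\n\nfrom rocksdict import Options\n\nopts = Options()\n# limit flush/compaction writes to 16 MiB/s, refill the token\n# bucket every 100 ms, and use fairness 10\nopts.set_ratelimiter(16 * 1024 * 1024, 100_000, 10)\n
\n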
\n", "signature": "(self, /, rate_bytes_per_sec, refill_period_us, fairness):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_max_log_file_size", "modulename": "rocksdict", "qualname": "Options.set_max_log_file_size", "kind": "function", "doc": "Sets the maximal size of the info log file.
\n\nIf the log file is larger than max_log_file_size
, a new info log file\nwill be created. If max_log_file_size
is equal to zero, all logs will\nbe written to one log file.
\n\nDefault: 0
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options\n\noptions = Options()\noptions.set_max_log_file_size(0)\n
\n
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_log_file_time_to_roll", "modulename": "rocksdict", "qualname": "Options.set_log_file_time_to_roll", "kind": "function", "doc": "Sets the time for the info log file to roll (in seconds).
\n\nIf specified with non-zero value, log file will be rolled\nif it has been active longer than log_file_time_to_roll
.\nDefault: 0 (disabled)
\n", "signature": "(self, /, secs):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_recycle_log_file_num", "modulename": "rocksdict", "qualname": "Options.set_recycle_log_file_num", "kind": "function", "doc": "Controls the recycling of log files.
\n\nIf non-zero, previously written log files will be reused for new logs,\noverwriting the old data. The value indicates how many such files we will\nkeep around at any point in time for later use. This is more efficient\nbecause the blocks are already allocated and fdatasync does not need to\nupdate the inode after each write.
\n\nDefault: 0
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import Options\n\noptions = Options()\noptions.set_recycle_log_file_num(5)\n
\n
\n", "signature": "(self, /, num):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_soft_pending_compaction_bytes_limit", "modulename": "rocksdict", "qualname": "Options.set_soft_pending_compaction_bytes_limit", "kind": "function", "doc": "Sets the threshold at which all writes will be slowed down to at least delayed_write_rate if estimated\nbytes needed to be compaction exceed this threshold.
\n\nDefault: 64GB
\n", "signature": "(self, /, limit):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_hard_pending_compaction_bytes_limit", "modulename": "rocksdict", "qualname": "Options.set_hard_pending_compaction_bytes_limit", "kind": "function", "doc": "Sets the bytes threshold at which all writes are stopped if estimated bytes needed to be compaction exceed\nthis threshold.
\n\nDefault: 256GB
\n", "signature": "(self, /, limit):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_arena_block_size", "modulename": "rocksdict", "qualname": "Options.set_arena_block_size", "kind": "function", "doc": "Sets the size of one block in arena memory allocation.
\n\nIf <= 0, a proper value is automatically calculated (usually 1/10 of\nwriter_buffer_size).
\n\nDefault: 0
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_dump_malloc_stats", "modulename": "rocksdict", "qualname": "Options.set_dump_malloc_stats", "kind": "function", "doc": "If true, then print malloc stats together with rocksdb.stats when printing to LOG.
\n\nDefault: false
\n", "signature": "(self, /, enabled):", "funcdef": "def"}, {"fullname": "rocksdict.Options.set_memtable_whole_key_filtering", "modulename": "rocksdict", "qualname": "Options.set_memtable_whole_key_filtering", "kind": "function", "doc": "Enable whole key bloom filter in memtable. Note this will only take effect\nif memtable_prefix_bloom_size_ratio is not 0. Enabling whole key filtering\ncan potentially reduce CPU usage for point-look-ups.
\n\nDefault: false (disable)
\n\nDynamically changeable through the SetOptions() API.
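\n\nExample (a minimal sketch; remember that it only takes effect when memtable_prefix_bloom_size_ratio is non-zero):
\n\n\n ::
\n\nfrom rocksdict import Options\n\noptions = Options()\noptions.set_memtable_whole_key_filtering(True)\n
\n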
\n", "signature": "(self, /, whole_key_filter):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions", "modulename": "rocksdict", "qualname": "ReadOptions", "kind": "class", "doc": "ReadOptions configures read behavior, such as iterator bounds, block-cache usage, and checksum verification.
\n\nArguments:
\n\n\n- raw_mode (bool): this must match the
raw_mode
argument of the
Options
used to open the database. \n
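\n\nBelow is a usage sketch (only methods documented on this class are used):
\n\n\n ::
\n\nfrom rocksdict import ReadOptions\n\nopts = ReadOptions()\n# bulk scan: avoid polluting the block cache\nopts.fill_cache(False)\n# verify checksums on reads (the default)\nopts.set_verify_checksums(True)\n
\n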
\n"}, {"fullname": "rocksdict.ReadOptions.fill_cache", "modulename": "rocksdict", "qualname": "ReadOptions.fill_cache", "kind": "function", "doc": "Specifies whether the \"data block\"/\"index block\"/\"filter block\"\nread during this iteration should be cached in memory.\nCallers may wish to set this field to false for bulk scans.
\n\nDefault: true
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_iterate_upper_bound", "modulename": "rocksdict", "qualname": "ReadOptions.set_iterate_upper_bound", "kind": "function", "doc": "Sets the upper bound for an iterator.
\n", "signature": "(self, /, key):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_iterate_lower_bound", "modulename": "rocksdict", "qualname": "ReadOptions.set_iterate_lower_bound", "kind": "function", "doc": "Sets the lower bound for an iterator.
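\n\nExample (a sketch; note that in RocksDB the lower bound is inclusive while the upper bound is exclusive):
\n\n\n ::
\n\nfrom rocksdict import ReadOptions\n\nopts = ReadOptions()\nopts.set_iterate_lower_bound("key_a")  # inclusive\nopts.set_iterate_upper_bound("key_z")  # exclusive\n
\n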
\n", "signature": "(self, /, key):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_prefix_same_as_start", "modulename": "rocksdict", "qualname": "ReadOptions.set_prefix_same_as_start", "kind": "function", "doc": "Enforce that the iterator only iterates over the same\nprefix as the seek.\nThis option is effective only for prefix seeks, i.e. prefix_extractor is\nnon-null for the column family and total_order_seek is false. Unlike\niterate_upper_bound, prefix_same_as_start only works within a prefix\nbut in both directions.
\n\nDefault: false
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_total_order_seek", "modulename": "rocksdict", "qualname": "ReadOptions.set_total_order_seek", "kind": "function", "doc": "Enable a total order seek regardless of index format (e.g. hash index)\nused in the table. Some table formats (e.g. plain table) may not support\nthis option.
\n\nIf true when calling Get(), we also skip prefix bloom when reading from\nblock based table. It provides a way to read existing data after\nchanging implementation of prefix extractor.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_max_skippable_internal_keys", "modulename": "rocksdict", "qualname": "ReadOptions.set_max_skippable_internal_keys", "kind": "function", "doc": "Sets a threshold for the number of keys that can be skipped\nbefore failing an iterator seek as incomplete. With the default value of 0, a request is\nnever failed as incomplete, no matter how many keys are skipped.
\n\nDefault: 0
\n", "signature": "(self, /, num):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_background_purge_on_iterator_cleanup", "modulename": "rocksdict", "qualname": "ReadOptions.set_background_purge_on_iterator_cleanup", "kind": "function", "doc": "If true, when PurgeObsoleteFile is called in CleanupIteratorState, we schedule a background job\nin the flush job queue and delete obsolete files in background.
\n\nDefault: false
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_ignore_range_deletions", "modulename": "rocksdict", "qualname": "ReadOptions.set_ignore_range_deletions", "kind": "function", "doc": "If true, keys deleted using the DeleteRange() API will be visible to\nreaders until they are naturally deleted during compaction. This improves\nread performance in DBs with many range deletions.
\n\nDefault: false
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_verify_checksums", "modulename": "rocksdict", "qualname": "ReadOptions.set_verify_checksums", "kind": "function", "doc": "If true, all data read from underlying storage will be\nverified against corresponding checksums.
\n\nDefault: true
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_readahead_size", "modulename": "rocksdict", "qualname": "ReadOptions.set_readahead_size", "kind": "function", "doc": "If non-zero, an iterator will create a new table reader which\nperforms reads of the given size. Using a large size (> 2MB) can\nimprove the performance of forward iteration on spinning disks.
\n\nDefault: 0
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import ReadOptions\n\nopts = ReadOptions()\nopts.set_readahead_size(4_194_304)  # 4 MB\n
\n
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_tailing", "modulename": "rocksdict", "qualname": "ReadOptions.set_tailing", "kind": "function", "doc": "If true, create a tailing iterator. Note that tailing iterators\nonly support moving in the forward direction. Iterating in reverse\nor seek_to_last are not supported.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_pin_data", "modulename": "rocksdict", "qualname": "ReadOptions.set_pin_data", "kind": "function", "doc": "Specifies the value of \"pin_data\". If true, it keeps the blocks\nloaded by the iterator pinned in memory as long as the iterator is not deleted.\nIf used when reading from tables created with\nBlockBasedTableOptions::use_delta_encoding = false,\nthe iterator's property \"rocksdb.iterator.is-key-pinned\" is guaranteed to\nreturn 1.
\n\nDefault: false
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ReadOptions.set_async_io", "modulename": "rocksdict", "qualname": "ReadOptions.set_async_io", "kind": "function", "doc": "Asynchronously prefetch some data.
\n\nUsed for sequential reads and internal automatic prefetching.
\n\nDefault: false
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.ColumnFamily", "modulename": "rocksdict", "qualname": "ColumnFamily", "kind": "class", "doc": "Column family handle. This can be used in WriteBatch to specify Column Family.
\n"}, {"fullname": "rocksdict.IngestExternalFileOptions", "modulename": "rocksdict", "qualname": "IngestExternalFileOptions", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.IngestExternalFileOptions.set_move_files", "modulename": "rocksdict", "qualname": "IngestExternalFileOptions.set_move_files", "kind": "function", "doc": "Can be set to true to move the files instead of copying them.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.IngestExternalFileOptions.set_snapshot_consistency", "modulename": "rocksdict", "qualname": "IngestExternalFileOptions.set_snapshot_consistency", "kind": "function", "doc": "If set to false, an ingested file's keys could appear in existing snapshots\nthat were created before the file was ingested.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.IngestExternalFileOptions.set_allow_global_seqno", "modulename": "rocksdict", "qualname": "IngestExternalFileOptions.set_allow_global_seqno", "kind": "function", "doc": "If set to false, IngestExternalFile() will fail if the file key range\noverlaps with existing keys or tombstones in the DB.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.IngestExternalFileOptions.set_allow_blocking_flush", "modulename": "rocksdict", "qualname": "IngestExternalFileOptions.set_allow_blocking_flush", "kind": "function", "doc": "If set to false and the file key range overlaps with the memtable key range\n(memtable flush required), IngestExternalFile will fail.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.IngestExternalFileOptions.set_ingest_behind", "modulename": "rocksdict", "qualname": "IngestExternalFileOptions.set_ingest_behind", "kind": "function", "doc": "Set to true if you would like duplicate keys in the file being ingested\nto be skipped rather than overwriting existing data under that key.\nUse case: back-filling historical data into the database without\noverwriting existing, newer versions of the data.\nThis option can only be used if the DB has been running\nwith allow_ingest_behind=true since the dawn of time.\nAll files will be ingested at the bottommost level with seqno=0.
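\n\nA minimal configuration sketch (assumes IngestExternalFileOptions() is constructible with defaults; how the options object is passed to the ingestion call depends on your rocksdict version):
\n\n\n ::
\n\nfrom rocksdict import IngestExternalFileOptions\n\ningest_opts = IngestExternalFileOptions()\ningest_opts.set_move_files(True)  # move SST files instead of copying them\ningest_opts.set_snapshot_consistency(True)\n
\n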
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.DBPath", "modulename": "rocksdict", "qualname": "DBPath", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.MemtableFactory", "modulename": "rocksdict", "qualname": "MemtableFactory", "kind": "class", "doc": "Defines the underlying memtable implementation.\nSee official wiki for more information.
\n"}, {"fullname": "rocksdict.MemtableFactory.vector", "modulename": "rocksdict", "qualname": "MemtableFactory.vector", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.MemtableFactory.hash_skip_list", "modulename": "rocksdict", "qualname": "MemtableFactory.hash_skip_list", "kind": "function", "doc": "\n", "signature": "(bucket_count, height, branching_factor):", "funcdef": "def"}, {"fullname": "rocksdict.MemtableFactory.hash_link_list", "modulename": "rocksdict", "qualname": "MemtableFactory.hash_link_list", "kind": "function", "doc": "\n", "signature": "(bucket_count):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions", "modulename": "rocksdict", "qualname": "BlockBasedOptions", "kind": "class", "doc": "For configuring block-based file storage.
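\n\nBelow is a configuration sketch (the values are arbitrary illustrations, not recommendations):
\n\n\n ::
\n\nfrom rocksdict import BlockBasedOptions, Options\n\nblock_opts = BlockBasedOptions()\nblock_opts.set_block_size(16 * 1024)  # 16 KB uncompressed data blocks\nblock_opts.set_bloom_filter(10, False)  # ~10 bits per key, full (not block-based) filter\nopts = Options()\nopts.set_block_based_table_factory(block_opts)\n
\n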
\n"}, {"fullname": "rocksdict.BlockBasedOptions.set_block_size", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_block_size", "kind": "function", "doc": "Approximate size of user data packed per block. Note that the\nblock size specified here corresponds to uncompressed data. The\nactual size of the unit read from disk may be smaller if\ncompression is enabled. This parameter can be changed dynamically.
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_metadata_block_size", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_metadata_block_size", "kind": "function", "doc": "Block size for partitioned metadata. Currently applied to indexes when\nkTwoLevelIndexSearch is used and to filters when partition_filters is used.\nNote: Since in the current implementation the filters and index partitions\nare aligned, an index/filter block is created when either index or filter\nblock size reaches the specified limit.
\n\nNote: this limit is currently applied to only index blocks; a filter\npartition is cut right after an index block is cut.
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_partition_filters", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_partition_filters", "kind": "function", "doc": "Note: currently this option requires kTwoLevelIndexSearch to be set as\nwell.
\n\nUse partitioned full filters for each SST file. This option is\nincompatible with block-based filters.
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_block_cache", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_block_cache", "kind": "function", "doc": "Sets the global cache for blocks (user data is stored in a set of blocks, and\na block is the unit of reading from disk). The cache must outlive the DB instance that uses it.
\n\nIf set, use the specified cache for blocks.\nBy default, rocksdb will automatically create and use an 8MB internal cache.
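\n\nExample (a sketch; the cache sizing values are arbitrary):
\n\n\n ::
\n\nfrom rocksdict import BlockBasedOptions, Cache, Options\n\n# 512 MB block cache with an estimated per-entry charge of 16 KB\ncache = Cache.new_hyper_clock_cache(512 * 1024 * 1024, 16 * 1024)\nblock_opts = BlockBasedOptions()\nblock_opts.set_block_cache(cache)\nopts = Options()\nopts.set_block_based_table_factory(block_opts)\n
\n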
\n", "signature": "(self, /, cache):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.disable_cache", "modulename": "rocksdict", "qualname": "BlockBasedOptions.disable_cache", "kind": "function", "doc": "Disable block cache
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_bloom_filter", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_bloom_filter", "kind": "function", "doc": "Sets the filter policy to reduce disk reads.
\n", "signature": "(self, /, bits_per_key, block_based):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_cache_index_and_filter_blocks", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_cache_index_and_filter_blocks", "kind": "function", "doc": "\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_index_type", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_index_type", "kind": "function", "doc": "Defines the index type to be used for SS-table lookups.
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import BlockBasedOptions, BlockBasedIndexType, Options\n\nopts = Options()\nblock_opts = BlockBasedOptions()\nblock_opts.set_index_type(BlockBasedIndexType.hash_search())\nopts.set_block_based_table_factory(block_opts)\n
\n
\n", "signature": "(self, /, index_type):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_pin_l0_filter_and_index_blocks_in_cache", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_pin_l0_filter_and_index_blocks_in_cache", "kind": "function", "doc": "If cache_index_and_filter_blocks is true and the below is true, then\nfilter and index blocks are stored in the cache, but a reference is\nheld in the \"table reader\" object so the blocks are pinned and only\nevicted from cache when the table reader is freed.
\n\nDefault: false.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_pin_top_level_index_and_filter", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_pin_top_level_index_and_filter", "kind": "function", "doc": "If cache_index_and_filter_blocks is true and the below is true, then\nthe top-level index of partitioned filter and index blocks are stored in\nthe cache, but a reference is held in the \"table reader\" object so the\nblocks are pinned and only evicted from cache when the table reader is\nfreed. This is not limited to l0 in LSM tree.
\n\nDefault: false.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_format_version", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_format_version", "kind": "function", "doc": "Format version, reserved for backward compatibility.
\n\nSee full list\nof the supported versions.
\n\nDefault: 2.
\n", "signature": "(self, /, version):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_block_restart_interval", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_block_restart_interval", "kind": "function", "doc": "Number of keys between restart points for delta encoding of keys.\nThis parameter can be changed dynamically. Most clients should\nleave this parameter alone. The minimum value allowed is 1. Any smaller\nvalue will be silently overwritten with 1.
\n\nDefault: 16.
\n", "signature": "(self, /, interval):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_index_block_restart_interval", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_index_block_restart_interval", "kind": "function", "doc": "Same as block_restart_interval but used for the index block.\nIf you don't plan to run RocksDB before version 5.16 and you are\nusing index_block_restart_interval
> 1, you should\nprobably set the format_version
to >= 4 as it would reduce the index size.
\n\nDefault: 1.
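\n\nExample (a sketch combining this option with format_version >= 4, as suggested above):
\n\n\n ::
\n\nfrom rocksdict import BlockBasedOptions, Options\n\nblock_opts = BlockBasedOptions()\nblock_opts.set_format_version(4)\nblock_opts.set_index_block_restart_interval(16)\nopts = Options()\nopts.set_block_based_table_factory(block_opts)\n
\n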
\n", "signature": "(self, /, interval):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_data_block_index_type", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_data_block_index_type", "kind": "function", "doc": "Set the data block index type for point lookups:
\n\n\n DataBlockIndexType::BinarySearch
to use binary search within the data block.\n DataBlockIndexType::BinaryAndHash
to use the data block hash index in combination with\n the normal binary search.
\n
\n\nThe hash table utilization ratio is adjustable using set_data_block_hash_ratio
, which is\nvalid only when using DataBlockIndexType::BinaryAndHash
.
\n\nDefault: BinarySearch
\n\nExample:
\n\n\n ::
\n\nfrom rocksdict import BlockBasedOptions, DataBlockIndexType, Options\n\nopts = Options()\nblock_opts = BlockBasedOptions()\nblock_opts.set_data_block_index_type(DataBlockIndexType.binary_and_hash())\nblock_opts.set_data_block_hash_ratio(0.85)\nopts.set_block_based_table_factory(block_opts)\n
\n
\n", "signature": "(self, /, index_type):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_data_block_hash_ratio", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_data_block_hash_ratio", "kind": "function", "doc": "Set the data block hash index utilization ratio.
\n\nThe smaller the utilization ratio, the fewer hash collisions occur, which reduces the risk of a\npoint lookup falling back to binary search due to collisions. A small ratio means faster\nlookups at the price of more space overhead.
\n\nDefault: 0.75
\n", "signature": "(self, /, ratio):", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedOptions.set_checksum_type", "modulename": "rocksdict", "qualname": "BlockBasedOptions.set_checksum_type", "kind": "function", "doc": "Use the specified checksum type.\nNewly created table files will be protected with this checksum type.\nOld table files will still be readable, even though they have a different checksum type.
\n", "signature": "(self, /, checksum_type):", "funcdef": "def"}, {"fullname": "rocksdict.PlainTableFactoryOptions", "modulename": "rocksdict", "qualname": "PlainTableFactoryOptions", "kind": "class", "doc": "Used with DBOptions::set_plain_table_factory.\nSee official wiki for more\ninformation.
\n\nDefaults:
\n\n\n user_key_length: 0 (variable length)\n bloom_bits_per_key: 10\n hash_table_ratio: 0.75\n index_sparseness: 16
\n
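\n\nA hedged sketch (assumes PlainTableFactoryOptions() is constructible with the defaults above and that its fields can be assigned directly; set_plain_table_factory is named in the description above):
\n\n\n ::
\n\nfrom rocksdict import Options, PlainTableFactoryOptions\n\np_opts = PlainTableFactoryOptions()\np_opts.user_key_length = 0  # variable-length keys\np_opts.bloom_bits_per_key = 10\nopts = Options()\nopts.set_plain_table_factory(p_opts)\n
\n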
\n"}, {"fullname": "rocksdict.PlainTableFactoryOptions.full_scan_mode", "modulename": "rocksdict", "qualname": "PlainTableFactoryOptions.full_scan_mode", "kind": "variable", "doc": "\n"}, {"fullname": "rocksdict.PlainTableFactoryOptions.encoding_type", "modulename": "rocksdict", "qualname": "PlainTableFactoryOptions.encoding_type", "kind": "variable", "doc": "\n"}, {"fullname": "rocksdict.PlainTableFactoryOptions.hash_table_ratio", "modulename": "rocksdict", "qualname": "PlainTableFactoryOptions.hash_table_ratio", "kind": "variable", "doc": "\n"}, {"fullname": "rocksdict.PlainTableFactoryOptions.store_index_in_file", "modulename": "rocksdict", "qualname": "PlainTableFactoryOptions.store_index_in_file", "kind": "variable", "doc": "\n"}, {"fullname": "rocksdict.PlainTableFactoryOptions.index_sparseness", "modulename": "rocksdict", "qualname": "PlainTableFactoryOptions.index_sparseness", "kind": "variable", "doc": "\n"}, {"fullname": "rocksdict.PlainTableFactoryOptions.bloom_bits_per_key", "modulename": "rocksdict", "qualname": "PlainTableFactoryOptions.bloom_bits_per_key", "kind": "variable", "doc": "\n"}, {"fullname": "rocksdict.PlainTableFactoryOptions.user_key_length", "modulename": "rocksdict", "qualname": "PlainTableFactoryOptions.user_key_length", "kind": "variable", "doc": "\n"}, {"fullname": "rocksdict.PlainTableFactoryOptions.huge_page_tlb_size", "modulename": "rocksdict", "qualname": "PlainTableFactoryOptions.huge_page_tlb_size", "kind": "variable", "doc": "\n"}, {"fullname": "rocksdict.CuckooTableOptions", "modulename": "rocksdict", "qualname": "CuckooTableOptions", "kind": "class", "doc": "Configuration of cuckoo-based storage.
\n"}, {"fullname": "rocksdict.CuckooTableOptions.set_hash_ratio", "modulename": "rocksdict", "qualname": "CuckooTableOptions.set_hash_ratio", "kind": "function", "doc": "Determines the utilization of hash tables. Smaller values\nresult in larger hash tables with fewer collisions.\nDefault: 0.9
\n", "signature": "(self, /, ratio):", "funcdef": "def"}, {"fullname": "rocksdict.CuckooTableOptions.set_max_search_depth", "modulename": "rocksdict", "qualname": "CuckooTableOptions.set_max_search_depth", "kind": "function", "doc": "A property used by builder to determine the depth to go to\nto search for a path to displace elements in case of\ncollision. See Builder.MakeSpaceForKey method. Higher\nvalues result in more efficient hash tables with fewer\nlookups but take more time to build.\nDefault: 100
\n", "signature": "(self, /, depth):", "funcdef": "def"}, {"fullname": "rocksdict.CuckooTableOptions.set_cuckoo_block_size", "modulename": "rocksdict", "qualname": "CuckooTableOptions.set_cuckoo_block_size", "kind": "function", "doc": "In case of collision while inserting, the builder\nattempts to insert in the next cuckoo_block_size\nlocations before skipping over to the next Cuckoo hash\nfunction. This makes lookups more cache friendly in case\nof collisions.\nDefault: 5
\n", "signature": "(self, /, size):", "funcdef": "def"}, {"fullname": "rocksdict.CuckooTableOptions.set_identity_as_first_hash", "modulename": "rocksdict", "qualname": "CuckooTableOptions.set_identity_as_first_hash", "kind": "function", "doc": "If this option is enabled, user key is treated as uint64_t and its value\nis used as hash value directly. This option changes builder's behavior.\nReader ignore this option and behave according to what specified in\ntable property.\nDefault: false
\n", "signature": "(self, /, flag):", "funcdef": "def"}, {"fullname": "rocksdict.CuckooTableOptions.set_use_module_hash", "modulename": "rocksdict", "qualname": "CuckooTableOptions.set_use_module_hash", "kind": "function", "doc": "If this option is set to true, modulo is used during hash calculation.\nThis often yields better space efficiency at the cost of performance.\nIf this option is set to false, the number of entries in the table is constrained to\nbe a power of two, and bitwise AND is used to calculate the hash, which is faster in general.\nDefault: true
\n", "signature": "(self, /, flag):", "funcdef": "def"}, {"fullname": "rocksdict.UniversalCompactOptions", "modulename": "rocksdict", "qualname": "UniversalCompactOptions", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.UniversalCompactOptions.max_size_amplification_percent", "modulename": "rocksdict", "qualname": "UniversalCompactOptions.max_size_amplification_percent", "kind": "variable", "doc": "Sets the size amplification.
\n\nIt is defined as the amount (in percentage) of\nadditional storage needed to store a single byte of data in the database.\nFor example, a size amplification of 2% means that a database that\ncontains 100 bytes of user-data may occupy up to 102 bytes of\nphysical storage. By this definition, a fully compacted database has\na size amplification of 0%. RocksDB uses the following heuristic\nto calculate size amplification: it assumes that all files excluding\nthe earliest file contribute to the size amplification.
\n\nDefault: 200, which means that a 100 byte database could require up to 300 bytes of storage.
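\n\nA hedged sketch (assumes UniversalCompactOptions() is constructible and applied via Options.set_universal_compaction_options, mirroring the rust-rocksdb API; verify against your rocksdict version):
\n\n\n ::
\n\nfrom rocksdict import DBCompactionStyle, Options, UniversalCompactOptions\n\nuni = UniversalCompactOptions()\nuni.max_size_amplification_percent = 200  # the default\nopt = Options()\nopt.set_compaction_style(DBCompactionStyle.universal())\nopt.set_universal_compaction_options(uni)\n
\n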
\n"}, {"fullname": "rocksdict.UniversalCompactOptions.compression_size_percent", "modulename": "rocksdict", "qualname": "UniversalCompactOptions.compression_size_percent", "kind": "variable", "doc": "Sets the percentage of compression size.
\n\nIf this option is set to -1, all the output files\nwill follow the specified compression type.
\n\nIf this option is not negative, we will try to make sure the compressed\nsize is just above this value. In normal cases, at least this percentage\nof data will be compressed.\nWhen compacting to a new file, the criterion for whether it should be\ncompressed is the following: assuming the list of files sorted\nby generation time is\n A1...An B1...Bm C1...Ct\nwhere A1 is the newest and Ct is the oldest, and we are going to compact\nB1...Bm, we calculate the total size of all the files as total_size, as\nwell as the total size of C1...Ct as total_C; the compaction output file\nwill be compressed iff\n total_C / total_size < this percentage
\n\nDefault: -1
\n"}, {"fullname": "rocksdict.UniversalCompactOptions.stop_style", "modulename": "rocksdict", "qualname": "UniversalCompactOptions.stop_style", "kind": "variable", "doc": "Sets the algorithm used to stop picking files into a single compaction run.
\n\nDefault: Total
\n"}, {"fullname": "rocksdict.UniversalCompactOptions.min_merge_width", "modulename": "rocksdict", "qualname": "UniversalCompactOptions.min_merge_width", "kind": "variable", "doc": "Sets the minimum number of files in a single compaction run.
\n\nDefault: 2
\n"}, {"fullname": "rocksdict.UniversalCompactOptions.size_ratio", "modulename": "rocksdict", "qualname": "UniversalCompactOptions.size_ratio", "kind": "variable", "doc": "Sets the percentage flexibility while comparing file size.\nIf the candidate file(s) size is 1% smaller than the next file's size,\nthen include the next file into this candidate set.
\n\nDefault: 1
\n"}, {"fullname": "rocksdict.UniversalCompactOptions.max_merge_width", "modulename": "rocksdict", "qualname": "UniversalCompactOptions.max_merge_width", "kind": "variable", "doc": "Sets the maximum number of files in a single compaction run.
\n\nDefault: UINT_MAX
\n"}, {"fullname": "rocksdict.UniversalCompactionStopStyle", "modulename": "rocksdict", "qualname": "UniversalCompactionStopStyle", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.UniversalCompactionStopStyle.similar", "modulename": "rocksdict", "qualname": "UniversalCompactionStopStyle.similar", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.UniversalCompactionStopStyle.total", "modulename": "rocksdict", "qualname": "UniversalCompactionStopStyle.total", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.SliceTransform", "modulename": "rocksdict", "qualname": "SliceTransform", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.SliceTransform.create_fixed_prefix", "modulename": "rocksdict", "qualname": "SliceTransform.create_fixed_prefix", "kind": "function", "doc": "\n", "signature": "(len):", "funcdef": "def"}, {"fullname": "rocksdict.SliceTransform.create_max_len_prefix", "modulename": "rocksdict", "qualname": "SliceTransform.create_max_len_prefix", "kind": "function", "doc": "prefix max length at len
. If the key is longer than len
,\nthe prefix will have length len
; if the key is shorter than len
,\nthe prefix will be the whole key.
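\n\nExample (a sketch; Options::set_prefix_extractor is referenced under BlockBasedIndexType.hash_search below):
\n\n\n ::
\n\nfrom rocksdict import Options, SliceTransform\n\nopts = Options()\n# treat at most the first 8 bytes of each key as its prefix\nopts.set_prefix_extractor(SliceTransform.create_max_len_prefix(8))\n
\n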
\n", "signature": "(len):", "funcdef": "def"}, {"fullname": "rocksdict.SliceTransform.create_noop", "modulename": "rocksdict", "qualname": "SliceTransform.create_noop", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DataBlockIndexType", "modulename": "rocksdict", "qualname": "DataBlockIndexType", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.DataBlockIndexType.binary_search", "modulename": "rocksdict", "qualname": "DataBlockIndexType.binary_search", "kind": "function", "doc": "Use binary search when performing point lookup for keys in data blocks.\nThis is the default.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DataBlockIndexType.binary_and_hash", "modulename": "rocksdict", "qualname": "DataBlockIndexType.binary_and_hash", "kind": "function", "doc": "Appends a compact hash table to the end of the data block for efficient indexing. Backwards\ncompatible with databases created without this feature. Once turned on, existing data will\nbe gradually converted to the hash index format.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedIndexType", "modulename": "rocksdict", "qualname": "BlockBasedIndexType", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.BlockBasedIndexType.binary_search", "modulename": "rocksdict", "qualname": "BlockBasedIndexType.binary_search", "kind": "function", "doc": "A space efficient index block that is optimized for\nbinary-search-based index.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedIndexType.hash_search", "modulename": "rocksdict", "qualname": "BlockBasedIndexType.hash_search", "kind": "function", "doc": "The hash index, if enabled, will perform a hash lookup if\na prefix extractor has been provided through Options::set_prefix_extractor.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.BlockBasedIndexType.two_level_index_search", "modulename": "rocksdict", "qualname": "BlockBasedIndexType.two_level_index_search", "kind": "function", "doc": "A two-level index implementation. Both levels are binary search indexes.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.Cache", "modulename": "rocksdict", "qualname": "Cache", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.Cache.new_hyper_clock_cache", "modulename": "rocksdict", "qualname": "Cache.new_hyper_clock_cache", "kind": "function", "doc": "Creates a HyperClockCache with capacity in bytes.
\n\nestimated_entry_charge
is an important tuning parameter. The optimal\nchoice at any given time is\n(cache.get_usage() - 64 * cache.get_table_address_count()) /\ncache.get_occupancy_count()
, or approximately cache.get_usage() /\ncache.get_occupancy_count()
.
\n\nHowever, the value cannot be changed dynamically, so as the cache\ncomposition changes at runtime, the following tradeoffs apply:
\n\n\n- If the estimate is substantially too high (e.g., 25% higher),\nthe cache may have to evict entries to prevent load factors that\nwould dramatically affect lookup times.
\n- If the estimate is substantially too low (e.g., less than half),\nthen meta data space overhead is substantially higher.
\n
\n\nThe latter is generally preferable, and picking the larger of\nblock size and meta data block size is a reasonable choice that\nerrs towards this side.
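\n\nExample (a sketch; the capacity and per-entry charge are arbitrary):
\n\n\n ::
\n\nfrom rocksdict import Cache\n\n# 1 GB cache; estimate each cached block at roughly 16 KB\ncache = Cache.new_hyper_clock_cache(1024 * 1024 * 1024, 16 * 1024)\nprint(cache.get_usage())\n
\n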
\n", "signature": "(capacity, estimated_entry_charge):", "funcdef": "def"}, {"fullname": "rocksdict.Cache.get_usage", "modulename": "rocksdict", "qualname": "Cache.get_usage", "kind": "function", "doc": "Returns the Cache memory usage
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Cache.get_pinned_usage", "modulename": "rocksdict", "qualname": "Cache.get_pinned_usage", "kind": "function", "doc": "Returns pinned memory usage
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Cache.set_capacity", "modulename": "rocksdict", "qualname": "Cache.set_capacity", "kind": "function", "doc": "Sets cache capacity
\n", "signature": "(self, /, capacity):", "funcdef": "def"}, {"fullname": "rocksdict.ChecksumType", "modulename": "rocksdict", "qualname": "ChecksumType", "kind": "class", "doc": "Used by BlockBasedOptions::set_checksum_type.
\n\nCall the corresponding functions of each\nto get one of the following.
\n\n\n- NoChecksum
\n- CRC32c
\n- XXHash
\n- XXHash64
\n- XXH3
\n
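\n\nExample (a sketch using BlockBasedOptions.set_checksum_type, documented earlier):
\n\n\n ::
\n\nfrom rocksdict import BlockBasedOptions, ChecksumType, Options\n\nblock_opts = BlockBasedOptions()\nblock_opts.set_checksum_type(ChecksumType.xxh3())\nopts = Options()\nopts.set_block_based_table_factory(block_opts)\n
\n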
\n"}, {"fullname": "rocksdict.ChecksumType.no_checksum", "modulename": "rocksdict", "qualname": "ChecksumType.no_checksum", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.ChecksumType.crc32c", "modulename": "rocksdict", "qualname": "ChecksumType.crc32c", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.ChecksumType.xxhash", "modulename": "rocksdict", "qualname": "ChecksumType.xxhash", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.ChecksumType.xxhash64", "modulename": "rocksdict", "qualname": "ChecksumType.xxhash64", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.ChecksumType.xxh3", "modulename": "rocksdict", "qualname": "ChecksumType.xxh3", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompactionStyle", "modulename": "rocksdict", "qualname": "DBCompactionStyle", "kind": "class", "doc": "This is to be treated as an enum.
\n\nCall the corresponding functions of each\nto get one of the following.
\n\n\n- Level
\n- Universal
\n- Fifo
\n
\n\nBelow is an example to set compaction style to Fifo.
\n\nExample:
\n\n\n ::
\n\nopt = Options()\nopt.set_compaction_style(DBCompactionStyle.fifo())\n
\n
\n"}, {"fullname": "rocksdict.DBCompactionStyle.level", "modulename": "rocksdict", "qualname": "DBCompactionStyle.level", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompactionStyle.universal", "modulename": "rocksdict", "qualname": "DBCompactionStyle.universal", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompactionStyle.fifo", "modulename": "rocksdict", "qualname": "DBCompactionStyle.fifo", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompressionType", "modulename": "rocksdict", "qualname": "DBCompressionType", "kind": "class", "doc": "This is to be treated as an enum.
\n\nCall the corresponding functions of each\nto get one of the following.
\n\n\n- None
\n- Snappy
\n- Zlib
\n- Bz2
\n- Lz4
\n- Lz4hc
\n- Zstd
\n
\n\nBelow is an example to set compression type to Snappy.
\n\nExample:
\n\n\n ::
\n\nopt = Options()\nopt.set_compression_type(DBCompressionType.snappy())\n
\n
\n"}, {"fullname": "rocksdict.DBCompressionType.none", "modulename": "rocksdict", "qualname": "DBCompressionType.none", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompressionType.snappy", "modulename": "rocksdict", "qualname": "DBCompressionType.snappy", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompressionType.zlib", "modulename": "rocksdict", "qualname": "DBCompressionType.zlib", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompressionType.bz2", "modulename": "rocksdict", "qualname": "DBCompressionType.bz2", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompressionType.lz4", "modulename": "rocksdict", "qualname": "DBCompressionType.lz4", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompressionType.lz4hc", "modulename": "rocksdict", "qualname": "DBCompressionType.lz4hc", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBCompressionType.zstd", "modulename": "rocksdict", "qualname": "DBCompressionType.zstd", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBRecoveryMode", "modulename": "rocksdict", "qualname": "DBRecoveryMode", "kind": "class", "doc": "This is to be treated as an enum.
\n\nCall the corresponding functions of each\nto get one of the following.
\n\n\n- TolerateCorruptedTailRecords
\n- AbsoluteConsistency
\n- PointInTime
\n- SkipAnyCorruptedRecord
\n
\n\nBelow is an example to set recovery mode to PointInTime.
\n\nExample:
\n\n\n ::
\n\nopt = Options()\nopt.set_wal_recovery_mode(DBRecoveryMode.point_in_time())\n
\n
\n"}, {"fullname": "rocksdict.DBRecoveryMode.tolerate_corrupted_tail_records", "modulename": "rocksdict", "qualname": "DBRecoveryMode.tolerate_corrupted_tail_records", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBRecoveryMode.absolute_consistency", "modulename": "rocksdict", "qualname": "DBRecoveryMode.absolute_consistency", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBRecoveryMode.point_in_time", "modulename": "rocksdict", "qualname": "DBRecoveryMode.point_in_time", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.DBRecoveryMode.skip_any_corrupted_record", "modulename": "rocksdict", "qualname": "DBRecoveryMode.skip_any_corrupted_record", "kind": "function", "doc": "\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.Env", "modulename": "rocksdict", "qualname": "Env", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.Env.mem_env", "modulename": "rocksdict", "qualname": "Env.mem_env", "kind": "function", "doc": "Returns a new environment that stores its data in memory and delegates\nall non-file-storage tasks to base_env.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.Env.set_background_threads", "modulename": "rocksdict", "qualname": "Env.set_background_threads", "kind": "function", "doc": "Sets the number of background worker threads of a specific thread pool for this environment.\nLOW
is the default pool.
\n\nDefault: 1
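\n\nA hedged sketch (assumes the Env is attached via Options.set_env, mirroring the rust-rocksdb API; verify against your rocksdict version):
\n\n\n ::
\n\nfrom rocksdict import Env, Options\n\nenv = Env.mem_env()  # in-memory environment, documented above\nenv.set_background_threads(4)  # LOW pool\nopts = Options()\nopts.set_env(env)\n
\n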
\n", "signature": "(self, /, num_threads):", "funcdef": "def"}, {"fullname": "rocksdict.Env.set_high_priority_background_threads", "modulename": "rocksdict", "qualname": "Env.set_high_priority_background_threads", "kind": "function", "doc": "Sets the size of the high priority thread pool that can be used to\nprevent compactions from stalling memtable flushes.
\n", "signature": "(self, /, n):", "funcdef": "def"}, {"fullname": "rocksdict.Env.set_low_priority_background_threads", "modulename": "rocksdict", "qualname": "Env.set_low_priority_background_threads", "kind": "function", "doc": "Sets the size of the low priority thread pool that can be used to\nprevent compactions from stalling memtable flushes.
\n", "signature": "(self, /, n):", "funcdef": "def"}, {"fullname": "rocksdict.Env.set_bottom_priority_background_threads", "modulename": "rocksdict", "qualname": "Env.set_bottom_priority_background_threads", "kind": "function", "doc": "Sets the size of the bottom priority thread pool that can be used to\nprevent compactions from stalling memtable flushes.
\n", "signature": "(self, /, n):", "funcdef": "def"}, {"fullname": "rocksdict.Env.join_all_threads", "modulename": "rocksdict", "qualname": "Env.join_all_threads", "kind": "function", "doc": "Wait for all threads started by StartThread to terminate.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Env.lower_thread_pool_io_priority", "modulename": "rocksdict", "qualname": "Env.lower_thread_pool_io_priority", "kind": "function", "doc": "Lowering IO priority for threads from the specified pool.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Env.lower_high_priority_thread_pool_io_priority", "modulename": "rocksdict", "qualname": "Env.lower_high_priority_thread_pool_io_priority", "kind": "function", "doc": "Lowering IO priority for high priority thread pool.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Env.lower_thread_pool_cpu_priority", "modulename": "rocksdict", "qualname": "Env.lower_thread_pool_cpu_priority", "kind": "function", "doc": "Lowering CPU priority for threads from the specified pool.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.Env.lower_high_priority_thread_pool_cpu_priority", "modulename": "rocksdict", "qualname": "Env.lower_high_priority_thread_pool_cpu_priority", "kind": "function", "doc": "Lowering CPU priority for high priority thread pool.
\n", "signature": "(self, /):", "funcdef": "def"}, {"fullname": "rocksdict.FifoCompactOptions", "modulename": "rocksdict", "qualname": "FifoCompactOptions", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.FifoCompactOptions.max_table_files_size", "modulename": "rocksdict", "qualname": "FifoCompactOptions.max_table_files_size", "kind": "variable", "doc": "Sets the max table file size.
\n\nOnce the total size of the table files reaches this, we will delete the oldest\ntable file.
\n\nDefault: 1GB
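\n\nA hedged sketch (assumes FifoCompactOptions() is constructible and applied via Options.set_fifo_compaction_options, mirroring the rust-rocksdb API; verify against your rocksdict version):
\n\n\n ::
\n\nfrom rocksdict import DBCompactionStyle, FifoCompactOptions, Options\n\nfifo = FifoCompactOptions()\nfifo.max_table_files_size = 2 * 1024 ** 3  # 2 GB\nopt = Options()\nopt.set_compaction_style(DBCompactionStyle.fifo())\nopt.set_fifo_compaction_options(fifo)\n
\n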
\n"}, {"fullname": "rocksdict.CompactOptions", "modulename": "rocksdict", "qualname": "CompactOptions", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.CompactOptions.set_exclusive_manual_compaction", "modulename": "rocksdict", "qualname": "CompactOptions.set_exclusive_manual_compaction", "kind": "function", "doc": "If more than one thread calls manual compaction,\nonly one will actually schedule it while the other threads will simply wait\nfor the scheduled manual compaction to complete. If exclusive_manual_compaction\nis set to true, the call will disable scheduling of automatic compaction jobs\nand wait for existing automatic compaction jobs to finish.
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.CompactOptions.set_bottommost_level_compaction", "modulename": "rocksdict", "qualname": "CompactOptions.set_bottommost_level_compaction", "kind": "function", "doc": "Sets bottommost level compaction.
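\n\nExample (a sketch; assumes CompactOptions() is constructible with defaults):
\n\n\n ::
\n\nfrom rocksdict import BottommostLevelCompaction, CompactOptions\n\nc_opts = CompactOptions()\nc_opts.set_bottommost_level_compaction(BottommostLevelCompaction.force())\n
\n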
\n", "signature": "(self, /, lvl):", "funcdef": "def"}, {"fullname": "rocksdict.CompactOptions.set_change_level", "modulename": "rocksdict", "qualname": "CompactOptions.set_change_level", "kind": "function", "doc": "If true, compacted files will be moved to the minimum level capable\nof holding the data, or to the given level (specified by a non-negative target_level).
\n", "signature": "(self, /, v):", "funcdef": "def"}, {"fullname": "rocksdict.CompactOptions.set_target_level", "modulename": "rocksdict", "qualname": "CompactOptions.set_target_level", "kind": "function", "doc": "If change_level is true and target_level has a non-negative value, compacted\nfiles will be moved to target_level.
\n", "signature": "(self, /, lvl):", "funcdef": "def"}, {"fullname": "rocksdict.BottommostLevelCompaction", "modulename": "rocksdict", "qualname": "BottommostLevelCompaction", "kind": "class", "doc": "\n"}, {"fullname": "rocksdict.BottommostLevelCompaction.skip", "modulename": "rocksdict", "qualname": "BottommostLevelCompaction.skip", "kind": "function", "doc": "Skip bottommost level compaction
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.BottommostLevelCompaction.if_have_compaction_filter", "modulename": "rocksdict", "qualname": "BottommostLevelCompaction.if_have_compaction_filter", "kind": "function", "doc": "Only compact the bottommost level if there is a compaction filter.\nThis is the default option.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.BottommostLevelCompaction.force", "modulename": "rocksdict", "qualname": "BottommostLevelCompaction.force", "kind": "function", "doc": "Always compact bottommost level
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.BottommostLevelCompaction.force_optimized", "modulename": "rocksdict", "qualname": "BottommostLevelCompaction.force_optimized", "kind": "function", "doc": "Always compact bottommost level but in bottommost level avoid\ndouble-compacting files created in the same compaction
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.KeyEncodingType", "modulename": "rocksdict", "qualname": "KeyEncodingType", "kind": "class", "doc": "Used in PlainTableFactoryOptions
.
\n"}, {"fullname": "rocksdict.KeyEncodingType.plain", "modulename": "rocksdict", "qualname": "KeyEncodingType.plain", "kind": "function", "doc": "Always write full keys.
\n", "signature": "():", "funcdef": "def"}, {"fullname": "rocksdict.KeyEncodingType.prefix", "modulename": "rocksdict", "qualname": "KeyEncodingType.prefix", "kind": "function", "doc": "Find opportunities to write the same prefix for multiple rows.
\n", "signature": "():", "funcdef": "def"}];
// mirrored in build-search-index.js (part 1)
// Also split on html tags. this is a cheap heuristic, but good enough.