Trying to fix for v7.9.2 #106

Open
wants to merge 4 commits into master
70 changes: 70 additions & 0 deletions docs/api/database.rst
@@ -174,6 +174,76 @@ Database object

:rtype: :py:class:`rocksdb.BaseIterator`

.. py:method:: iterator(start=None, column_family=None, iterate_lower_bound=None, iterate_upper_bound=None, reverse=False, include_key=True, include_value=True, fill_cache=True, prefix_same_as_start=False, auto_prefix_mode=False)

:param start: prefix to seek to
:type start: bytes

:param column_family: column family handle
:type column_family: :py:class:`rocksdb.ColumnFamilyHandle`

:param iterate_lower_bound:
defines the smallest key at which the backward iterator can return an entry.
Once the bound is passed, Valid() will be false. `iterate_lower_bound` is
inclusive, i.e. the bound value is a valid entry.
If prefix_extractor is not null, the Seek target and `iterate_lower_bound`
need to have the same prefix. This is because ordering is not guaranteed
outside of the prefix domain.
:type iterate_lower_bound: bytes

:param iterate_upper_bound:
defines the extent up to which the forward iterator
can return entries. Once the bound is reached, Valid() will be false.
`iterate_upper_bound` is exclusive, i.e. the bound value is
not a valid entry. If prefix_extractor is not null:
1. If auto_prefix_mode = true, iterate_upper_bound will be used
to infer whether prefix iterating (e.g. applying prefix bloom filter)
can be used within RocksDB. This is done by comparing
iterate_upper_bound with the seek key.
2. If auto_prefix_mode = false, iterate_upper_bound only takes
effect if it shares the same prefix as the seek key. If
iterate_upper_bound is outside the prefix of the seek key, then keys
returned outside the prefix range will be undefined, just as if
iterate_upper_bound = null.
If iterate_upper_bound is not null, SeekToLast() will position the iterator
at the first key smaller than iterate_upper_bound.
:type iterate_upper_bound: bytes

:param reverse: run the iteration in reverse - using `reversed` is also supported
:type reverse: bool

:param include_key: the iterator should include the key in each iteration
:type include_key: bool

:param include_value: the iterator should include the value in each iteration
:type include_value: bool

:param fill_cache: Should the "data block"/"index block" read for this iteration be placed in
block cache? Callers may wish to set this field to false for bulk scans.
This would help not to change the eviction order of existing items in the
block cache. Default: true
:type fill_cache: bool

:param prefix_same_as_start:
Enforce that the iterator only iterates over the same prefix as the seek.
This option is effective only for prefix seeks, i.e. prefix_extractor is
non-null for the column family and total_order_seek is false. Unlike
iterate_upper_bound, prefix_same_as_start only works within a prefix
but in both directions. Default: false
:type prefix_same_as_start: bool

:param auto_prefix_mode: When true, total_order_seek = true is used by default, and RocksDB can
selectively enable prefix seek mode if it won't generate a different result
from total_order_seek, based on the seek key and the iterator upper bound.
Not supported in ROCKSDB_LITE mode: even when set to true,
prefix mode is not used. Default: false
:type auto_prefix_mode: bool

:returns:
An iterator object which is valid and ready to use. It will be a key, item, or value
iterator depending on the arguments provided.
:rtype: :py:class:`rocksdb.BaseIterator`

.. py:method:: snapshot()

Return a handle to the current DB state.
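For reference, a minimal usage sketch of the iterator() method documented above; the database path and keys are hypothetical, and the snippet assumes the bindings built from this branch:

    import rocksdb

    # Hypothetical database and keys, for illustration only.
    db = rocksdb.DB("example.db", rocksdb.Options(create_if_missing=True))
    db.put(b"key:0001", b"a")
    db.put(b"key:0002", b"b")

    # Forward iteration over key/value pairs, seeking to `start` and stopping
    # before the exclusive `iterate_upper_bound`.
    it = db.iterator(start=b"key:0001", iterate_upper_bound=b"key:0003")
    for key, value in it:
        print(key, value)

    # Key-only iteration in reverse.
    for key in db.iterator(start=b"key:0002", include_value=False, reverse=True):
        print(key)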
4 changes: 2 additions & 2 deletions rocksdb/_rocksdb.pyx
@@ -1672,7 +1672,7 @@ cdef class DB(object):

def __dealloc__(self):
self.close()

def close(self):
cdef ColumnFamilyOptions copts
if not self.db == NULL:
@@ -2332,8 +2332,8 @@ cdef class BackupEngine(object):

c_backup_dir = path_to_string(backup_dir)
st = backup.BackupEngine_Open(
+ backup.BackupEngineOptions(c_backup_dir),
env.Env_Default(),
- backup.BackupableDBOptions(c_backup_dir),
cython.address(self.engine))

check_status(st)
8 changes: 4 additions & 4 deletions rocksdb/backup.pxd
@@ -9,11 +9,11 @@ from status cimport Status
from db cimport DB
from env cimport Env

- cdef extern from "rocksdb/utilities/backupable_db.h" namespace "rocksdb":
+ cdef extern from "rocksdb/utilities/backup_engine.h" namespace "rocksdb":
ctypedef uint32_t BackupID

- cdef cppclass BackupableDBOptions:
-     BackupableDBOptions(const string& backup_dir)
+ cdef cppclass BackupEngineOptions:
+     BackupEngineOptions(const string& backup_dir)

cdef struct BackupInfo:
BackupID backup_id
@@ -30,6 +30,6 @@ cdef extern from "rocksdb/utilities/backupable_db.h" namespace "rocksdb":
Status RestoreDBFromLatestBackup(string&, string&) nogil except+

cdef Status BackupEngine_Open "rocksdb::BackupEngine::Open"(
+ BackupEngineOptions&,
Env*,
- BackupableDBOptions&,
BackupEngine**)
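The switch from BackupableDBOptions to BackupEngineOptions only touches the Cython declarations; the Python-level backup API is presumably unchanged. A minimal sketch, assuming the existing rocksdb.BackupEngine wrapper and hypothetical paths:

    import rocksdb

    # Hypothetical paths, for illustration only.
    db = rocksdb.DB("example.db", rocksdb.Options(create_if_missing=True))

    # BackupEngine_Open is now declared with BackupEngineOptions internally,
    # but the Python-level wrapper keeps its existing interface.
    backup = rocksdb.BackupEngine("example.db/backups")
    backup.create_backup(db, flush_before_backup=True)

    # Restore the most recent backup into a database directory.
    backup.restore_latest_backup("restored.db", "restored.db")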
2 changes: 1 addition & 1 deletion setup.py
@@ -5,7 +5,7 @@


extra_compile_args = [
- '-std=c++11',
+ '-std=c++17',
'-O3',
'-Wall',
'-Wextra',