How to reduce the memory footprint of the initial sync:

- disable the RocksDB cache with the parameter `-dbcache=0` (the default cache size is 500 MB)
- run Blockbook with the parameter `-workers=1`. This disables bulk import mode, which caches a lot of data in memory (outside of the RocksDB cache). The import will run about twice as slowly, but especially for smaller blockchains this is not a problem. See the example invocation below.
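As a rough sketch, a start command combining both options could look like the following. Only `-dbcache=0` and `-workers=1` come from the tips above; the remaining flags and the config file path are assumptions and depend on your installation.

```
# -dbcache=0 and -workers=1 reduce memory usage during the initial sync;
# the config path and other flags below are examples, adjust them to your setup
./blockbook -blockchaincfg=build/blockchaincfg.json \
            -sync \
            -dbcache=0 \
            -workers=1 \
            -logtostderr
```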
Please add your experience to this issue.
Blockbook was killed during the initial import, most commonly by the OOM killer. By default, Blockbook performs the initial import in bulk import mode, which for performance reasons does not immediately store all data to the database. If Blockbook is killed during this phase, the database is left in an inconsistent state.
See above for how to reduce the memory footprint, then delete the database files and run the import again.
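A minimal sketch of the recovery, assuming the database directory shown below; the actual path is only an example and depends on how you configured or packaged Blockbook:

```
# stop the blockbook service or process first
# the database path is only an example; use the directory you configured for Blockbook's data
rm -rf /opt/coins/data/mycoin/blockbook
# then run the initial import again, ideally with the reduced-memory options from above
./blockbook -blockchaincfg=build/blockchaincfg.json -sync -dbcache=0 -workers=1
```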
Check this or this issue for more info.
Your coin's block/transaction data may not be compatible with the `BitcoinParser` `ParseBlock`/`ParseTx` methods, which are used by default. In that case, implement your coin in a similar way as we did for Zcash and some other coins. The principle is not to parse the block/transaction data in Blockbook but instead to get already parsed transactions as JSON from the backend.
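To illustrate what "parsed transactions as JSON from the backend" means, the call below asks a Bitcoin-like daemon for a transaction via the standard `getrawtransaction` RPC with the verbose flag, so the daemon returns decoded JSON instead of raw bytes. The RPC credentials, port and txid are placeholders.

```
# placeholders: RPC credentials, port and <txid> depend on your backend
curl --user rpcuser:rpcpassword \
     --data-binary '{"jsonrpc":"1.0","id":"bb","method":"getrawtransaction","params":["<txid>", true]}' \
     -H 'content-type: text/plain;' \
     http://127.0.0.1:8332/
# with the verbose parameter set to true, the daemon returns the transaction
# already decoded as JSON, so Blockbook does not have to parse the raw data itself
```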
Blockbook stores data in the key-value store RocksDB. The database format is described here.