#218 changed MaxOpenFiles to 4096, which lets RAM usage easily exceed 64GB when our application takes a snapshot or does a genesis state export, both of which iterate over the DB. We reduced it to 1024 and were able to run within 30GB.

That change also made each SST file's target size 64MB (if we look at how RocksDB decides the file size), so I am not sure which file size the 2MB budget per open file is based on.

We should consider the node's capability when deciding the proper number for running RocksDB, instead of using a fixed value.
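One way to make the value capability-dependent is to derive MaxOpenFiles from a memory budget rather than hard-coding it. The sketch below is a hypothetical heuristic, not tm-db code: the function name, constants, and the ~2MB-per-open-file cost are assumptions taken from the discussion above (RocksDB pins index and filter blocks for open SST files in the table cache, so the per-file cost depends on the SST target size).

```go
package main

import "fmt"

// Hypothetical heuristic (not part of tm-db): pick MaxOpenFiles from a
// memory budget instead of a fixed 4096.
const (
	perOpenFileBytes = 2 << 20 // assumed ~2MB of index/filter blocks per open 64MB SST
	minOpenFiles     = 256     // floor so small nodes still function
	maxOpenFilesCap  = 4096    // cap at the current hard-coded default
)

// maxOpenFilesFor returns a MaxOpenFiles value that keeps the table
// cache's index/filter overhead roughly within the given budget.
func maxOpenFilesFor(memoryBudgetBytes int64) int {
	n := int(memoryBudgetBytes / perOpenFileBytes)
	if n < minOpenFiles {
		return minOpenFiles
	}
	if n > maxOpenFilesCap {
		return maxOpenFilesCap
	}
	return n
}

func main() {
	// A node willing to spend 2GB on open-file overhead:
	fmt.Println(maxOpenFilesFor(2 << 30)) // 1024
	// A small node with a 256MB budget gets the floor:
	fmt.Println(maxOpenFilesFor(256 << 20)) // 256
}
```

The resulting value would then be passed to RocksDB's options in place of the constant; the budget itself could come from a config flag or be probed from the host's total memory.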
The setting in question: tm-db/rocksdb.go, line 45 in d1b9b74.
ref:
https://github.com/facebook/rocksdb/wiki/Memory-usage-in-RocksDB