Describe the bug
GM! We're re-syncing one of our Arbi nodes (offchainlabs/nitro-node:v3.2.1-d81324d-amd64) with Pebble/PBSS, and this goes well up until around block 230M, where the sync process slows to an effective ~4 blocks/sec., as the logs show:
nitro-node-1 | INFO [09-11|08:43:36.506] created block l2Block=230,137,454 l2BlockHash=a7e0c2..4b27c9
nitro-node-1 | INFO [09-11|08:43:37.506] created block l2Block=230,137,458 l2BlockHash=e22537..011543
nitro-node-1 | INFO [09-11|08:43:38.507] created block l2Block=230,137,462 l2BlockHash=8e74e8..45aa18
nitro-node-1 | INFO [09-11|08:43:39.508] created block l2Block=230,137,466 l2BlockHash=1977e8..27c462
nitro-node-1 | INFO [09-11|08:43:40.509] created block l2Block=230,137,470 l2BlockHash=e609fa..f7f786
nitro-node-1 | INFO [09-11|08:43:41.510] created block l2Block=230,137,474 l2BlockHash=ee78c1..7ff016
nitro-node-1 | INFO [09-11|08:43:42.511] created block l2Block=230,137,478 l2BlockHash=c66ec6..e9f955
nitro-node-1 | INFO [09-11|08:43:43.511] created block l2Block=230,137,481 l2BlockHash=1dd312..bdf686
We tried several L1 endpoints, both third-party and several of our own; the sync speed was unaffected and remained at ~4 blocks/sec.
To Reproduce
Steps to reproduce the behavior:
1. Set --execution.caching.state-scheme=path
2. Set --persistent.db-engine=leveldb
3. Set --init.empty
4. Sync (a sketch of a matching invocation follows below).
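For reference, the steps above correspond roughly to an invocation like the following. This is a sketch, not the exact command from the report: the data-dir mount, L1 endpoint, and chain id (42161 = Arbitrum One) are placeholders/assumptions.

```
# Sketch of an invocation matching the repro steps above — not the
# reporter's exact command line. Mount path, L1 endpoint, and chain id
# are placeholders, not taken from the report.
docker run --rm -it \
  -v /data/arbitrum:/home/user/.arbitrum \
  offchainlabs/nitro-node:v3.2.1-d81324d-amd64 \
  --parent-chain.connection.url=https://our-l1-endpoint.example \
  --chain.id=42161 \
  --execution.caching.state-scheme=path \
  --persistent.db-engine=leveldb \
  --init.empty
```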
Expected behavior
A node synchronized to the current HEAD.
Screenshots
N/A
Additional context
These are our run flags:
Not wanting to give up on this, we synced a new node with standard leveldb/hash using the built-in latest-snapshot-downloader feature, and that worked very well. My sincerest compliments; other networks should take note. Being a sweet summer child deep down inside, I wanted to see if this would also work with the PebbleDB/PBSS flags set. It did not (obviously, as it downloads a leveldb/hash snapshot), but it was worth a shot given how smooth the sync process is.
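A minimal sketch of that snapshot-based sync, assuming the "built-in latest-snapshot-downloader" referred to here is Nitro's --init.latest option; the "pruned" kind, endpoint, and mount path are assumptions:

```
# Sketch: assumes the built-in snapshot downloader mentioned above is
# the --init.latest option; "pruned" requests the latest pruned
# snapshot. Endpoint and mount path are placeholders.
docker run --rm -it \
  -v /data/arbitrum:/home/user/.arbitrum \
  offchainlabs/nitro-node:v3.2.1-d81324d-amd64 \
  --parent-chain.connection.url=https://our-l1-endpoint.example \
  --chain.id=42161 \
  --init.latest=pruned
```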
We'd love to start using PebbleDB/PBSS as it's far superior in terms of stability, data integrity, and storage size.
Are we missing something? Is there a setting somewhere that needs to be set or tweaked?
Thank you, and have a great day ahead.
northwestnodes-eric changed the title from "PebbleDB + PBSS sync does not work" to "PebbleDB + PBSS sync does not work?" on Oct 1, 2024.
--execution.caching.state-scheme=path is currently experimental and has poor performance for Arbitrum chains. It is not recommended to use any parameter not explicitly mentioned on https://docs.arbitrum.io.
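Following that advice, a configuration that stays within documented options might look like the sketch below — assuming the JSON config file passed via --conf.file mirrors the dot-separated CLI flag names, and that omitting the state-scheme and db-engine flags leaves the node's defaults in place. The endpoint and chain id are placeholders.

```json
{
  "parent-chain": { "connection": { "url": "https://our-l1-endpoint.example" } },
  "chain": { "id": 42161 },
  "init": { "latest": "pruned" }
}
```

Such a file would be passed to the node with --conf.file=/path/to/config.json instead of spelling out each flag on the command line.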