memdb: Make iterators not channel based #188
Comments
Looked into this some more; it definitely appears that our benchmark delay (50x) is coming from here, and it probably isn't from unclosed iterators. In the benchmark there are 859926 created iterators, and the internal function registered closing at least 846118 of them. I highly doubt that the last ~13k unclosed iterators are the delay, seeing as pprof shows the delay is in mstart. Unfortunately the way google/btree's iterator is written essentially forces our iterator to be written the way it is. However, I think the performance demand here merits switching the library. I recommend we switch to tidwall's btree library (which is imo a significant improvement over google's btree), and as a first pass use its path hint logic to make this iterator fully single threaded. Then if/when we want more performance, it's a very easy lift.
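For illustration, here is a minimal sketch of what a channel-free, fully single-threaded memdb iterator over tidwall/btree could look like. This assumes tidwall/btree's generic API (NewBTreeG, and the stateful Iter with Seek/First/Next/Item/Release); the item type, bounds handling, and function names are illustrative, not the actual tm-db code.

```go
package main

import (
	"fmt"

	"github.com/tidwall/btree"
)

// item is a hypothetical key/value pair; tm-db's real entry type may differ.
type item struct {
	key, value []byte
}

// memIterator walks the tree on the caller's goroutine: no background
// goroutine, no channel, so Next() is a plain method call.
type memIterator struct {
	iter  btree.IterG[item]
	valid bool
	end   []byte // exclusive upper bound; nil means unbounded
}

func newMemIterator(tr *btree.BTreeG[item], start, end []byte) *memIterator {
	it := tr.Iter()
	var valid bool
	if start != nil {
		valid = it.Seek(item{key: start}) // position at first key >= start
	} else {
		valid = it.First()
	}
	m := &memIterator{iter: it, valid: valid, end: end}
	m.checkBound()
	return m
}

func (m *memIterator) checkBound() {
	if m.valid && m.end != nil && string(m.iter.Item().key) >= string(m.end) {
		m.valid = false
	}
}

func (m *memIterator) Valid() bool { return m.valid }

// Key and Value assume the caller has checked Valid() first.
func (m *memIterator) Key() []byte   { return m.iter.Item().key }
func (m *memIterator) Value() []byte { return m.iter.Item().value }

func (m *memIterator) Next() {
	if m.valid {
		m.valid = m.iter.Next()
		m.checkBound()
	}
}

func (m *memIterator) Close() { m.iter.Release() }

func main() {
	tr := btree.NewBTreeG(func(a, b item) bool {
		return string(a.key) < string(b.key)
	})
	for _, k := range []string{"a", "b", "c"} {
		tr.Set(item{key: []byte(k), value: []byte("v-" + k)})
	}
	it := newMemIterator(tr, []byte("a"), []byte("c")) // iterates [a, c)
	defer it.Close()
	for ; it.Valid(); it.Next() {
		fmt.Printf("%s=%s\n", it.Key(), it.Value())
	}
}
```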
If the change can be introduced in a non-breaking way we could make this happen; otherwise a similar issue should be opened in the sdk, as the memdb was added there as well. Love to see a db meant for testing purposes used in prod lol
Oh yikes lol. I thought it was already used throughout the codebase; otherwise I would've just re-imported a BTree :( This can be added in a non-breaking way.
the memdb in the sdk is slated for another release. If you want this change now, it would have to happen here
agreed/understood, working on it here! |
No perms to re-open this. Can this remain open though? I am very much hoping to find someone with availability to work on this; it's very much needed. This problem causes golang's race detector to catch lots of false positives =/
I was investigating with traces, and basically ~all of our time is spent in dead time, with these chan.recv calls and mutex unlocks aligning, and processes starting and stopping, between here and the IAVL iterator. This would also explain all the crazy delays in queries that involve iteration. I'm going to prioritize patches for these; maybe our nodes will finally be saved 😅
@ValarDragon I missed this somehow. We'll begin work on it, but we expect this to be state breaking, right? |
It's not state breaking. This is purely an in-memory representation. I don't think this is where the big wins are though, as it only impacts the cacheKV store.
The thinking is to start at the bottom and work our way up, especially since I'm learning as I go. I figure that even if this dependency (tm-db) is slated to be deprecated, it has a year left in its lifespan: (time wasted on perf) x 100+ engineers = worth it, I guess?
Ascend and Descend are now public functions in tidwall/btree
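For reference, a hedged sketch of that callback-style API (signatures as I understand tidwall/btree's generic interface; the item type is made up for illustration):

```go
package main

import (
	"fmt"

	"github.com/tidwall/btree"
)

type item struct{ key, value []byte }

func main() {
	tr := btree.NewBTreeG(func(a, b item) bool {
		return string(a.key) < string(b.key)
	})
	tr.Set(item{[]byte("a"), []byte("1")})
	tr.Set(item{[]byte("b"), []byte("2")})

	// Ascend visits items >= pivot in order until the callback returns
	// false; Descend mirrors it in reverse.
	tr.Ascend(item{key: []byte("a")}, func(it item) bool {
		fmt.Printf("%s=%s\n", it.key, it.value)
		return true
	})
}
```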
Yes, but the CacheKV store is not where any bottleneck is coming from right now. |
but there's no
Currently the memdb iterator is channel based: https://github.com/tendermint/tm-db/blob/master/memdb/iterator.go#L41-L86
This causes an extra thread lying around, and lots of dead time lost to channel waits, in what should be a really fast hot loop. In the cacheKVStore use case, the single-core time for a Next() operation is 2-3x worse than before, and it's actually using two threads, so potentially 4-6x (perhaps double-counting, it's a bit hard to tell haha). Looking at pprof, the actual execution spends ~no time in the btree methods themselves.
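To make the cost concrete, here is a rough sketch of the channel-based shape being criticized (not the exact tm-db code; the types and names are illustrative): a background goroutine walks the tree and sends each pair over an unbuffered channel, so every Next() pays a cross-goroutine send/recv rendezvous.

```go
package main

import (
	"context"
	"fmt"
)

type kvPair struct{ key, value []byte }

// chanIterator: each Next() is a blocking channel receive paired with a
// channel send from the producer goroutine.
type chanIterator struct {
	ch     chan kvPair
	cancel context.CancelFunc
	cur    *kvPair
}

// newChanIterator takes a walk function standing in for a btree ascend.
func newChanIterator(walk func(yield func(k, v []byte) bool)) *chanIterator {
	ctx, cancel := context.WithCancel(context.Background())
	ch := make(chan kvPair)
	go func() {
		defer close(ch)
		walk(func(k, v []byte) bool {
			select {
			case ch <- kvPair{k, v}: // per-item scheduler rendezvous
				return true
			case <-ctx.Done(): // bail out if Close() was called early
				return false
			}
		})
	}()
	it := &chanIterator{ch: ch, cancel: cancel}
	it.Next() // prime the first element
	return it
}

func (it *chanIterator) Next() {
	if pair, ok := <-it.ch; ok {
		it.cur = &pair
	} else {
		it.cur = nil
	}
}

func (it *chanIterator) Valid() bool   { return it.cur != nil }
func (it *chanIterator) Key() []byte   { return it.cur.key }
func (it *chanIterator) Value() []byte { return it.cur.value }
func (it *chanIterator) Close()        { it.cancel() }

func main() {
	it := newChanIterator(func(yield func(k, v []byte) bool) {
		for _, k := range []string{"a", "b", "c"} {
			if !yield([]byte(k), []byte("v-"+k)) {
				return
			}
		}
	})
	defer it.Close()
	for ; it.Valid(); it.Next() {
		fmt.Printf("%s=%s\n", it.Key(), it.Value())
	}
}
```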
This extra thread also makes debugging performance issues harder, and saturates other cores, which is pretty sub-ideal as we try to push more code to be multi-threaded!
The only thing the channel structure buys is letting two different processes read from the same iterator, with their reads interleaved. That does not make sense as a use case to me. You really shouldn't have multiple parallel processes reading from the same iterator in a blocking fashion as a general API; if this is needed, it should be pushed to the iterator consumer. In general, parallelism should push for each parallel process 'owning' the data it's processing, with care taken when it doesn't get to own the data for the duration of its execution. (E.g. Rust's par_iter_mut has each process get one contiguous 'chunk' of the iterable space, not interleaved items. This is beneficial for data ownership/simplicity, reduced dead time, and cache efficiency.)
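As a small illustration of that chunk-ownership pattern (an illustrative Go sketch, not tm-db code; uses the built-in min, so Go 1.21+):

```go
package main

import (
	"fmt"
	"sync"
)

// parallelApply splits items into contiguous chunks so each goroutine
// exclusively owns the data it processes: no interleaving, and no shared
// channel in the hot loop.
func parallelApply(items []int, workers int, f func(*int)) {
	var wg sync.WaitGroup
	chunk := (len(items) + workers - 1) / workers
	for start := 0; start < len(items); start += chunk {
		end := min(start+chunk, len(items)) // built-in min, Go 1.21+
		wg.Add(1)
		go func(part []int) { // part is this goroutine's private slice view
			defer wg.Done()
			for i := range part {
				f(&part[i])
			}
		}(items[start:end])
	}
	wg.Wait()
}

func main() {
	data := make([]int, 10)
	for i := range data {
		data[i] = i
	}
	parallelApply(data, 3, func(p *int) { *p *= 2 })
	fmt.Println(data)
}
```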