Cache transmissions for faster processing #3395
base: mainnet-staging
Conversation
Commits:
- minimize diffs
- fix test code
- fix fmt
- don't update cache
- cache access speedup
Left a few suggestions, other than that LGTM in general 👍.
LGTM 👍
CI is failing. I suggest running
@@ -29,6 +29,9 @@ workspace = true
 version = "2.1"
 features = [ "serde", "rayon" ]

+[dependencies.lru]
+version = "0.12.1"
No need to change it, but just as a note to reviewers: the latest version (which will be used by cargo when building) is 0.12.4.
Force-pushed from 8f272d5 to bbd10c1.
Thanks for the feedback! I've updated the code.
Motivation

The `prepare_advance_to_next_quorum_block` function incurs significant processing time, which contributes to increased block times. Based on a flamegraph analysis, the time-consuming operations are the deserialization of objects or, more generally, the retrieval of transmissions from storage. Therefore, this PR introduces an LRU cache for transmissions inside the `BFTPersistentStorage`. Transmissions are inserted into the cache in the `insert_transmissions` function, alongside the insertion into persistent storage. The `get_transmission` function first tries to retrieve the transmission from the LRU cache and, if that fails, falls back to the persistent storage.

Test Plan

Ran it locally, and also ran a load test (high TPS of `transfer_public` executions) on AWS. There, we observed a ~50% reduction in block times.

Please note: the cache aims to cover two rounds of transmissions. As such, it can store `MAX_COMMITTEE_SIZE * MAX_TRANSMISSIONS_PER_BATCH * 2 = 1500` transmissions. At the current `MAX_TRANSACTION_SIZE` of 128 kB, this results in approximately 188 MB of additional memory use.
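The read-through pattern described in the PR can be sketched as follows. This is a simplified, self-contained illustration using a std-only LRU rather than the `lru` crate the PR actually depends on; the `CachedStorage` type, the `u64` IDs, and the `String` transmissions are illustrative stand-ins, not the real snarkOS types.

```rust
use std::collections::{HashMap, VecDeque};

// Simplified read-through LRU cache in front of a persistent store.
// (Illustrative only: the PR uses the `lru` crate inside `BFTPersistentStorage`.)
struct CachedStorage {
    capacity: usize,
    cache: HashMap<u64, String>,      // transmission ID -> cached transmission
    order: VecDeque<u64>,             // least-recently-used ID sits at the front
    persistent: HashMap<u64, String>, // stand-in for the on-disk store
}

impl CachedStorage {
    fn new(capacity: usize) -> Self {
        Self { capacity, cache: HashMap::new(), order: VecDeque::new(), persistent: HashMap::new() }
    }

    // Mark `id` as most recently used.
    fn touch(&mut self, id: u64) {
        self.order.retain(|&k| k != id);
        self.order.push_back(id);
    }

    // Write to the persistent store and the cache together,
    // mirroring the `insert_transmissions` behavior described above.
    fn insert_transmission(&mut self, id: u64, tx: String) {
        self.persistent.insert(id, tx.clone());
        if self.cache.len() >= self.capacity && !self.cache.contains_key(&id) {
            if let Some(evicted) = self.order.pop_front() {
                self.cache.remove(&evicted);
            }
        }
        self.cache.insert(id, tx);
        self.touch(id);
    }

    // Try the cache first; on a miss, fall back to the persistent store,
    // mirroring the `get_transmission` behavior described above.
    fn get_transmission(&mut self, id: u64) -> Option<String> {
        if let Some(tx) = self.cache.get(&id).cloned() {
            self.touch(id);
            return Some(tx);
        }
        self.persistent.get(&id).cloned()
    }
}

fn main() {
    let mut store = CachedStorage::new(2);
    store.insert_transmission(1, "a".into());
    store.insert_transmission(2, "b".into());
    store.insert_transmission(3, "c".into()); // evicts ID 1 from the cache
    assert!(!store.cache.contains_key(&1));
    // A cache miss for ID 1 still succeeds via the persistent store.
    assert_eq!(store.get_transmission(1), Some("a".into()));
    assert_eq!(store.get_transmission(3), Some("c".into()));
    println!("cache hit/miss behavior OK");
}
```

The key property is that eviction only discards the cached copy; correctness never depends on a hit, because every transmission is also written to the persistent store.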