Aggressively shrink ancient storages when shrink isn't too busy. #2946
base: master
Conversation
Force-pushed from 9a2729a to 838e063
Force-pushed from 4d869c2 to 003ba1f
@dmakarov, where did you end up on this? We can move this forward Monday. Hopefully you have a machine running this?
I had it running on my dev machine for a few days. Now I added stat counters and will restart it with the new stats. I'd like to experiment a bit more with this.
Force-pushed from b18e934 to 1bfd6d0
Please add a test that shows we add an ancient storage to shrink. It may be helpful to refactor the shrink fn so that it calculates the storages to shrink in a separate fn, so we can just check the output of that fn. Or you can do a fuller test which actually verifies the capacity is what you expect after running shrink. There should be tests that do this similarly already; look for tests that call
Yes, I'm working on it.
This PR should have zero effect unless skipping rewrites is enabled by CLI.
The idea in the PR looks good to me.
I don't see too much downside for adding one ancient to shrink when we are not busy.
accounts-db/src/accounts_db.rs
Outdated
&& *capacity == store.capacity()
&& Self::is_candidate_for_shrink(self, &store)
{
    *capacity = 0;
Can we not overload the u64, and instead create an enum to indicate whether this storage is pre- or post-shrunk?
I don't know. Isn't capacity checked for being 0 in other logic? If we add an enum, wouldn't we still have to set capacity to 0 here for other code to work correctly?
We are overloading the 0 here, yes. We could use an enum, and it would be much clearer what we're trying to do:
{AlreadyShrunk, CanBeShrunk(capacity: u64)}
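The suggested enum might look something like the following sketch. The type and function names here are illustrative, not the actual accounts-db types; the point is that "already shrunk" becomes an explicit state instead of a magic `capacity == 0`.

```rust
// Hypothetical sketch of the reviewer's enum suggestion. The overloaded
// `u64` (where setting capacity to 0 meant "already shrunk") becomes an
// explicit two-state enum.
#[derive(Debug, PartialEq)]
enum ShrinkCandidate {
    /// Already shrunk in this pass; skip on later iterations.
    AlreadyShrunk,
    /// Still shrinkable; keeps the capacity recorded at selection time
    /// so it can be compared against the storage's current capacity.
    CanBeShrunk { capacity: u64 },
}

/// Marks the candidate as shrunk if its recorded capacity still matches
/// the storage's current capacity; returns whether it was marked.
fn try_mark_shrunk(candidate: &mut ShrinkCandidate, store_capacity: u64) -> bool {
    if let ShrinkCandidate::CanBeShrunk { capacity } = *candidate {
        if capacity == store_capacity {
            *candidate = ShrinkCandidate::AlreadyShrunk;
            return true;
        }
    }
    false
}

fn main() {
    let mut c = ShrinkCandidate::CanBeShrunk { capacity: 4096 };
    assert!(try_mark_shrunk(&mut c, 4096));
    assert_eq!(c, ShrinkCandidate::AlreadyShrunk);
    // A second attempt is a no-op: the state is explicit, not a magic 0.
    assert!(!try_mark_shrunk(&mut c, 4096));
}
```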
We could also just remove the element from the vec, but that could be expensive. I was assuming marking it as 'already shrunk' would be sufficient. Maybe none of this is necessary, because we'll see that the new capacity doesn't match the old capacity and skip it anyway. Then we don't need to iterate mutably at all and we can just iterate. That seems simplest of all, and we already have to handle that case anyway.
This does cause us to look up way more storages.
An oldie but goodie: https://en.wikichip.org/wiki/schlemiel_the_painter%27s_algorithm
What is the suggested change? Not to change capacity?
An enum is fine with me. So is iterating. Alternatively, keep the vec sorted in reverse and pop the last element off the end, reducing the count. This would not require a re-allocation and would avoid revisiting ancient storages we already shrunk.
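The "sorted in reverse, pop off the end" idea above could be sketched roughly like this (the tuple layout and function name are hypothetical): with candidates sorted so the most-dead-bytes storage sits at the end, selecting the next one is an O(1) `Vec::pop` with no re-allocation, and a popped storage is never revisited.

```rust
// Hypothetical sketch: candidates as (slot, dead_bytes) pairs, kept
// sorted ascending by dead bytes so the best candidate is last.
fn pop_best_candidate(candidates: &mut Vec<(u64, u64)>) -> Option<(u64, u64)> {
    // O(1), shrinks the logical length, no re-allocation, and the
    // popped storage is never revisited on later passes.
    candidates.pop()
}

fn main() {
    let mut candidates = vec![(10u64, 100u64), (12, 900), (11, 250)];
    // Sort ascending by dead bytes so the largest ends up at the back.
    candidates.sort_by_key(|&(_slot, dead_bytes)| dead_bytes);
    assert_eq!(pop_best_candidate(&mut candidates), Some((12, 900)));
    assert_eq!(candidates.len(), 2);
}
```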
It looks like I need to rebase to fix the vulnerability check errors.
Rebased to resolve conflicts. I'm still working on unit tests. When ready, I'll renew the review requests. Thanks.
Conflicts: no choice but to rebase.
Do you mean the capacity of an ancient slot added to the shrink candidates should be 0, or should the test verify some other capacity? If the latter, what capacity did you mean?
The capacity of the storage itself shrinks to what you'd expect.
accounts-db/src/accounts_db.rs
Outdated
// At this point the dead ancient account should be removed
// and storage capacity shrunk to the sum of live bytes of
// accounts it holds. This is the data lengths of the
// accounts plus the length of their metadata.
Can we scan the storage to ensure the modified_account_pubkey is not present?
Do you have any particular scan method in mind, or just iterating over stored_accounts? Please be more specific.
Also, can you explain why checking the capacity of the storage is not sufficient in this specific test case?
> can you explain why checking the capacity of the storage is not sufficient in this specific test case?
So for two reasons:
- The sizes are specific to the AppendVec storage format. This means when we move to Tiered Storage, we'll need to tweak the test to support both formats, as the stored sizes will be different.
- To be defensive against refactors. To me the test reads as if it is trying to ensure the dead ancient account has been removed from the shrunk storage. And the way to guarantee this is by checking the accounts in the storage and making sure none of them is the to-be-removed one.
> do you have any particular scan method in mind or just iterating over stored_accounts?
Yes, I think iterating over stored accounts will work here.
Another option would be to do something like this:
store.accounts.scan_pubkeys(|pubkey| {
assert_ne!(pubkey, modified_account_pubkey);
});
And more info related to the 136-byte storage overhead: in append_vec.rs there is this function:
append_vec::aligned_stored_size(data_len)
which can handle both the 136-byte overhead and the 8-byte alignment requirement for AppendVec entries. So we could use that in the assert as:
assert_eq!(created_accounts.capacity, append_vec::aligned_stored_size(1000) + append_vec::aligned_stored_size(2000));
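To make the arithmetic above concrete, here is a standalone sketch of what an `aligned_stored_size`-style helper computes, using the 136-byte overhead and 8-byte alignment from the discussion. This is an illustration, not the actual append_vec.rs code; the real constants live there.

```rust
// Illustrative reimplementation of the aligned-size calculation
// discussed above. Constants mirror the conversation (136-byte entry
// metadata overhead, 8-byte alignment); the authoritative values are
// in append_vec.rs.
const STORE_META_OVERHEAD: u64 = 136;
const ALIGN: u64 = 8;

/// Size an account entry occupies in the storage: data length plus the
/// fixed metadata overhead, rounded up to the alignment boundary.
fn aligned_stored_size(data_len: u64) -> u64 {
    let unaligned = STORE_META_OVERHEAD + data_len;
    (unaligned + ALIGN - 1) & !(ALIGN - 1)
}

fn main() {
    // 1000-byte account: 1000 + 136 = 1136, already 8-byte aligned.
    assert_eq!(aligned_stored_size(1000), 1136);
    // 2000-byte account: 2000 + 136 = 2136, already aligned.
    assert_eq!(aligned_stored_size(2000), 2136);
    // 3-byte account: 3 + 136 = 139, rounds up to 144.
    assert_eq!(aligned_stored_size(3), 144);
}
```

Written this way, the expected post-shrink capacity in the test is just the sum of the aligned sizes of the live accounts, independent of hand-computed byte counts.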
(
    "initial_candidates_count",
    self.initial_candidates_count.swap(0, Ordering::Relaxed),
    i64
),
Can all the changes to this file be reverted?
why?
It looks like there are no functional changes, just moved lines. But I still had to look and spend time to realize there were no actual changes to the metrics. IMO this feels orthogonal to the problem this PR is solving. In other words, I don't understand why this file was changed.
I reviewed the new test. It looks correct to me.
Thanks!
Problem
Ancient packing when skipping rewrites has some non-ideal behavior.
It can sometimes be true that an ancient storage never meets the 90%(?) threshold for shrinking. However, every dead account that an ancient storage keeps present remains in the in-memory index and starts a chain reaction: other accounts, such as zero-lamport accounts, must also be kept alive.
Summary of Changes
Add another slot for shrinking when the number of shrink-candidate slots is too small (fewer than 10). The additional slot's storage is the one with the largest number of dead bytes. This aggressively shrinks ancient storages even when they are below the normal threshold, allowing the system to move toward the ideal of storing each non-zero account exactly once and having no zero-lamport accounts.
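The selection rule described above can be sketched as follows. All names, the tuple layout, and the threshold constant are illustrative stand-ins for the actual accounts-db code; only the behavior (add the most-dead-bytes ancient storage when fewer than 10 candidates were selected) comes from this PR description.

```rust
// Hypothetical sketch of the PR's selection rule: if the normal shrink
// pass picked fewer than 10 candidates, additionally pick the ancient
// storage with the largest number of dead bytes, even if it is below
// the normal shrink threshold.
const MIN_SHRINK_CANDIDATES: usize = 10;

/// `candidates` holds slots already chosen for shrink; `ancient` holds
/// (slot, dead_bytes) pairs for ancient storages not already selected.
fn add_ancient_candidate(candidates: &mut Vec<u64>, ancient: &[(u64, u64)]) {
    if candidates.len() < MIN_SHRINK_CANDIDATES {
        // Shrink is not busy: pick the ancient storage with the most
        // dead bytes as one extra candidate.
        if let Some(&(slot, _dead)) = ancient.iter().max_by_key(|&&(_, dead)| dead) {
            candidates.push(slot);
        }
    }
}

fn main() {
    let mut candidates = vec![1u64, 2, 3]; // only 3 candidates: not busy
    let ancient = [(100u64, 500u64), (101, 9000), (102, 42)];
    add_ancient_candidate(&mut candidates, &ancient);
    // Slot 101 has the most dead bytes, so it gets appended.
    assert_eq!(*candidates.last().unwrap(), 101);
}
```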
reworked #2849