Staking pallet #373
Conversation
akildemir commented Sep 16, 2023
- Implements stake/unstake
- Allocate/deallocate to a specific network's validator set
- Validator set rotation for the Serai network
Force-pushed from 62aae98 to 2ad4ca2
substrate/staking/pallet/src/lib.rs
Outdated
// add to participants list for the network
let result = VsPallet::<T>::add_participant(account, Amount(amount), network);
if result.is_err() {
  Self::deallocate_internal(&account, amount).unwrap();
If this returns an error, all state changes from the call will be discarded (since paritytech/substrate#11431). Accordingly, you should simply be able to ? the error, without this restoration code (which may have its own bug).
You mean VsPallet::<T>::add_participant(account, Amount(amount), network) directly returns the DispatchResult, like in deallocate, and this will revert the allocation we did if it returns an error?
// Since this participant will be included after the next session, save it for the set
// where it will be active.
let participant_set =
  ValidatorSet { session: Session(Self::current_session().0 + 2), network };
+ 1 if not Serai, as only Serai has this limitation?
Though also, it's frustrating that it'll take two weeks to join the Serai validator set. Do we want to consider halving the discussed session time from 7 days to 3.5, yet only rotating external networks every other session? That'd set a maximum of 7 days to join the Serai set (not 2 weeks), while maintaining the 7 day interval for external networks.
Yeah, we can set the rotation time to 3.5 days, but it might still be better for other networks to follow the Serai net? That would purify the design, since all the commands would be coming from the session pallet. Otherwise we're kind of looking at implementing our own session pallet for other networks, and if we were going to do that, we could also use it for the Serai net as well. But yeah, migration taking 6 hrs every 3.5 days is also a problem in that case.
ValidatorSet { session: Session(Self::current_session().0 + 2), network };
JoiningValidators::<T>::try_mutate_exists(participant_set, |existing| {
  if existing.is_some() {
    existing.as_mut().unwrap().push((account, amount));
They may already be present?
Also, this is an O(n) algorithm. If we move to keying off ValidatorSet and account, I believe you can have O(1) insertion (well, the complexity would be the underlying DB's, which should be O(log n), yet we can handwave it) yet still trivially do an O(n) iteration over the storage prefix. We should at least TODO for this.
I didn't understand why it would be O(n); we are not iterating through anything, if I am not mistaken.
None => {
  // check whether it is still in the joining set before getting active
  let joining_set = JoiningValidators::<T>::get(key);
  *joining_set.iter().find(|p| p.0 == account).ok_or(Error::<T>::NonExistentValidator)?
This seems error-prone (removing EITHER from current or joining, yet still going through DeallocatingValidators). If they have yet to join, can't we deallocate in place without DV?
Uhh, we should be able to through a callback? The problem was, since staking depends on vs-pallet, we couldn't call the staking pallet's function from it. Idk how desirable a callback is tho.
// append them all together
joining.extend(prev.participants);
let last_add_index = joining.len() - 1;
joining.extend(deallocating);
Why is deallocating merged into joining?
All 3 vectors are merged into 1 so as not to repeat the following block of code.
Uses the fact that the underlying DB is sorted to achieve sorting of potential validators by stake. Removes release of deallocated stake for now.