Design for a recovery feature using other entropy-tss nodes as key-providers #1247
Comments
Instead of requesting a key, an easier flow (especially during spin-up) would be to create your own key, store it in memory (in the TDX machine), and then send it to other validators at periodic intervals to be stored in their kvdb. That sounds less secure, but because of the TDX it is actually more secure, and for me at least easier to manage. This would also let you "reshare" the key as validators leave, maybe at intervals like before an update, and the chain could perhaps manage the state of who holds what.
That being said, there is a chance of someone running multiple validators and getting the decryption key sent to themselves, which I don't like.
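For concreteness, here is a minimal Rust sketch of the periodic backup loop described in the comment above. The tokio runtime and the helper names (send_key_backup, current_validator_endpoints) are assumptions for illustration, not part of the proposal.

```rust
use std::time::Duration;
use tokio::time::interval;

/// Hypothetical client call that sends our key-value-store encryption key to
/// another validator for safekeeping; the transport and encryption are omitted.
async fn send_key_backup(validator_endpoint: &str, key: [u8; 32]) -> Result<(), String> {
    // In a real node this would be an HTTP POST over an encrypted channel,
    // accompanied by a TDX quote so the receiver can attest the sender.
    let _ = (validator_endpoint, key);
    Ok(())
}

/// Hypothetical lookup of the current validator set; a real implementation
/// would read this from the chain.
fn current_validator_endpoints() -> Vec<String> {
    Vec::new()
}

/// Periodically re-send our locally generated key to the current validators,
/// so backups track validators joining and leaving (e.g. around reshares).
async fn backup_loop(our_key: [u8; 32], backup_interval: Duration) {
    let mut ticker = interval(backup_interval);
    loop {
        ticker.tick().await;
        for endpoint in current_validator_endpoints() {
            if let Err(e) = send_key_backup(&endpoint, our_key).await {
                eprintln!("failed to back up key to {endpoint}: {e}");
            }
        }
    }
}
```

The interval could just as well be tied to reshares or updates rather than wall-clock time, as suggested above.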
I don't see why we couldn't do that. Either way, both TSS nodes end up knowing the key; I'm not sure it matters so much which of them generated it. It would complicate things a little more, as you then need separate HTTP routes for backup and restore. I'm less keen on having the chain manage state. It would make it easier to report bad behaviour, but I think it would make this feature take significantly longer to implement. Yeah, validators leaving is the difficult bit with this; I'm not sure how we can know that a node we are using to store our key has unbonded without polling the chain.
Thinking about it, I am uncomfortable with nodes storing other nodes' passwords in their kvdb, because they would potentially have access to them outside of the kvdb. I propose that a node send its password to be stored in the memory of another validator; that way it is only accessible inside the TDX machine, which makes the system more redundant without sacrificing security. Nodes could also hold the password in their own memory and reshare it at intervals, maybe after every reshare to the new signers, or before an update, etc.
I'm not too worried about this: in theory, since the kvdb is encrypted, the host operator should not have access to the backed-up keys. But to be on the safe side I have gone ahead and switched to storing them in-memory only. As for making multiple backups, I'm totally open to this, but I propose keeping it to just one for now to keep things simple and iterating on the design in follow-ups. I'm not very keen on the idea of specifically choosing the signers to hold backups, as there is a conflict of interest: each signer would hold both a keyshare and the keys to decrypt the other keyshares.
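As a small illustration of avoiding that conflict of interest, the backup holder could be picked from validators outside the current signer set. The function below and its inputs are hypothetical names, not an agreed design.

```rust
/// Pick a validator to hold our key backup, skipping ourselves and the current
/// signers, so no node ends up holding both a keyshare and a key that can
/// decrypt other keyshares.
fn choose_backup_holder(
    all_validators: &[String],
    current_signers: &[String],
    our_id: &str,
) -> Option<String> {
    all_validators
        .iter()
        .find(|v| v.as_str() != our_id && !current_signers.iter().any(|s| s == *v))
        .cloned()
}
```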
Ya for sure, just talking about where I see this going.
One thing I'm struggling with, and I'm just going to write it out: not keeping the TSS keys in the kvdb and requiring a check with the chain in order to restart, so we can be sure that nodes aren't colluding to skirt the TDX. That being said, really only the signers need this, as it would only prevent a reshare if a node goes down and comes back up.
The tricky thing with doing an on-chain attestation on restart is that either:
We have been discussing the problem of not having encrypted persistent storage with which to store the keyshares, which means we are vulnerable to losing keyshares if the VM needs to restart.
This is a proposal for a possible design of a recovery feature.
Each entropy-tss node acts as a key server, providing symmetric encryption keys for persistent storage for other entropy-tss nodes.
On entropy-tss launch:

- Make a request to another entropy-tss node's /request-encryption-key endpoint, giving our TSS account ID to be used as a lookup key under which to store the returned key for retrieval later, as well as a quote (the handler for this endpoint is explained below).
- On a subsequent launch, make the same request to /request-encryption-key to get the key for the key-value store.

In the /request-encryption-key HTTP route handler, the quote is checked and the key associated with the given TSS account ID is stored or returned (see the sketch below).

This relies on there being at least two entropy-tss nodes at genesis - otherwise there is a chicken-and-egg problem where the first node does not have another node to use as a key server.
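A minimal sketch of what the /request-encryption-key handler logic might look like, with the HTTP layer stripped away. The type names, the in-memory map of held keys, and the verify_quote placeholder are assumptions for illustration only.

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

use rand_core::{OsRng, RngCore};

/// Body of a request to /request-encryption-key.
pub struct EncryptionKeyRequest {
    /// TSS account ID, used as the lookup key under which the encryption key
    /// is stored for later retrieval.
    pub tss_account_id: [u8; 32],
    /// TDX quote from the requesting node (verification is stubbed out below).
    pub quote: Vec<u8>,
}

/// Symmetric keys held on behalf of other nodes, kept in memory only.
pub type HeldKeys = Arc<Mutex<HashMap<[u8; 32], [u8; 32]>>>;

/// Core handler logic: return the key already held for this TSS account ID if
/// there is one (recovery after a restart), otherwise generate a fresh key,
/// remember it under the account ID, and return it (first launch).
pub fn request_encryption_key(
    held_keys: &HeldKeys,
    request: EncryptionKeyRequest,
) -> Result<[u8; 32], String> {
    // A real handler would verify the TDX quote before handing out a key.
    verify_quote(&request.quote)?;

    let mut keys = held_keys.lock().map_err(|_| "poisoned lock".to_string())?;
    let key = keys.entry(request.tss_account_id).or_insert_with(|| {
        let mut fresh = [0u8; 32];
        OsRng.fill_bytes(&mut fresh);
        fresh
    });
    Ok(*key)
}

/// Placeholder for quote verification; the real check is out of scope here.
fn verify_quote(_quote: &[u8]) -> Result<(), String> {
    Ok(())
}
```

On launch, the requesting node would POST to this route on another validator and use the returned key to encrypt and decrypt its key-value store.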