Spin processors out into bespoke services #569
I ended up doing this now, as the existing Processor was incapable of surviving an audit with any decent marks. The Ethereum integration was also horribly kludged in, with open questions about how well it could actually be completed once token support was finished. The processor-smash branch represents a processor built as it should have been from the start, not one built to demonstrate what needed to be built and then modified after the fact. It has:
Of potential note is that it is still a work in progress, yet is about to wrap up.
A single codebase to maintain and audit, an internal standard for interoperability, and a lack of duplicated code: this was the reasoning for writing the processor, singular.
To quote Douglas Adams,
What really highlighted this is the Ethereum integration. It does not benefit from most of the fee code, due to using a relayer system (meaning Serai does not have to deduct the fee, solely calculate a fair fee rate). It does not benefit from the scheduling, due to being under the account model. It doesn't have branch/change addresses, as it has a constant external address deterministic to the first key generated.
It's all of this which effectively justifies a bespoke processor for Ethereum. While we currently set the fee to 0, causing the processor's fee code to effectively NOP, we now have to review every route and ensure it actually NOPs, as we do still call it (and simply expect a lack of effects).
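To make the distinction concrete, here is a minimal sketch of the two fee models described above. All names (`FeeModel`, `net_payout`) are illustrative assumptions, not actual Serai APIs: under the deducted model the fee comes out of the payout, while under the relayer model Serai only quotes a rate and the payout is untouched. Note how `fee: 0` makes the deducted path a NOP which is still executed.

```rust
// Hypothetical sketch of the two fee models the processor must handle.
// These names are illustrative, not Serai's actual types.
enum FeeModel {
    // UTXO-style chains: the fee is deducted from the outputs Serai creates.
    Deducted { fee: u64 },
    // Relayer-style (Ethereum here): the relayer pays the on-chain fee;
    // Serai only has to calculate a fair rate to quote.
    Relayer { quoted_rate: u64 },
}

fn net_payout(amount: u64, model: &FeeModel) -> u64 {
    match model {
        // Fee comes out of the payout itself. With fee == 0 this route
        // still runs, it just has no effect -- the NOP described above.
        FeeModel::Deducted { fee } => amount.saturating_sub(*fee),
        // Relayer covers the fee; the payout passes through untouched.
        FeeModel::Relayer { .. } => amount,
    }
}
```

The hazard is exactly that `Deducted { fee: 0 }` and `Relayer` produce the same number by different routes, so every deducted-fee route has to be audited for being a true no-op rather than simply skipped.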
Then, while working on the Ethereum integration, I realized: #470 (comment)
To quote directly here,
And it's with this that I finally have to accept the design philosophy of Godot.
Any universal API we try to create will be one we fight. While the current API is fine, and with the Scheduler modularity achieves our necessary goals, the optimal solution for any given integration will be specific to that integration. Accordingly, the sane thing is for the processor to become building blocks. A key gen service. A UTXO log scheduler. A SC-based account scheduler. A relayer fee system. A deducted fee system. Etc.
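The building-blocks idea can be sketched as small services behind traits, composed per integration rather than forced through one generic processor. This is a hypothetical illustration, assuming trait names and shapes that are not actual Serai code:

```rust
// Hypothetical building blocks -- each concern is its own small service.
// None of these names are actual Serai APIs.
trait KeyGen {
    fn generate(&mut self) -> [u8; 32];
}

trait Scheduler {
    fn schedule(&mut self, payment: u64);
    fn flush(&mut self) -> Vec<u64>;
}

// A UTXO-log scheduler batches payments into explicit outputs.
struct UtxoScheduler {
    queue: Vec<u64>,
}
impl Scheduler for UtxoScheduler {
    fn schedule(&mut self, payment: u64) {
        self.queue.push(payment);
    }
    fn flush(&mut self) -> Vec<u64> {
        std::mem::take(&mut self.queue)
    }
}

// An account-model scheduler (as Ethereum would use) can emit payments
// directly, with no branch/change outputs to track.
struct AccountScheduler;
impl Scheduler for AccountScheduler {
    fn schedule(&mut self, _payment: u64) {
        // Emitted immediately; nothing queued.
    }
    fn flush(&mut self) -> Vec<u64> {
        Vec::new()
    }
}
```

Each integration would then pick the blocks that fit it, rather than every integration routing through (and NOP'ing parts of) one universal pipeline.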
I won't claim these can't be re-composed back into a monolithic service at some point. I will claim I don't believe it's feasible to incrementally modularize the current processor into the solutions we want. We'd at least have to go back to the drawing board, explicitly declare optimal flows, then declare modules, then fit the processor to them. Alternatively, we can move to bespoke solutions (removing the requirement of maintaining a perfectly generic monolith) and, when we have the time, recompose later.
I say that while we also can't redo the processor like this before mainnet. This will probably be kicked to #565, which is slated for after mainnet :/