feat: Add backup and restore #1562
Conversation
Force-pushed 1184c31 to bf091de
Force-pushed 0a1214b to bfae40b
Only went over the first commit so far... will continue later.
@@ -542,3 +553,55 @@ pub async fn collaborative_revert_confirm(
    })?;
    Ok(Json(serialize_hex(&raw_tx)))
}

// TODO(holzeis): There is no reason the backup and restore api has to run on the coordinator. On
// the contrary it would be much more reasonable to have the backup and restore api run separately.
Definitely. Maybe we should, from the get-go, at least create a different subdomain/path/port for the backup, so that we are able to move it later.
As we are already using a reverse proxy, we could simply direct any request to `/api/backup` or `/api/restore` to a different service once we move it. This could be completely transparent to the app.
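The routing described here could look roughly like the following nginx sketch. The upstream name and port are made up for illustration and are not part of this PR:

```nginx
# Hypothetical: route backup/restore paths to a dedicated service,
# transparently to the app, which keeps talking to the same host.
location /api/backup {
    proxy_pass http://backup-service:3001;
}

location /api/restore {
    proxy_pass http://backup-service:3001;
}
```

Until the service is actually split out, both locations would simply point at the coordinator.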
The problem is though that we are hardcoding the IP in the app. If we were to use `backup.10101.finance` instead, we could move it independently.
> The problem is though that we are hardcoding the IP in the app.

Agreed, but this is unrelated to this change. E.g. we could change the configuration to use the domain instead of the IP, e.g. `api.10101.finance`, and then configure the reverse proxy to route any traffic on `api.10101.finance/api/backup` to another service.

One caveat I see in using a dedicated subdomain is that it would introduce another configuration. What benefits do you see in using a dedicated subdomain?
It will be hard to roll this out at a later stage or move the backup to a new server without breaking existing apps. I.e.:

1. `user_a` runs version 1.6.0 and backs up his app on 10101.finance.
2. We migrate the backup service to backup.10101.finance.
3. `user_a` cannot create/restore a backup before upgrading to the new version.

I guess breaking changes are OK. Are we handling errors correctly, i.e. what happens if the backup service is not working (e.g. for an old version because we moved it already)?
> We migrate the backup service to backup.10101.finance. `user_a` cannot create/restore a backup before upgrading to the new version.

I don't see an issue here. We can simply add an additional nginx config that points all the backup/restore requests from `user_a` to the new service. But let's not argue about that: if you feel strongly about it, I can add it as a follow-up to this PR.
> We can simply add an additional nginx config that points all the backup/restore requests from `user_a` to the new service.

Can we do this if the new service runs on a different machine/IP?
Yes, we would need to configure a redirect then.
Force-pushed bfae40b to 37b6cbe
@@ -38,25 +39,27 @@ class _LoadingScreenState extends State<LoadingScreen> {
if (isSeedFilePresent) {
  runBackend(context).then((value) {
    logger.i("Backend started");
  });

  if (!hasEmailAddress) {
Given the size of the PR, I think these changes should be in their own PR.
Hmm... you mean the full backup could have been split out into a separate PR?
if (isFullBackupRequired) {
  setState(() => message = "Creating initial backup!");
  fullBackup().then((value) {
    Preferences.instance.setFullBackupRequired(false).then((value) {
      start(context, position);
    });
The logic in this file is getting out of hand :(
Agreed 🙈. We probably should use some kind of wizard here.
Addressed with 168fcf4
Pretty awesome. I tried it and it works like magic 🪄🚀🌕
Force-pushed 620672c to 71bed57
LGTM except for:

- Encrypted backup scheme.
- Numerous type parameters and trait bounds added in `ln-dlc-node`. Maybe unavoidable for now, but it makes me feel uneasy.

I did not review some of the changes unrelated to the main feature.
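To illustrate the kind of generics being discussed: once storage is abstracted behind a trait, the type parameter and its bound tend to propagate into every struct that holds the storage. This is a simplified sketch with made-up names, not the actual `ln-dlc-node` API:

```rust
use std::collections::HashMap;

// Hypothetical storage trait; the on-disk (sled-backed) implementation
// would live in production while tests use the in-memory variant.
trait NodeStorage {
    fn put(&mut self, key: &str, value: Vec<u8>);
    fn get(&self, key: &str) -> Option<Vec<u8>>;
}

struct InMemoryStorage {
    map: HashMap<String, Vec<u8>>,
}

impl NodeStorage for InMemoryStorage {
    fn put(&mut self, key: &str, value: Vec<u8>) {
        self.map.insert(key.to_owned(), value);
    }
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.map.get(key).cloned()
    }
}

// Every type holding the storage now carries the parameter and bound.
struct Node<S: NodeStorage> {
    storage: S,
}

impl<S: NodeStorage> Node<S> {
    fn persist_channel(&mut self, id: &str, state: &[u8]) {
        self.storage.put(id, state.to_vec());
    }
}

fn main() {
    let mut node = Node {
        storage: InMemoryStorage { map: HashMap::new() },
    };
    node.persist_channel("chan-1", b"state");
    assert_eq!(node.storage.get("chan-1"), Some(b"state".to_vec()));
}
```

The upside of paying this generics tax is that tests can swap in the in-memory implementation without touching disk.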
dlc-sled-storage-provider = { git = "https://github.com/p2pderivatives/rust-dlc", rev = "4e104b4" }
p2pd-oracle-client = { git = "https://github.com/p2pderivatives/rust-dlc", rev = "4e104b4" }
dlc-trie = { git = "https://github.com/p2pderivatives/rust-dlc", rev = "4e104b4" }
simple-wallet = { git = "https://github.com/p2pderivatives/rust-dlc", rev = "4e104b4" }
🙃 It would have been nice to have a separate patch replacing these dependencies without changes. Then it's easier to judge the diff.
LGTM!
Force-pushed 55abcc3 to 139d91f
### Replace rust-dlc SledStorageProvider with DlcStorageProvider

This removes the dependency on the `dlc-sled-storage-provider` and the `simple-wallet`; however, it does not change the underlying implementation. The new DlcStorageProvider simply introduces a new abstraction layer to allow for replacing sled with another implementation, in preparation for hooking in the backup.

### Add data dir to config

Note, I didn't put it on the environments on the client side, as it requires the context to be initialized and that would have meant bigger changes than I wanted to deal with.

### Verify users intent

The user's intent is verified by checking the signature on the backup and restore messages.

### Symmetric backup cipher

Backups are encrypted and decrypted on the client side.

### Add full backup support

If the app is upgraded to the new version, it will run an initial full backup to ensure that the backup is complete.

- Combine states into a single state file

For testing I had to make the state mutable, as otherwise I wasn't able to start a new app node.

### Improve the app onboarding flow

- The user should not be required to input the email when restoring from a backup. Note, the solution is a bit hacky, and ideally we would also back up the preferences.
- Removed app bar from welcome screen
- Added visual improvements on the loading screen
- Wait for restore / initial backup to complete on loading screen
- Remove transitions from initial screens
- Remove splash screen once on the loading screen
This prevents the candlestick widget from sometimes not being initialized.
Setting viewportFraction: 1.0 uses the full width of the screen instead of only 80% (the default).
Force-pushed 139d91f to 4016071
Backup and restore

- Backup and restore `rust-lightning` and `rust-dlc` storage.
- `SledStorageProvider` and an `InMemoryDLCStoreProvider` implementation introducing an abstraction layer for storage.
- `simple-wallet` and `dlc-sled-storage-provider` crates.
- `ln_dlc_node` crate indifferent about the used storage, allowing us to use in-memory implementations of `rust-lightning` and `rust-dlc` data for the tests.
- `RemoteBackupClient` implementation that sends backups asynchronously to the coordinator.
- `rusqlite` dependency to create a read-only backup connection that creates an online backup after the following events: `PaymentClaimed`, `PaymentSent`, `PaymentFailed`, `PositionUpdateNotification`, `PositionClosedNotification`, `OrderUpdateNotification`, `OrderFilledWith`, `SpendableOutputs`.
- `user_backups`.
- Minor onboarding UI/UX improvements
- Removed app bar from welcome screen
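The event-driven backup triggering described above can be sketched roughly as follows. The enum and function are simplified stand-ins for the PR's actual types, not its real API:

```rust
// Hypothetical sketch: decide which node events warrant an online backup.
#[derive(Debug, Clone, Copy, PartialEq)]
enum NodeEvent {
    PaymentClaimed,
    PaymentSent,
    PaymentFailed,
    PositionUpdateNotification,
    PositionClosedNotification,
    OrderUpdateNotification,
    OrderFilledWith,
    SpendableOutputs,
    // Stand-in for events that do not change persisted state.
    Other,
}

fn triggers_backup(event: NodeEvent) -> bool {
    // All state-changing events above trigger a backup; the rest do not.
    !matches!(event, NodeEvent::Other)
}

fn main() {
    assert!(triggers_backup(NodeEvent::PaymentClaimed));
    assert!(triggers_backup(NodeEvent::SpendableOutputs));
    assert!(!triggers_backup(NodeEvent::Other));
}
```

In the PR, the backup itself is then sent asynchronously via the `RemoteBackupClient`, so the event handler is not blocked on the upload.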
Compatibility
The change is fully compatible with older versions. Tested with an upgrade from 1.5.0. However, for the upgrade path we have to run a full backup at startup. This is achieved through a preference setting, which is true by default unless the user is creating a new wallet or restoring a wallet.
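The upgrade-path decision boils down to a flag that defaults to true when it has never been written, and is explicitly cleared by the new-wallet and restore flows. A minimal sketch with illustrative names (the real code lives in the app's Dart preferences, not this API):

```rust
// Hypothetical model of the full-backup-required preference.
struct Preferences {
    // None = preference never written, i.e. an app upgraded from an
    // older version that predates the backup feature.
    full_backup_required: Option<bool>,
}

impl Preferences {
    /// Defaults to true so that upgraded installs run an initial full
    /// backup; new-wallet and restore flows set it to false explicitly.
    fn is_full_backup_required(&self) -> bool {
        self.full_backup_required.unwrap_or(true)
    }
}

fn main() {
    let upgraded = Preferences { full_backup_required: None };
    let fresh_wallet = Preferences { full_backup_required: Some(false) };
    assert!(upgraded.is_full_backup_required());
    assert!(!fresh_wallet.is_full_backup_required());
}
```

After the initial full backup succeeds, the flag is set to false so subsequent startups skip it.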
The dlc-store of the actions did not adhere to the same pattern as other data stored in the dlc sled storage (it didn't use a tree, but rather stored the data directly in the sled db). I opted to ignore that, since handling that case would have made the implementation more complex, while the actions can be reconstructed in an `on_channel_reestablish` event.

Testing
- `#[cfg(test)]`, as the tests are always compiled for `#[cfg(not(test))]`.
- `rust-dlc`, but noticed that they do not cover all scenarios. I found quite some issues during manual testing. However, my confidence in our `SledStorageProvider` is pretty high.
- `TenTenOneInMemoryStorage` for the `ln_dlc_node` crate tests, meaning that no `rust-lightning` or `rust-dlc` data is stored to disk during those tests.

Future Work
Demo
feat/backup-app
backup_and_restore.mov