
filestore - shutdown #107

Open
awb99 opened this issue Jul 23, 2023 · 3 comments

awb99 commented Jul 23, 2023

The filestore supports async operations. I am trying to make sure my app shuts down correctly, and I could not find a disconnect function for the filestore.

As such, it might happen that, when I shut down my app, some threads are still writing, and terminating them forcefully might corrupt the filestore.

Any comments?
Thanks!!

whilo (Member) commented Oct 25, 2023

I see. So far we have mostly provided disconnect to support freeing an underlying resource so it can be reused, not necessarily to reject further operations on the store (assuming the process using it winds itself down correctly). Would this be feasible for you? Of course, it would be possible to add code that throws an exception after disconnect.

awb99 (Author) commented Oct 27, 2023

I want to avoid corruption of the konserve db. I would like to have a safe shutdown feature: once "shutdown" is called, I am safe to terminate the app.

whilo (Member) commented Oct 28, 2023

Single values in konserve are written atomically, so you are safe to die at any point during operation without them ending up in a corrupted state (unless you use sync-blob? false or in-place? true). If you write multiple values that need to be consistent with each other, then a disconnect function would also not guarantee safety, because you do not have control over failures in general. In that case you need to make sure that you first write everything you need to refer to in a new place and then do an atomic swap. This is what we do in both replikativ and datahike (i.e. first write all new persistent index nodes, then atomically swap a pointer of the db record itself). I think you should handle it similarly if you can. Does that make sense?
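The write-everything-then-atomic-swap pattern described above can be sketched generically. The following Python sketch is purely illustrative and makes no claims about konserve's internals: the `ROOT` pointer file name, the JSON encoding, and both function names are assumptions for the example. The key idea is that the new value is fully written and fsynced to a temporary file first, and only then made visible via an atomic rename, so a crash at any point leaves readers with either the old root or the new one, never a partial write.

```python
import json
import os
import tempfile


def commit_atomically(db_dir, new_root):
    """Write a new root value, then atomically swap the pointer file.

    Illustrative sketch of the write-then-swap pattern; file names and
    encoding are assumptions, not konserve's actual on-disk format.
    """
    os.makedirs(db_dir, exist_ok=True)
    # 1. Write the new value to a fresh temporary file in the same directory
    #    (rename is only atomic within a single filesystem).
    fd, tmp_path = tempfile.mkstemp(dir=db_dir, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(new_root, f)
            f.flush()
            os.fsync(f.fileno())  # ensure the bytes reached the disk
        # 2. Atomically publish it; os.replace is atomic on POSIX.
        os.replace(tmp_path, os.path.join(db_dir, "ROOT"))
    except BaseException:
        os.unlink(tmp_path)  # clean up the unpublished temp file
        raise


def read_root(db_dir):
    """Read the currently published root value."""
    with open(os.path.join(db_dir, "ROOT")) as f:
        return json.load(f)
```

With this structure, killing the process between steps 1 and 2 simply leaves a stray temp file behind; the published `ROOT` is never half-written, which is why a forced shutdown cannot corrupt the pointer.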
