
Ideas for how to do an optional backup? #6

Open
justinclift opened this issue Jul 27, 2023 · 19 comments

@justinclift
Member

At present, the docker container does an in-place upgrade, with no capability for backing things up (just in case).

We should probably support some kind of useful, optional, backup capability.

So far, my initial thinking is to have some environment variable to define a new mount point (PGAUTO_BACKUPDIR or similar), and if that's set to a valid value pointing at an empty directory then we use it.

For the backup type, I reckon we should just go with a "full data copy" approach rather than a pg_dump. At least for now.

It should (in theory) be reasonably simple. We'd use that "backupdir" as part of the pg_upgrade file moving process, running pg_upgrade without the --link option so things aren't upgraded in place.

That way, the backup gets done, and the new database ends up in the correct directory. The downsides are the (potentially significant) extra space that gets used, and the extra time the whole upgrade process will take.

Ideally, we'd probably recommend people use a 2nd bind-mounted directory for that "backupdir", so the backup doesn't disappear when the container stops.
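As a very rough sketch of how the copy-vs-link decision could look (PGAUTO_BACKUPDIR is just the proposed name, and the other variables are placeholders for whatever the entrypoint already uses):

```sh
# Sketch only: decide between an in-place (--link) upgrade and a copying
# upgrade, based on the proposed PGAUTO_BACKUPDIR variable.
if [ -n "${PGAUTO_BACKUPDIR}" ] && [ -d "${PGAUTO_BACKUPDIR}" ] \
    && [ -z "$(ls -A "${PGAUTO_BACKUPDIR}")" ]; then
  # Backup dir is set, exists, and is empty: let pg_upgrade copy the files
  # so the old cluster stays intact as the backup.
  LINK_OPT=""
else
  LINK_OPT="--link"
fi

pg_upgrade ${LINK_OPT} \
  --old-datadir "${OLD_DATADIR}" --new-datadir "${NEW_DATADIR}" \
  --old-bindir "${OLD_BINDIR}" --new-bindir "${NEW_BINDIR}"
```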

Thoughts? 😄

@justinclift changed the title from "Ideas on how to do an optional backup?" to "Ideas for how to do an optional backup?" on Jul 27, 2023
@brbrown25

I think it makes sense to have a backupdir option and maybe an auto recover option, so if the upgrade fails it restores the backup into the original location.

@justinclift
Member Author

Interesting idea. Yeah, it probably wouldn't be too hard to automatically recover the original data back to the starting location.

It'll need some more detailed thinking when it comes time to implement it. 😄

@j-hellenberg

Hey, is there any chance some backup option as proposed here will be added in the near future? We are considering using this, but we're a little worried that something will eventually go wrong, and we'd like to have a very quick rollback option available just in case 😃

@justinclift
Member Author

@j-hellenberg What's the approach you're currently using for backing up the database? 😄

@justinclift
Member Author

@j-hellenberg Oh, as another relevant data point, how big are your databases on disk generally, and do their disk partitions have enough free space for 2 full copies of the database?

@j-hellenberg

@justinclift Currently, we are performing full backups of our Kubernetes Persistent Volumes once per week. Losing up to a week's worth of data is manageable, though, obviously, far from ideal. So far, we got lucky and did not have to use one of these backups.

Our database sizes are still pretty manageable; I think the biggest is about 1GB on disk. The data volumes are all large enough to hold two full database copies. Still, I think it makes sense for pgautoupgrade to allow specifying a directory for a backup via a configuration option, as you proposed. In our case, we could then easily mount another volume into the container to be used for backups if we wanted to.

Side note: You mentioned that you only wanted to use a directory for backups if it is empty. There are certainly use cases where that makes sense, but here's another option, just as an idea: we could allow overwriting an old backup if the current version Y of the database is newer than the version X of the backup. In that case, we are essentially already one upgrade step ahead (upgrading from Y to Z now), so we can be fairly certain that the last upgrade succeeded, and the old backup of version X can safely be removed. Doing it like that would remove the need to manually delete the old backup before each new upgrade.
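To illustrate the idea (just a sketch; the PG_DATA/BACKUPDIR names are hypothetical), the version comparison could lean on the PG_VERSION file Postgres keeps at the top of every data directory:

```sh
# Sketch: overwrite the old backup only if it's from an earlier major
# version than the data we're about to back up (works for majors >= 10,
# where PG_VERSION holds a single integer like "14").
CURRENT_VER=$(cat "${PG_DATA}/PG_VERSION")
if [ -f "${BACKUPDIR}/PG_VERSION" ]; then
  BACKUP_VER=$(cat "${BACKUPDIR}/PG_VERSION")
  if [ "${BACKUP_VER}" -lt "${CURRENT_VER}" ]; then
    # The existing backup predates an upgrade we've already survived,
    # so reclaiming its space should be safe.
    rm -rf "${BACKUPDIR:?}"/*
  fi
fi
```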

@j-hellenberg

@justinclift Just a friendly reminder, is there any update on this? :)

@justinclift
Member Author

Not yet. 😬

@spwoodcock

Does recommending a multi-step approach like this make sense?

https://github.com/pgautoupgrade/docker-pgautoupgrade/wiki/Automating-The-Upgrade

The user can simply add a backup stage before the upgrade as part of the startup order (whether docker compose, Kubernetes, etc).

This way they have the flexibility to do a pg_dump, directory copy, upload to object storage, copy to a bind-mount dir, whatever they want (using whatever container they wish, too).

It's a tradeoff between creating and maintaining backup options as part of this container, and having pgautoupgrade do only one function: upgrade Postgres.

We could have a wiki article about creating a backup prior to the upgrade.

In reality, it is quite helpful to have at least some basic backup and rollback capability built in.

So perhaps a minor suggestion to your original proposal: why not create the backup in the PG_DATA dir to save the additional bind mount? As you say:

  • An environment variable to perform a backup first.
  • Simply copy the pg directory to a .backup dir or similar, within PG_DATA.
  • (Optional) In part 2, we deal with some form of restore: if the script fails anywhere, we copy the data from the backup back to the original directory.

@justinclift
Member Author

Simply copy the pg directory to a .backup dir or similar, within PG_DATA.

This seems like a decent idea. We'd want to do a disk space check first too, to ensure there's enough free space to copy the directory.

In theory that shouldn't be too hard to check, probably some du and df commands I guess. 😄
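Something like this, perhaps, using the POSIX -k/-P flags so the output is stable enough to parse (a sketch, not tested):

```sh
# Sketch of the space check: both sizes in 1K blocks via POSIX flags.
DATA_KB=$(du -sk "${PG_DATA}" | awk '{print $1}')
FREE_KB=$(df -Pk "${PG_DATA}" | awk 'NR==2 {print $4}')
if [ "${FREE_KB}" -lt "${DATA_KB}" ]; then
  echo "Not enough free space to back up ${PG_DATA}, aborting" >&2
  exit 1
fi
```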

@wuast94
Member

wuast94 commented Sep 30, 2024

I don't really like the empty folder idea. I would create a folder in that backup dir based on the upgrade step, so something like 14_15 if we upgrade from 14 to 15.

Space shouldn't really be a concern in my opinion; you'd need a really huge db on a very small system, which should be a very rare combo. Also, checking available space should be very easy to add before doing a backup. Just make sure not to run the upgrade if the user wants a backup but there isn't enough space for it.

@justinclift
Member Author

justinclift commented Oct 1, 2024

so something like 14_15 if we upgrade from 14 to 15.

Yeah, something along those lines seems like a decent idea. That should also help avoid overwriting things if the database automatically upgrades across several releases over time.

We could probably make that folder name even more descriptive. Maybe a timestamp plus some extra user-friendly text?

2024.02.07-12.08-backup_prior_to_pg_13_to_14_upgrade
2024.10.01-14.13-backup_prior_to_pg_14_to_15_upgrade
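For illustration, date(1) can build that prefix directly (OLD_VER and NEW_VER being placeholders for the detected versions):

```sh
# Sketch: timestamped, human-readable backup directory name.
BACKUP_NAME="$(date +%Y.%m.%d-%H.%M)-backup_prior_to_pg_${OLD_VER}_to_${NEW_VER}_upgrade"
```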

Space shouldn't really be a concern in my opinion; you'd need a really huge db on a very small system

Nah, I've literally seen this kind of problem happen on production systems before.

For example, a long-running database host that's getting worryingly close to its upper safety threshold of disk usage while a path for upgrading the host still needs to be figured out.

Also, simple mistakes (a backup accidentally being written to the wrong file system) can cause the same kind of failure mode too.

That being said, we only need to check that stuff when the user has indicated they want a backup to happen. 😄

@spwoodcock

spwoodcock commented Oct 1, 2024

I like the idea of a very descriptive directory 👍

Implementation so far based on discussion:

  1. Update the entrypoint script to run additional logic prior to the upgrade commencing.
  2. If an environment variable like PGAUTO_BACKUP=1 is present, then perform the backup.
  3. First check if the device has necessary space for a backup.
    a. Check the size of the PG_DATA dir using du -sh $PG_DATA.
    b. Check remaining space on device containing PG_DATA using df $PG_DATA.
    c. Compare the two and continue if there is enough space.
    d. Else exit with an error message and code.
  4. Create a directory in a format like 2024.10.01-14.13-backup_prior_to_pg_14_to_15_upgrade inside PG_DATA (to avoid requiring a second bind mount or volume).
  5. Copy the contents of the existing PG_DATA dir to the newly created dir.
  6. Continue the upgrade.
  7. (Optional) If the upgrade fails at any point, trap the exit code and roll back the upgrade (copy the backup directory contents back to PG_DATA); see the sketch below.

Let me know if I missed anything or if there are additional ideas 😃

I might get some time to take a stab at this in the coming months, but otherwise anyone feel free to tackle it in the meantime 🙏
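To make that concrete, here's a rough sh sketch of steps 2-7 (PGAUTO_BACKUP, OLD_VER, NEW_VER, and do_upgrade are all placeholder names, not the container's actual code):

```sh
set -e

if [ "${PGAUTO_BACKUP}" = "1" ]; then
  BACKUP_DIR="${PG_DATA}/$(date +%Y.%m.%d-%H.%M)-backup_prior_to_pg_${OLD_VER}_to_${NEW_VER}_upgrade"

  # Steps 3a-3d: refuse to proceed if the device can't hold a second copy.
  DATA_KB=$(du -sk "${PG_DATA}" | awk '{print $1}')
  FREE_KB=$(df -Pk "${PG_DATA}" | awk 'NR==2 {print $4}')
  if [ "${FREE_KB}" -lt "${DATA_KB}" ]; then
    echo "Not enough free space for a backup, refusing to upgrade" >&2
    exit 1
  fi

  # Steps 4-5: copy the cluster aside. The backup lives *inside* PG_DATA,
  # so we have to skip the backup dirs themselves while copying.
  mkdir "${BACKUP_DIR}"
  for entry in "${PG_DATA}"/*; do
    case "${entry}" in
      *-backup_prior_to_pg_*) ;;              # never copy a backup into a backup
      *) cp -a "${entry}" "${BACKUP_DIR}/" ;;
    esac
  done

  # Step 7 (optional): restore the copy if anything below fails.
  restore_backup() {
    if [ "$?" -ne 0 ]; then
      echo "Upgrade failed, restoring from ${BACKUP_DIR}" >&2
      cp -a "${BACKUP_DIR}"/. "${PG_DATA}/"
    fi
  }
  trap restore_backup EXIT
fi

do_upgrade   # step 6: the existing pg_upgrade logic
```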

@justinclift
Member Author

Yep, that sounds like the right kind of thing. Unexpected edge-case gotchas may turn up while doing the actual implementation (it happens), but as a plan that looks reasonable.

With the du and df, I'm not sure if those will turn out to be the best options. Looking at the man pages for them on my Debian 12 based system they don't seem to have a specific "machine readable" output mode (nor json output), though that's not necessarily a blocker if the output format is stable. 😄

@wuast94
Member

wuast94 commented Oct 1, 2024

Also, we should make it atomic. I'm thinking about a cluster: when a node goes offline, the cluster starts the container on another node, and if the first node comes back online it would still have the container running until the cluster manages it. So we should make sure that a second container can never write to or modify the dir while another container is working on it.

Timestamps could also cause an edge case where a container that keeps getting restarted because of a failure creates endless dirs and potentially fills up the disk, so we should keep such edge cases in mind too.
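As a sketch of the atomicity side (the lock path is hypothetical): mkdir is atomic on POSIX filesystems, so only one container can win, and a fixed lock name also avoids piling up new dirs on every crash-restart:

```sh
# Sketch: single-writer guard for the data directory.
LOCK_DIR="${PG_DATA}/.pgautoupgrade.lock"
if ! mkdir "${LOCK_DIR}" 2>/dev/null; then
  echo "Another pgautoupgrade container seems to be working on ${PG_DATA}" >&2
  exit 1
fi
trap 'rmdir "${LOCK_DIR}"' EXIT
```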

@justinclift
Member Author

so we should make sure that a second container can never write to or modify the dir while another container is working on it.

Isn't that something it's up to the container management framework to ensure? ie that if our container is mounted and running, nothing else is going to mess with it, and it's not going to mess anything else up

Doesn't really seem like the job of our container to ensure there aren't other cluster members still running (etc).

Unless I'm misunderstanding the example scenario? 😄

@j-hellenberg

  4. Create a directory in a format like 2024.10.01-14.13-backup_prior_to_pg_14_to_15_upgrade inside PG_DATA (to avoid requiring a second bind mount or volume).

I think it makes sense to have this as a PG_BACKUP_DIR config option, with PG_DATA as the default since that seems sensible, but I can still imagine people wanting to specify something different there. If they want to use a separate volume, they can just do so and specify the directory according to the mount path.
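ie something along the lines of (names hypothetical):

```sh
# Fall back to the data dir if no explicit backup dir was given.
BACKUP_ROOT="${PG_BACKUP_DIR:-${PG_DATA}}"
```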

c. Compare the two and continue if there is enough space.
d. Else exit with an error message and code.

Also, with the proposed implementation, I'm still not sure whether it is ideal that successive upgrades will continue to fill up the disk space with multiple parallel copies of the database. For example, when we upgrade from 12 to 16 in steps, we end up with the main data and four full copies on top (12_13, 13_14, 14_15, 15_16). Maybe it's fine, as the administrator will eventually be forced to clean up once disk space gets too low, just wanted to bring it up again.

@justinclift
Member Author

For example, when we upgrade from 12 to 16 in steps, we end up with the main data and four full copies on top (12_13, 13_14, 14_15, 15_16).

It's probably a better idea to do that rather than overwrite a previous backup.

If we go with the assumption that PG_BACKUP_DIR has to be manually set by the people using the container, then we should really leave the management of the backup space to them as well.

If someone using the container sets the option but doesn't ensure they have enough space beforehand, that's up to them to resolve. I mean, we should still play nice with things (ie detect and warn/error/etc), but let's not overthink it? 😄

@wuast94
Member

wuast94 commented Oct 17, 2024

Doesn't really seem like the job of our container to ensure there aren't other cluster members still running (etc).

Unless I'm misunderstanding the example scenario? 😄

In theory yes, but it's a bit of an edge case. And on the other hand, my healthcheck already implements this (it would need to be modified a bit, I think, for the backup thing).
It could also be used to avoid a restart loop if the container keeps crashing while the backup is running.

But you're right that the orchestrator shouldn't allow this; I did hit this case with my Swarm cluster, though.
