Ephemeral invocations #166
While not packaged as a feature, you can already do something like this, which should improve the situation:

```shell
# create base pot
pot create -p immutable -t single -b 13.0
# snapshot and clone
pot snapshot -p immutable
pot clone -P immutable -p mutable
# start derived pot
pot start mutable
# change stuff in immutable
echo "Changed some things" >/opt/pot/jails/foundation/m/blabla
# resnapshot and create new clone
pot snapshot -p immutable
pot clone -P immutable -p mutable_new
# stop old clone and move new clone into place
pot stop -p mutable
pot rename -p mutable -n mutable_old
pot rename -p mutable_new -n mutable
pot start mutable
# destroy old clone
pot destroy -p mutable_old
```
Thanks, that sounds like it's enough for what I need. I missed the …
I've now done this. It would be nice to have an atomic destructive rename that pot protects from concurrent clones, but I can work around it by wrapping the rename in a script that I run while holding the same lock file I hold while doing the clone.
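The lock-wrapped swap described above might look like the following sketch. The function and lock-file names are hypothetical, and `POT` is made overridable so the steps can be dry-run; serialization against concurrent clones would come from invoking it under `lockf(1)` from the FreeBSD base system.

```shell
#!/bin/sh
# Hypothetical helper: swap a freshly cloned pot into place, serialized with
# other clone/swap jobs via a shared lock file.
# POT is overridable (e.g. POT=echo) so the steps can be dry-run.
POT="${POT:-pot}"

swap_clone() {
  old="$1"   # currently running clone, e.g. "mutable"
  new="$2"   # freshly cloned pot, e.g. "mutable_new"
  "$POT" stop -p "$old"
  "$POT" rename -p "$old" -n "${old}_old"
  "$POT" rename -p "$new" -n "$old"
  "$POT" start "$old"
  "$POT" destroy -p "${old}_old"
}

# Run under the same lock the cloning job holds, e.g.:
#   lockf -k /var/run/pot-swap.lock sh -c '. ./pot-swap.sh; swap_clone mutable mutable_new'
```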
This actually doesn't do quite what I need, because the cloned invocation is linked to the original, so I can't replace the base one without stopping the running ones (which I don't want to do; I want them to exit gracefully). A …
Is this a way of saying "I want to be able to rename a running pot"?
p.s. Can't you simply use clone + unique jail names (e.g., using UUIDs)? That's what the nomad plugin does when invoking "pot prepare".
That might work.
I was using UUIDs, but then that's extra metadata I need to communicate, and I can't use well-known names for the pots to check whether they're running, send them signals, and so on. The UUID doesn't actually help here, though: if I clone pot A to pot A-{UUID}, then I can't destroy pot A, because the cloned dataset of A-{UUID} depends on A. I can fix that with an explicit …
But why would you want to destroy pot A? You can simply change/update/whatever in it and then take a new snapshot you can clone a new pot from (while keeping the old snapshot and running clones in place). Managing metadata is an extra burden, for sure (but also not that hard). It's all a bit theoretical without knowing more about what you're actually trying to achieve.
Because it's no longer required. To make things more concrete:
At the same time, I create a new base pot containing updated versions of compilers and things, and an updated base system with security vulnerabilities fixed. I want this to be picked up by the ephemeral pot as soon as it finishes running one job (I also prod it to exit if it's in the long-poll state and not currently running anything). As soon as the new base image is ready and the runner has finished, the base dataset is no longer required and should be deleted. If the ephemeral pot's dataset is promoted, this is trivial (ZFS handles the reference counting of any blocks that are still referenced by both).
I would simply run a prune script for that :), but maybe @pizzamig has more inspiration/ideas?
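A prune pass of that kind might look roughly like the following sketch. The dataset root and the reliance on the read-only `clones` snapshot property are assumptions about this setup, not something prescribed by the thread.

```shell
#!/bin/sh
# Hypothetical prune pass: report snapshots under the pot root that no longer
# have dependent clones (their "clones" property is "-"), i.e. candidates for
# destruction. POT_ROOT is an assumed dataset layout.
POT_ROOT="${POT_ROOT:-zroot/pot/jails}"

prune_bases() {
  zfs list -H -r -t snapshot -o name,clones "$POT_ROOT" |
  while read -r snap clones; do
    [ "$clones" = "-" ] && echo "no dependents: $snap"
  done
}
```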
Hi everyone, sorry, I'm a bit late. If I understood it correctly, we have:
Do you have one ephemeral per base, or multiple ephemerals per base? My observations:
I have a single ephemeral one; other use cases would want multiple ones.
That's what I'd have done 10-20 years ago, but it's not recommended practice for modern operations: container deployments are supposed to be deterministically created from a declarative recipe, not continually evolving.
Yup, that's what I'm doing now, but I need to run a …
That's what I was doing, but rollback is a synchronous operation, whereas destroying a clone can happen in the background.
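The synchronous/asynchronous contrast can be made concrete with a small sketch; the dataset names are hypothetical and `ZFS` is overridable for a dry run.

```shell
#!/bin/sh
# Illustrative contrast: rollback makes the next job wait for it to finish,
# while destroying a retired clone can be pushed into the background so the
# next job starts immediately. Dataset names are hypothetical.
ZFS="${ZFS:-zfs}"

reset_by_rollback() {
  # Synchronous: the caller blocks until the rollback completes.
  "$ZFS" rollback -r zroot/pot/jails/worker@clean
}

retire_old_clone() {
  # Asynchronous: the destroy proceeds while the next job is already running.
  "$ZFS" destroy -r zroot/pot/jails/worker_old &
}
```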
I don't understand the need to run a …
I've just installed and successfully started a runner using your scripts.
In other words, you would need a way to recreate the base (with the same name) without shutting down the ephemeral pot. The …
It seems that you can rename the pot base while the ephemeral pot is running (the zfs origin is updated accordingly). So the upgrade process could be:
I will test the entire process later this week (maybe submitting a PR to your project), but the …
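The rename-based upgrade described above (where the running ephemeral clone's zfs origin follows the base's rename) might be sketched like this. The pot names and the `create` parameters are illustrative, and `POT` is overridable for a dry run.

```shell
#!/bin/sh
# Sketch of a rename-based base upgrade that leaves the ephemeral pot running.
# POT=echo dry-runs the steps; names are hypothetical.
POT="${POT:-pot}"

upgrade_base() {
  # 1. Move the old base aside; the running ephemeral clone keeps working
  #    because its zfs origin is updated with the rename.
  "$POT" rename -p base -n base_old
  # 2. Recreate a refreshed base under the well-known name.
  "$POT" create -p base -t single -b 13.0
  # 3. Once the last clone of base_old has exited, drop it.
  "$POT" destroy -p base_old
}
```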
Thanks. The …
Hi @davidchisnall, do you think it would make sense to revisit this requirement? (we made quite some progress structurally this year, so we might be in a better position to implement the feature now). |
Now that there's support for OCI containers on FreeBSD, I plan on moving my things over to that, so feel free to close this if no one else needs it. |
Is your feature request related to a problem? Please describe.
I want to run jobs in a throw-away jail that is reset to a previous state on exit. In most container systems, this is accomplished with an ephemeral layer over the top of a container image.
Describe potential alternatives or workaround you've considered (if any)
I currently wrap the `pot` invocation in a loop that rolls back to the previous snapshot each time. This isn't great for three reasons: …

Describe the feature you'd like to have

- … `pot start` that cloned the filesystem, ran from the clone, and destroyed it at the end. If these clones live in a fixed part of the zpool namespace then `pot` can clean them up easily at the end.
- … `pot rename` command so that I can atomically replace the immutable base image when I upgrade.
- … `zfs quota` property for the ephemeral filesystem.
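The requested clone-run-destroy behaviour can be approximated today with a small wrapper, following the workaround shown earlier in the thread. This is only a sketch: the base-pot name is illustrative, the per-invocation naming scheme is an assumption, and `POT` is overridable for a dry run.

```shell
#!/bin/sh
# Approximation of an ephemeral `pot start`: snapshot the immutable base,
# clone it under a per-invocation name, run the clone, destroy it afterwards.
POT="${POT:-pot}"

run_ephemeral() {
  base="$1"
  clone="${base}-eph-$$"        # per-invocation clone name (assumption)
  "$POT" snapshot -p "$base"
  "$POT" clone -P "$base" -p "$clone"
  "$POT" start "$clone"
  # ... the workload runs inside the clone; wait for it to finish here ...
  "$POT" stop -p "$clone"
  "$POT" destroy -p "$clone"
}
```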