tracker for "physically bound" containers #644
Maybe additional image stores can play a role here. Images are read-only.
OK, people are going to keep doing something like this anyway. But probably this command could query the storage config and take the first additional image store (AIS) location configured there. Also, something we should definitely do here is canonicalize all the timestamps in the image store to help with reproducible builds. (Today, of course, containers/buildah#4242 rains on that parade, but we can prepare for the day it gets fixed...)
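To make the timestamp-canonicalization idea concrete, here is a minimal sketch, assuming the embedded store lives under `/usr/share/containers/storage` and that clamping everything to a fixed epoch (probably `SOURCE_DATE_EPOCH` in practice) is acceptable; this is illustrative only, not an existing bootc feature:

```python
#!/usr/bin/env python3
# Sketch: clamp every mtime under the embedded image store to a fixed
# epoch so that rebuilding the bootc image with unchanged app images
# produces bit-identical layers. The store path and epoch are assumptions.
import os

STORE = "/usr/share/containers/storage"
EPOCH = 0  # a real tool would likely honor SOURCE_DATE_EPOCH

for dirpath, dirnames, filenames in os.walk(STORE, topdown=False):
    for name in filenames + dirnames:
        # follow_symlinks=False so we touch the links themselves, not their targets
        os.utime(os.path.join(dirpath, name), (EPOCH, EPOCH), follow_symlinks=False)
    # fix the directory itself last, after its contents have stopped changing
    os.utime(dirpath, (EPOCH, EPOCH), follow_symlinks=False)
```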
I am undecided. It would be a very opinionated way of pulling/embedding images. If users wanted more flexibility, we might end up with flags similar to Podman's. In that case, users could very well just use Podman.
Reading the configuration of another tool is somewhat risky. I would prefer if …
Additional image stores are currently not correctly reported by …
Opened containers/storage#2094, which will soon get addressed.
Yeah, I'd agree that in the end fixes here should live in podman.
The conf isn't wholly owned just by podman; I think the risk is more "parsing it without using the c/storage Go code", right? (This isn't like the quadlet files, which only have parsing code that lives in podman.)
That, plus making sure that bootc and podman always use the same version of c/storage. If a new feature were added to the parsing code and the two tools differed, we could run into trouble.
That problem already exists today, though, with all the other things consuming c/storage, including buildah and skopeo, but most notably crio, which has a different release cadence and did break in practice a while back. This discussion may seem like a digression, but I think it's quite relevant as we look towards unified storage, for example, and how that would work.
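For context on what "take the first AIS location from the storage config" would amount to if done outside c/storage, here is a naive sketch; the key names follow containers-storage.conf(5), the file path is an assumption, and Python 3.11+ is assumed for `tomllib`. The hand-parsing risk discussed above is exactly why this logic arguably belongs in podman instead:

```python
#!/usr/bin/env python3
# Naive illustration only: read the first additional image store (AIS)
# from storage.conf without going through c/storage. The surrounding
# discussion is about why hand-parsing like this is risky.
import tomllib

with open("/etc/containers/storage.conf", "rb") as f:
    conf = tomllib.load(f)

stores = (
    conf.get("storage", {})
        .get("options", {})
        .get("additionalimagestores", [])
)
print(stores[0] if stores else "no additional image stores configured")
```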
When running … the errors are sporadic, and the build process's built-in retry may make one or two steps forward each time.
No idea, honestly... When reporting errors like this, it'd be super useful to know details like the environment (virt/physical, OS version, whether there's any nested containerization going on, etc.). Just searching for the error string turns up containers/podman#10135. It's not even obviously clear to me whether the error message is from the outer build process or the inner one.
I think it's from the inner one. Adding podman debug logs as a follow-up. Does the last error give any clues about the possible problem?
Splitting this out from #128 and also from CentOS/centos-bootc#282
What we want to support is an opinionated way to "physically" embed (app) containers inside a (bootc) container.
From the UX point of view, a really key thing is that there is one container image, keeping the problem domain of "versioning/mirroring" totally simple.
There are things like … as one implementation path. There are a lot of sub-issues here around nested overlayfs and whiteouts.
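As background on the whiteout concern (an aside, not a proposed fix): OCI layer tarballs mark deletions with `.wh.`-prefixed entries, while overlayfs represents them as 0:0 character devices, so physically nesting one store inside another turns those markers into literal files. A tiny sketch for spotting them in a layer tarball:

```python
#!/usr/bin/env python3
# Background sketch: list the OCI whiteout markers in a layer tarball.
# These ".wh." entries (and overlayfs's character-device whiteouts) are
# what make nesting one overlay storage inside another tricky.
import sys
import tarfile

with tarfile.open(sys.argv[1], "r:*") as layer:
    for member in layer:
        basename = member.name.rsplit("/", 1)[-1]
        if basename.startswith(".wh."):
            print("whiteout entry:", member.name)
```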
I personally think what would work best here is a model where we have an intelligent build process that does something like this: basically, we should support a flow that takes the underlying layers (tarballs) and renames all the files to be prefixed with `/usr/share/containers/storage/overlay` or so, plus a bit that adds all the metadata as a final layer. This would help ensure that we never re-pull unchanged layers even for "physically" bound images. IOW, it'd look something like the sketch below.
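Here is a rough, hypothetical sketch of the per-layer rewriting step; the exact prefix layout is an assumption, and overlay metadata and whiteout handling (the genuinely hard parts) are deliberately glossed over:

```python
#!/usr/bin/env python3
# Hypothetical sketch: rewrite one app-image layer tarball so its entries
# land under the overlay store inside the bootc image. Keeping a 1:1
# mapping of app layers to bootc layers is what avoids re-pulling
# unchanged layers. Overlay metadata, whiteouts, and the exact directory
# layout are not handled here.
import sys
import tarfile

def reprefix_layer(src: str, dst: str, layer_digest: str) -> None:
    prefix = f"usr/share/containers/storage/overlay/{layer_digest}/diff"
    with tarfile.open(src, "r:*") as inp, tarfile.open(dst, "w:gz") as out:
        for member in inp:
            member.name = f"{prefix}/{member.name.lstrip('./')}"
            if member.islnk():
                # hard links reference other entries in the same archive,
                # so their targets need the prefix too
                member.linkname = f"{prefix}/{member.linkname.lstrip('./')}"
            fileobj = inp.extractfile(member) if member.isreg() else None
            out.addfile(member, fileobj)

if __name__ == "__main__":
    reprefix_layer(sys.argv[1], sys.argv[2], sys.argv[3])
```

The metadata that makes the store usable by podman would then be emitted as one final layer, with timestamps canonicalized as discussed above.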
The big difference between this and `RUN podman --root pull` is that the latter is inherently going to result in a single "physical" layer in the bootc image, even if the input container image has multiple layers.

A reason I argue for this is that `RUN podman pull` is (without forcing on stuff like `podman build --timestamp`) inherently going to be highly subject to "timestamp churn" on the random JSON files that podman creates, and that is going to mean that every time the base image changes, the client has to re-download these "physically embedded" images, even if logically they didn't change. Of course, there are still outstanding bugs like containers/buildah#5592 that defeat layer caching in general.

However... note that this model "squashes" all the layers in the app images into one layer in the base image, so on the network, e.g. when the base image used by an app changes, it will force a re-fetch of the entire app (all its layers), even if some of the app layers didn't change.
In other words, IMO this model breaks some of the advantages of the content-addressed storage in OCI by default. We'd need deltas to mitigate.
(For people using ostree-on-the-network for the host today, this is mitigated because ostree always behaves similarly to zstd:chunked and has static deltas; but I think we want to make this work with OCI)
Longer term, though, IMO this approach clashes with the direction I think we need to take for e.g. configmaps: we really will need to get into the business of managing more than just one bootable container image, which leads to: