Feat: Build container in Zuul #31
Conversation
.zuul.yaml (outdated)
- secret:
    name: status-page-registry-credentials
    data:
      registry.scs.community: !encrypted/pkcs1-oaep
yes, we would be managing registry credentials as well as required scanning/sbom building centrally
When can we expect these changes to be usable?
Removed the tag job, as we never used the tagging mechanism before when releasing a new version.
Using commit_id may be tricky and not as useful as you think. The reason is that, depending on the merge type (squash vs merge vs rebase), the commit hash as such may never appear in the main branch. Using the PR as a reference is more reliable (the PR link is already present in the image labels as org.zuul-ci.change_url).
If you skip tags entirely, "latest" is added automatically (https://opendev.org/zuul/zuul-jobs/src/branch/master/roles/build-container-image/tasks/build.yaml#L25). If you use something similar to how Zuul defines its own tags (https://opendev.org/zuul/zuul/src/branch/master/.zuul.yaml#L287), you automatically get proper tags also in the case when you put your job into the tag pipeline.
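For illustration, the tag definition in Zuul's own .zuul.yaml linked above boils down to roughly the following sketch (simplified and paraphrased; the repository value here is only a placeholder and the exact expression may differ from the linked file):

```yaml
container_images:
  - context: .
    repository: example.registry/org/image   # placeholder, not a real value
    tags:
      # ['latest'] for regular changes; e.g. ['1', '1.2', '1.2.3'] when the job
      # runs in the tag pipeline and zuul.tag is '1.2.3'
      "{{ zuul.tag is defined | ternary([zuul.get('tag', '').split('.')[0], '.'.join(zuul.get('tag', '').split('.')[:2]), zuul.get('tag', '')], ['latest']) }}"
```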
.zuul.yaml (outdated)
- context:
  <<: *status-page-build-container-image
  tags:
    - latest
Adding the "latest" tag in upload but not in build makes no sense and makes the job structure more complex. Generally, there is no point in using different tags/labels between jobs.
Having them separated was a little oversight, fixing it. I wanted to have the possibility to easily set new tags, but I will use the same config for both now.
Zuul's way of tagging relies on the usage of SemVer. We never talked about versioning and releasing the status-page-api or any other status page component. I am not opposed to defining a versioning and release system based on SemVer for the status page. But I still need a more automated way of building a container image after each PR, as some features are better tested in the local dev environment or test deployment before "releasing" a versioned image. In general, the way of promoting an image from a commit sha tag to a version tag would be preferable, but using the opendev job which uses skopeo may introduce new complexities regarding the secrets. At this point, I do have a question @gtema: Do I need to configure anything else regarding the secrets? I do have a robot account on harbor for the status page project. Do I need to share these secrets on the Zuul instance, and if yes, how?
I just referred to the Zuul model as an example of how it could be used (regardless of the chosen versioning model) to have a single definition of tags working for regular PRs and tagged releases.
Ideally you should have functional tests here in the project. Local testing is never a better choice. But that is a different story and not in scope.
The promote job upstream is used more to copy images between registries (opendev often uses an interim registry where artifacts from different projects are pushed to be tested together, i.e. cross-project functional testing). In that model there might be a step of "promoting" the image from the interim registry to the general one (and for copying, skopeo is used). Otherwise the normal container build/upload is used.
No, you do not need to care about anything as long as your job is pushing to registry.scs.community. For another registry, the Zuul admin would need to allocate a new robot account in the target registry, put it in vault and update the job to support the new registry as well. For you, there is no need to do anything about that for now.
.zuul.yaml (outdated)
@@ -10,6 +10,8 @@
golangci_lint_version: 1.59.1
go_version: 1.22.4
golangci_lint_options: --timeout 5m
provides:
You do not really need provides/requires in jobs here. In Zuul it means something different: it is used for requirements between different projects, not between jobs in the same project (https://zuul-ci.org/docs/zuul/latest/config/job.html#attr-job.requires).
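As a rough illustration of that cross-project meaning (job and artifact names below are hypothetical, not taken from this repo):

```yaml
# In status-page-api: the build job advertises an artifact.
- job:
    name: status-page-api-container-build   # hypothetical name
    provides: status-page-api-container     # hypothetical artifact name

# In status-page-web: a job in the other project declares it needs that
# artifact, so Zuul resolves the dependency between the two projects.
- job:
    name: status-page-web-integration       # hypothetical name
    requires: status-page-api-container
```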
Misunderstood the usage of it; we were thinking of setting an order in which the jobs run, to save on nodes.
We don't care if tests succeed if the linting fails. We don't care for building if testing fails, etc.
Is there a way of doing such a workflow?
Yes, there is: https://github.com/opentelekomcloud-infra/system-config/blob/main/zuul.d/project.yaml#L68 or https://opendev.org/openstack/codegenerator/src/branch/master/zuul.d/openapi.yaml#L282 as examples (https://zuul-ci.org/docs/zuul/latest/config/job.html#attr-job.dependencies for reference). Building it this way, though, makes your CI take longer. You generally should not do this unless jobs are very long running.
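A minimal sketch of such ordering with job dependencies (job names here are illustrative, not this repo's actual jobs):

```yaml
- project:
    check:
      jobs:
        - status-page-lint          # illustrative name
        - status-page-test:         # illustrative name
            dependencies:
              - status-page-lint    # only start tests once linting succeeded
```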
It makes much more sense to use pods for running lightweight jobs (https://github.com/SovereignCloudStack/zuul-scs-jobs/blob/main/zuul.d/jobs.yaml#L35) with nodeset: pod-fedora-40. Such jobs are executed in K8s and do not require nodes to be provisioned. There are a few cases where this should not be used:
- image building (docker in docker never worked well)
- very heavy jobs (requiring more CPU/memory)
- very long-running jobs (networking bugs on K8s cause Zuul to lose control over those)
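For illustration, a lightweight job opting into the pod nodeset might look roughly like this (the job name is an assumption, not taken from this repo):

```yaml
- job:
    name: status-page-api-unit-test   # assumed name for illustration
    nodeset: pod-fedora-40            # scheduled as a K8s pod, no node provisioning
```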
That seems nice, instead of waiting for the nodes.
Is there a list of nodesets and labels that are currently usable, documented somewhere?
So do you plan to drop provides/requires? The only sensible use of those would be in combination between status-page-api and status-page-web, to build dependencies between the projects.
dropped the provides and requires in jobs
The project is automatically tested. I meant trying out something like a test deployment with all components around it, like the local KinD deployment. I would prefer to do it on a per-feature (after PR) basis, rather than tagging and "releasing" a new version with possibly multiple new features. So I can, for example, get a new feature up in the local or test deployment, so someone else can implement the corresponding frontend changes, as they depend on a working backend. That way a working status page full package can be released. I could stick to building each PR's container image locally and pushing to harbor manually, but then this whole endeavor would be in vain.
Still, a promotion workflow seems more desirable to me. I want to re-tag the last container containing the recent changes, to mark it as "released" under a given version. Rebuilding the container, even with the same code from the same commit, will result in different hashes for the layers, and thus will be regarded as another container image that overwrites existing container images/tags.
And this is easily possible if you include a tag with the PR number (like status-page-api:change_31 in the case of this PR). Once you merge the PR, the image is uploaded and pointed to by the tags latest and change_31, giving you the possibility to test everything you need. But also, as I mentioned above, the case of testing how multiple projects work together is precisely what the interim registry is for.
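A minimal sketch of such a tag list (the surrounding variable layout of the build job is assumed here; zuul.change is the change/PR number Zuul exposes to jobs):

```yaml
tags:
  - latest
  - "change_{{ zuul.change }}"   # e.g. change_31 for this PR
```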
The principle of immutability forbids changing tags, and I am not aware of any possibility to achieve that with docker/podman directly. Some registries allow this using a "proprietary" API only, but in every case it is considered bad practice. Skopeo would also do a copy, resulting in a change of all hashes. That is precisely the reason why you should have sufficient tests in CI, so that when you say you want to tag a new release you are confident in it. At the moment I have neither the time nor the desire to fiddle with harbor APIs to find out how to achieve image retagging, so I am not going to implement it in the near future.
Adding a new tag to an existing image is easily done with:

```
podman pull <image>:<old-tag>
podman tag <image>:<old-tag> <image>:<new-tag>
podman push <image>:<new-tag>
```

That's exactly what I use in the … In CIs, for example, where you can not run docker or podman in another container, crane can do the same:

```
crane tag <image>:<old-tag> <new-tag>
```

This is no "far off" use case or something hacky. It's built-in functionality in most tools used to manage container images.

After a PR merge, there is a new commit hash -> use this commit hash as the tag for the new image. After a while, when you want to release one or more changes, you tag the repository -> there is already an image with the last commit hash as tag, containing all the changes -> re-tag it with the new tag.

Zuul seems to overcomplicate this, as it is at most 2 lines of bash in every other CI/CD I've ever used (ignoring credential handling).

Edit: I could confirm that even … As you can see in the harbor status-page-api registry, there are now 3 tags for the same image.
It's not about Zuul making it any different, but about the fact that you typically should not do this in CI. If you still have such a need, feel free to do this manually in harbor at any point in time.
Why shouldn't I do this in CI? We're using CI, in this case Zuul, as automation for workflows we do not want to do manually. I would like to understand how this workflow differs from, for example, running the lint and testing in Zuul? Especially considering that the docs for the opendev container jobs indicate exactly this workflow:

* :zuul:job:`build-container-image` in `check`
* :zuul:job:`upload-container-image` in `gate`
* :zuul:job:`promote-container-image` in `promote` with ``promote_container_method: tag``

(opendev - container image playbooks)

We're just using different GH pipelines, but the workflow is the same. I do not expect you to implement any changes to Zuul, harbor, jobs or anything else. I just want to understand your reasoning behind it.

Edit: The above example especially handles the case of not using an intermediate registry.
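As a rough sketch, the opendev layout quoted above would map onto a project definition along these lines (the pipeline names and the placement of promote_container_method as a job variable are assumptions here):

```yaml
- project:
    check:
      jobs:
        - build-container-image
    gate:
      jobs:
        - upload-container-image
    promote:
      jobs:
        - promote-container-image:
            vars:
              promote_container_method: tag
```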
@gtema is there still no way to use a promotion workflow like the Zuul example workflows? I think rebuilding the same container, instead of re-tagging, is wasteful, and the inconsistencies in the image hashes are very undesirable.
Edit: I finally see the problem you've described. The container is built inside of a pull request. That's the problem. It must be built after the pull request was merged. Then, on a tag, we can re-tag the commit-id container to a version container.
Not publishing the tag before the PR got merged is exactly to prevent tag replacement. Publishing a container on every commit is not a sane development workflow in my eyes.
That's not what I planned to do; maybe this is where our discussion derailed. AFAIK Zuul's docs recommend this workflow:
This is the workflow they call "Promotion via tags". I want to modify the workflow to basically not build on a tag or release, just to promote again. Basically:
I did go down the rabbit hole of the original Zuul jobs. AFAIK, right now this should be possible. My main concern right now is that there are only build and upload jobs for SCS in the Zuul container jobs. Another way would be to still use the check pipeline and a container build as a check that the container can be built, not upload any post-PR-merge containers, and only upload a fully versioned container on a tag.
- build a container in check pipeline to check if build is possible
- build a tagged container on project tag/release

Signed-off-by: Joshua Irmer <[email protected]>
- could be
- wrong: there is no upload to a public repo happening before the merge is triggered. There is a flow that uploads the image to the interim registry, which may be used when a few projects need to access each other so that you have the possibility to test such artifacts automatically (this is not public).
- wrong: part of the gate is an upload job which pushes a change_<PR_NUM>_latest tag of the image. The promote job (which runs only when the change successfully merges) renames change_<PR_NUM>_latest to whatever the "regular" tag is (i.e. latest).
- upon tagging, a new image is built. Well, it depends on which job you use, but this is typically the case.

Once the PR is merged and the post pipeline completes, Zuul has no access and no info about previously uploaded images. Therefore you can not implement something like: on tag, take the last uploaded container and retag it. This is the same for GitHub actions and is basically dictated by the PR workflow. Please explain what you understand under "promote".
P.S. There is a huge difference between gate and merge from the GitHub PoV, in that Zuul merges the PR once the gate pipeline completes. If you bypass Zuul merging and do it manually in GH, the gate pipeline is not triggered and the whole workflow suddenly looks different.
Use Zuul to build new containers.

- `scs-status-page-container-build` in `check`: Build the whole application on every new PR, to verify that the build and the container build are working.
- `scs-status-page-container-push` in `post`: Build and push the container image to the registry, with `latest` and `commit_id` (commit hash), when the PR was merged.
- `scs-status-page-container-tag` in `tag`: Retag the container image with a given tag.

Furthermore, the `scs-status-page-go-build` job was removed, as the `Containerfile` is a multi-stage file that builds the binary itself. This marks the go build GitHub action obsolete, too.
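A rough sketch of how these jobs map onto the project pipelines (assuming the tenant provides check, post, and tag pipelines; job parents and variables omitted):

```yaml
- project:
    check:
      jobs:
        - scs-status-page-container-build
    post:
      jobs:
        - scs-status-page-container-push
    tag:
      jobs:
        - scs-status-page-container-tag
```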
Closes: #15
Closes: #29