Velero CRD upgrade job failures #559
Which Velero version do you want to upgrade to? And what does your configuration look like for the lines here?

We are not actually upgrading the Velero version; we are just changing some values to be applied. The change in the Helm values triggers the CRD upgrade. For the initContainer we are using:

and our kubectl setup is similar, with:
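For context, the CRD-upgrade behavior of the chart is driven by values along these lines (a hedged sketch; the exact key names should be checked against the values.yaml of your chart version):

```yaml
# Hypothetical values.yaml fragment for the vmware-tanzu/velero chart.
# upgradeCRDs controls whether the upgrade-crds hook Job is created at all.
upgradeCRDs: true

# Image the chart uses to obtain kubectl (and a shell) for the hook Job.
kubectl:
  image:
    repository: docker.io/bitnami/kubectl
    # Pinning the tag here is one way to control which kubectl binary
    # gets copied into the Job's shared volume.
    tag: "1.26"
```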
@jkoop144
Hello, same issue for me. My build should be reproducible. I pinned the chart version to 4.0.1 and re-ran the pipeline after some time (with no changes to the values). The upgrade-CRDs job fails the same way:

```
$ kubectl logs velero-upgrade-crds-km5kq -c velero
/tmp/sh: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /tmp/sh)
/tmp/sh: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /tmp/sh)
```

The job is templated with the correct image and correct SHA:

```yaml
containers:
  - args:
      - -c
      - /velero install --crds-only --dry-run -o yaml | /tmp/kubectl apply -f -
    command:
      - /tmp/sh
    image: velero/velero:v1.11.1@sha256:fabf2d40f640019aed794f477013ec03f2a4b91e3f5aa80f9becdd8d040c5c6b
```

DockerHub says the image was pushed 8 months ago, but could it have been changed during that time? I didn't bind the image to a SHA. I tried running the job with tag 1.12.4 and that works, so the issue is present in both 1.11.0 and 1.11.1.
Integration test is always freshly executed, therefore doesn't need to run Velero job to upgrade CRDs. Upgrade CRDs job has an unconfirmed bug that prevents CI to success (vmware-tanzu/helm-charts#559) Signed-off-by: Ondrej Vasko <[email protected]>
@Lirt thanks for the further detailed information.
Is there a way to target 1.13.0 for the upgrade-CRDs job in the Helm chart values, so that I could bypass the versions containing the issue?
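One workaround along the lines asked here (hedged; verify these key names against the values.yaml of your chart version) is to override the Velero image the job uses, or to skip the hook Job entirely and apply the CRDs yourself:

```yaml
# Hypothetical values.yaml overrides for the vmware-tanzu/velero chart.
# Option A: run the upgrade-crds Job with a Velero image that is not affected.
image:
  repository: velero/velero
  tag: v1.13.0

# Option B: disable the hook Job and manage CRDs outside of Helm.
upgradeCRDs: false
```

Note that `image.tag` also changes the Velero server image itself, not just the job's image, so option B may be the safer bypass if you need to stay on an older Velero version.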
@qiuming-best we have released the Helm chart with Velero v1.13.0.
Upgrading to version 6.1.0 of the chart fixed this issue for me.
Thanks for the info, closing it. |
What steps did you take and what happened:
When upgrading the Helm chart for Velero, a job is created to upgrade Velero's CRDs as well. On every deployment, the CRD job fails with GLIBC version errors such as:

```
/tmp/sh: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /tmp/sh)
```

This happens when the job starts on an amd64-based node, even though it seems to be looking for aarch64 (ARM). It is happening on all of our Helm upgrades for Velero, across all of our clusters (each with a somewhat different configuration).
What did you expect to happen:
The Velero CRD upgrade job to finish properly and the helm upgrade complete.
The following information will help us better understand what's going on:
Using helm chart version: 3.1.0
Environment:
- Velero version (use `velero version`): 1.10.0
- Velero features (use `velero client config get features`):
- Kubernetes version (use `kubectl version`): "1.26"
- OS (e.g. from `/etc/os-release`): Amazon Linux 2

Vote on this issue!
This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.