
Velero CRD upgrade job failures #559

Closed
jkoop144 opened this issue Mar 19, 2024 · 9 comments

@jkoop144

What steps did you take and what happened:
When upgrading the Velero Helm chart, a job is created to upgrade the Velero CRDs as well. On every deployment, the CRD upgrade job fails with:

/tmp/sh: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /tmp/sh)
/tmp/sh: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /tmp/sh)

This happens when the job starts on an amd64-based node, even though the error suggests it is looking for aarch64 (ARM) libraries. It occurs on every Helm upgrade of Velero across all of our clusters (each with a somewhat different configuration).
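For reference, a quick way to double-check the node architecture and which platforms the kubectl image manifest actually provides (the image reference is a placeholder for our internal mirror, substitute your own):

kubectl get nodes -o custom-columns=NAME:.metadata.name,ARCH:.status.nodeInfo.architecture
docker manifest inspect <registry>/bitnami/kubectl:<tag> | grep -B1 -A3 '"architecture"'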

What did you expect to happen:

The Velero CRD upgrade job to finish successfully and the Helm upgrade to complete.

The following information will help us better understand what's going on:

Using helm chart version: 3.1.0

Environment:

  • Velero version (use velero version): 1.10.0
  • Velero features (use velero client config get features):
  • Kubernetes version (use kubectl version): "1.26"
  • Kubernetes installer & version: Helm chart version 3.1.0
  • Cloud provider or hardware configuration: AWS EKS
  • OS (e.g. from /etc/os-release): Amazon Linux 2

Vote on this issue!

This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up and to the right of this comment to vote.

  • 👍 for "I would like to see this bug fixed as soon as possible"
  • 👎 for "There are more important bugs to focus on right now"
@qiuming-best qiuming-best self-assigned this Mar 20, 2024
@qiuming-best qiuming-best transferred this issue from vmware-tanzu/velero Mar 20, 2024
@qiuming-best
Collaborator

qiuming-best commented Mar 20, 2024

Which Velero version do you want to upgrade to?

What is your configuration for the lines referenced here?
Have you reconfigured the initContainers images for the upgrade job?

@jkoop144
Author

We are not actually upgrading the Velero version, just changing some values. The change in Helm values triggers the CRD upgrade job.

For the initContainer we are using:

initContainers:
  - name: velero-plugin-for-aws
    image: (companyInternalArtifactory)/velero/velero-plugin-for-aws:v1.5.2
    imagePullPolicy: IfNotPresent

and our kubectl setup is similar with:

kubectl:
  image:
    repository: (companyInternalArtifactory)/bitnami/kubectl
    tag: "${kubectl_version}"
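As far as I understand, the kubectl image above is what the CRD upgrade job uses to stage /tmp/sh and /tmp/kubectl, so it could also be pinned by digest for reproducibility, roughly like this (placeholder digest, assuming the chart renders the image as repository:tag):

kubectl:
  image:
    repository: (companyInternalArtifactory)/bitnami/kubectl
    # placeholder digest; resolve the real one with `docker manifest inspect`
    tag: "${kubectl_version}@sha256:<digest>"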

@qiuming-best
Collaborator

@jkoop144
I've tried it with bitnami/kubectl:latest and it's ok.
Maybe the /tmp/sh binary in your image is not statically compiled.
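One rough way to check (paths assume the Debian-based bitnami image layout; adjust for your mirror and tag):

cid=$(docker create bitnami/kubectl:latest)
docker cp "$cid":/bin/sh ./kubectl-sh
docker rm "$cid"
file ./kubectl-sh                      # "statically linked" vs "dynamically linked"
objdump -T ./kubectl-sh | grep GLIBC_  # glibc symbol versions a dynamic binary requires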

@Lirt
Contributor

Lirt commented Mar 25, 2024

Hello,

Same issue for me.

My build should be reproducible: I pinned the chart version to 4.0.1 and re-ran the pipeline after some time (with no changes to values). The CRD upgrade job is failing the same way:

$ kubectl logs velero-upgrade-crds-km5kq -c velero 
/tmp/sh: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /tmp/sh)
/tmp/sh: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /tmp/sh)

The job is templated with the correct image and SHA:

       containers:
       - args:
         - -c
         - /velero install --crds-only --dry-run -o yaml | /tmp/kubectl apply -f -
         command:
         - /tmp/sh
         image: velero/velero:v1.11.1@sha256:fabf2d40f640019aed794f477013ec03f2a4b91e3f5aa80f9becdd8d040c5c6b

DockerHub says the image was pushed 8 months ago, but could it have changed during that time? I didn't bind the image to a SHA.

I tried to run the job with tag 1.12.4 and that works, so the issue is present in both 1.11.0 and 1.11.1.
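If I'm reading the chart's upgrade job right, that would explain it: an init container based on the kubectl image copies its shell and kubectl binary into a shared emptyDir, and the velero container then executes them. If the kubectl image was rebuilt on a newer base than the velero image, /tmp/sh needs glibc symbols the velero image does not ship. A simplified sketch of that job shape (not the exact chart template; image tags are examples):

      initContainers:
      - name: kubectl
        image: docker.io/bitnami/kubectl:1.26   # rendered from the chart's kubectl.image values
        command: ["/bin/sh", "-c"]
        args: ["cp /bin/sh /tmp/sh && cp $(command -v kubectl) /tmp/kubectl"]
        volumeMounts:
        - name: crds
          mountPath: /tmp
      containers:
      - name: velero
        image: velero/velero:v1.11.1            # rendered from the chart's image values
        command: ["/tmp/sh"]
        args: ["-c", "/velero install --crds-only --dry-run -o yaml | /tmp/kubectl apply -f -"]
        volumeMounts:
        - name: crds
          mountPath: /tmp
      volumes:
      - name: crds
        emptyDir: {}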

Lirt added a commit to Lirt/velero-plugin-for-openstack that referenced this issue Mar 25, 2024
Integration test is always freshly executed, therefore doesn't need to
run Velero job to upgrade CRDs.

Upgrade CRDs job has an unconfirmed bug that prevents CI to success (vmware-tanzu/helm-charts#559)

Signed-off-by: Ondrej Vasko <[email protected]>
@qiuming-best
Collaborator

qiuming-best commented Mar 26, 2024

@Lirt thanks for the further detailed information.
Yes, I tried 1.13.0 last and it's OK,
but the above problem occurs with v1.9.0, v1.10.0, v1.11.0, and v1.12.0.

@jkoop144
Author

Is there a way to target 1.13.0 for the upgrade CRDs job in the Helm chart values, so that I could bypass the versions containing the issue?
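Something like the following is what I have in mind (just a sketch; I'm not sure the chart exposes a job-specific image knob, so these are the two levers I can see in values.yaml):

# Option 1: skip the in-cluster CRD upgrade job and apply the CRDs out of band
upgradeCRDs: false

# Option 2: move the whole deployment (and therefore the job's velero container)
# to an image that doesn't exhibit the problem
image:
  repository: velero/velero
  tag: v1.13.0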

@jenting
Collaborator

jenting commented May 11, 2024

@qiuming-best we have released a Helm chart with Velero v1.13.0.
Is this issue still present?

@kosmoz

kosmoz commented May 14, 2024

Upgrading to version 6.1.0 of the chart fixed this issue for me.
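For anyone else hitting this, the upgrade is roughly (release name, repo alias, and namespace are examples; adjust to your setup):

helm repo update
helm upgrade velero vmware-tanzu/velero --version 6.1.0 --namespace velero --reuse-values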

@jenting
Collaborator

jenting commented May 15, 2024

Thanks for the info, closing it.
