${pvc.metadata.name} returns the PVC's UID, not its logical name #747

Open · jnm27 opened this issue Aug 26, 2024 · 13 comments

Comments


jnm27 commented Aug 26, 2024

What happened:
In the subDir field of the StorageClass, I am using:
subDir: ${pvc.metadata.namespace}/${pvc.metadata.name}

The namespace is correct, but ${pvc.metadata.name} resolves to "prime-<UID>", such as:
prime-968efac8-99c2-430d-8731-7714e424ad44

This gives no way of identifying the disk on the NFS server, especially if the host on which the PVC was originally created has been lost.

The use case is that, when re-installing a host and re-creating VMs under that host with NFS-backed storage, a user could either re-attach the PVC or delete it.
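For reference, a minimal StorageClass of the kind described above might look like the sketch below; the metadata name, server, and share values are placeholders rather than the actual ones from this cluster.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi                    # placeholder name
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.example.com   # placeholder NFS server address
  share: /exports                  # placeholder export path
  subDir: ${pvc.metadata.namespace}/${pvc.metadata.name}
mountOptions:
  - nfsvers=4.1
reclaimPolicy: Retain              # illustrative; Delete is also common
volumeBindingMode: Immediate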

Environment:

  • CSI Driver version: 4.7.0 (openshift fork)
  • Kubernetes version (use kubectl version): v1.29.6
  • OS (e.g. from /etc/os-release): Red Hat Enterprise Linux CoreOS 416.94.202407171205-0
  • Kernel (e.g. uname -a): Linux spt01.gl1.tfdm.nas.faa.gov 5.14.0-427.26.1.el9_4.x86_64 SMP PREEMPT_DYNAMIC Fri Jul 5 11:34:54 EDT 2024 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
  • Others:
andyzhangx (Member) commented:

Then what's incorrect? ${pvc.metadata.name}? And what value of subDir do you want to create?


jnm27 commented Aug 28, 2024

${pvc.metadata.name} should be the PVC's name, not its UID. Here is the PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/allowClaimAdoption: "true"
    cdi.kubevirt.io/createdForDataVolume: 85538371-4b3c-49a6-b226-fb2cfbb17aad
    cdi.kubevirt.io/storage.condition.running: "false"
    cdi.kubevirt.io/storage.condition.running.message: Import Complete
    cdi.kubevirt.io/storage.condition.running.reason: Completed
    cdi.kubevirt.io/storage.contentType: kubevirt
    cdi.kubevirt.io/storage.pod.phase: Succeeded
    cdi.kubevirt.io/storage.pod.restarts: "0"
    cdi.kubevirt.io/storage.populator.progress: 100.0%
    cdi.kubevirt.io/storage.preallocation.requested: "false"
    cdi.kubevirt.io/storage.usePopulator: "true"
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
    volume.kubernetes.io/storage-provisioner: nfs.csi.k8s.io
  creationTimestamp: "2024-08-26T22:41:54Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    alerts.k8s.io/KubePersistentVolumeFillingUp: disabled
    app: containerized-data-importer
    app.kubernetes.io/component: storage
    app.kubernetes.io/managed-by: cdi-controller
    app.kubernetes.io/part-of: hyperconverged-cluster
    app.kubernetes.io/version: 4.16.0
    kubevirt.io/created-by: 6764808e-f2ed-49b6-bfb0-76ad746363a8
  name: vm01-root
  namespace: default
  ownerReferences:
  - apiVersion: cdi.kubevirt.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: DataVolume
    name: vm01-root
    uid: 85538371-4b3c-49a6-b226-fb2cfbb17aad
  resourceVersion: "24897"
  uid: 152dfdb4-599d-4643-b52f-57212cdd8554

${pvc.metadata.namespace}/${pvc.metadata.name} unintuitively yields default/prime-152dfdb4-599d-4643-b52f-57212cdd8554 here instead of default/vm01-root.


andyzhangx commented Aug 28, 2024

Can you share the CreateVolume-related logs from the CSI driver controller pod? From our e2e tests, "subDir":"${pvc.metadata.namespace}/${pvc.metadata.name}" is parsed correctly:

[pod/csi-nfs-controller-74fc79867-m6s42/nfs] I0827 23:53:55.727919 1 utils.go:110] GRPC call: /csi.v1.Controller/CreateVolume
[pod/csi-nfs-controller-74fc79867-m6s42/nfs] I0827 23:53:55.727939 1 utils.go:111] GRPC request: {"capacity_range":{"required_bytes":10737418240},"name":"pvc-424e942c-486c-4f6c-bd3a-6440a42b53ea","parameters":{"csi.storage.k8s.io/pv/name":"pvc-424e942c-486c-4f6c-bd3a-6440a42b53ea","csi.storage.k8s.io/pvc/name":"pvc-h9j2r","csi.storage.k8s.io/pvc/namespace":"nfs-6704","mountPermissions":"0755","onDelete":"archive","server":"nfs-server.default.svc.cluster.local","share":"/","subDir":"${pvc.metadata.namespace}/${pvc.metadata.name}"},"secrets":"stripped","volume_capabilities":[{"AccessType":{"Mount":{}},"access_mode":{"mode":7}}]}
...
[pod/csi-nfs-controller-74fc79867-m6s42/nfs] I0827 23:53:55.783217 1 utils.go:117] GRPC response: {"volume":{"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-424e942c-486c-4f6c-bd3a-6440a42b53ea","csi.storage.k8s.io/pvc/name":"pvc-h9j2r","csi.storage.k8s.io/pvc/namespace":"nfs-6704","mountPermissions":"0755","onDelete":"archive","server":"nfs-server.default.svc.cluster.local","share":"/","subDir":"nfs-6704/pvc-h9j2r"},"volume_id":"nfs-server.default.svc.cluster.local##nfs-6704/pvc-h9j2r#pvc-424e942c-486c-4f6c-bd3a-6440a42b53ea#archive"}}


jnm27 commented Aug 28, 2024

Here are the equivalent logs:

I0828 15:11:40.274597       1 utils.go:109] GRPC call: /csi.v1.Controller/CreateVolume
I0828 15:11:40.274621       1 utils.go:110] GRPC request: {"capacity_range":{"required_bytes":136348168127},"name":"pvc-3c9aeec9-39e0-45ee-ad2a-0cd1c6b3d661","parameters":{"csi.storage.k8s.io/pv/name":"pvc-3c9aeec9-39e0-45ee-ad2a-0cd1c6b3d661","csi.storage.k8s.io/pvc/name":"prime-3cbee8d9-0d84-4df7-820e-67fee7f20ac0","csi.storage.k8s.io/pvc/namespace":"default","server":"serverip","share":"/volumename","subDir":"procname/${pvc.metadata.namespace}/${pvc.metadata.name}"},"volume_capabilities":[{"AccessType":{"Mount":{"mount_flags":["nfsvers=4.1"]}},"access_mode":{"mode":5}}]}
...
GRPC response: {"volume":{"volume_context":{"csi.storage.k8s.io/pv/name":"pvc-3c9aeec9-39e0-45ee-ad2a-0cd1c6b3d661","csi.storage.k8s.io/pvc/name":"prime-3cbee8d9-0d84-4df7-820e-67fee7f20ac0","csi.storage.k8s.io/pvc/namespace":"default","server":"serverip","share":"/volumename","subDir":"procname/default/prime-3cbee8d9-0d84-4df7-820e-67fee7f20ac0"},"volume_id":"serverip#volumename#procname/default/prime-3cbee8d9-0d84-4df7-820e-67fee7f20ac0#pvc-3c9aeec9-39e0-45ee-ad2a-0cd1c6b3d661#"}}

As a reminder, this is the OpenShift fork (https://github.com/openshift/csi-driver-nfs); do you think this is a problem with that fork, or with OpenShift itself, rather than with the upstream driver here?

andyzhangx (Member) commented:

The PVC name passed to the CSI driver is "csi.storage.k8s.io/pvc/name":"prime-3cbee8d9-0d84-4df7-820e-67fee7f20ac0".


jnm27 commented Aug 29, 2024

Right. Where does that come from?

andyzhangx (Member) commented:

> Right. Where does that come from?

@jnm27 it's injected by the csi-provisioner


jnm27 commented Aug 29, 2024

OK... who provides the csi-provisioner? What's the fix? Sorry for all the questions.

andyzhangx (Member) commented:

It's https://github.com/kubernetes-csi/external-provisioner:

--extra-create-metadata: Enables the injection of extra PVC and PV metadata as parameters when calling CreateVolume on the driver (keys: "csi.storage.k8s.io/pvc/name", "csi.storage.k8s.io/pvc/namespace", "csi.storage.k8s.io/pv/name")
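For context, in a typical csi-driver-nfs controller Deployment this flag sits on the csi-provisioner sidecar container, roughly as in the excerpt below; the image tag and surrounding arguments are illustrative rather than copied from this cluster.

containers:
  - name: csi-provisioner
    image: registry.k8s.io/sig-storage/csi-provisioner:v4.0.0  # illustrative tag
    args:
      - "-v=2"
      - "--csi-address=$(ADDRESS)"
      - "--leader-election"
      - "--extra-create-metadata=true"  # injects csi.storage.k8s.io/pvc/name, pvc/namespace, pv/name into CreateVolume parameters
    env:
      - name: ADDRESS
        value: /csi/csi.sock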


jnm27 commented Aug 29, 2024

Well, I tried removing the --extra-create-metadata argument from the provisioner Deployment spec, and then the variables don't get replaced at all:

├── ${pvc.metadata.namespace}
│   └── ${pvc.metadata.name}
│       └── disk.img

You mean I should open a ticket at https://github.com/kubernetes-csi/external-provisioner instead?

andyzhangx (Member) commented:

Can you try the https://github.com/kubernetes-csi/csi-driver-nfs project? At least from the e2e test logs, this project works.


jnm27 commented Sep 2, 2024

I see the same behavior with this project at version 4.8.0.


jnm27 commented Sep 30, 2024

@andyzhangx any other thoughts? Do you think it's something specific to how OpenShift interfaces with the CSI driver, and that I should open a Red Hat ticket instead? Before going to them, though, it would be really useful to pinpoint exactly what OpenShift is doing wrong from this driver's perspective.
