
Pod cannot be deleted if the S3 PVC fails to mount #309

Open
geniass opened this issue Dec 5, 2024 · 3 comments

@geniass

geniass commented Dec 5, 2024

/kind bug

NOTE: If this is a filesystem related bug, please take a look at the Mountpoint repo to submit a bug report

What happened?
Mounting an S3 volume into a pod fails (FailedMount Kubernetes event). The kubelet is then unable to delete the pod because the vol_data.json file is missing, and the pod hangs in the Terminating state forever.

Kubelet logs get flooded with:

Dec 05 10:46:44 gin k3s[3936577]: E1205 10:46:44.663100 3936577 reconciler_common.go:158] "operationExecutor.UnmountVolume failed (controllerAttachDetachEnabled true) for volume \"s3-mybucket-storage\" (UniqueName: \"kubernetes.io/csi/s3.csi.aws.com^s3-csi-driver-volume-mybucket-storage\") pod \"ee13ebc2-e424-47da-aa81-c8f627f69782\" (UID: \"ee13ebc2-e424-47da-aa81-c8f627f69782\") : UnmountVolume.NewUnmounter failed for volume \"s3-mybucket-storage\" (UniqueName: \"kubernetes.io/csi/s3.csi.aws.com^s3-csi-driver-volume-mybucket-storage\") pod \"ee13ebc2-e424-47da-aa81-c8f627f69782\" (UID: \"ee13ebc2-e424-47da-aa81-c8f627f69782\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/ee13ebc2-e424-47da-aa81-c8f627f69782/volumes/kubernetes.io~csi/s3-mybucket-storage/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/ee13ebc2-e424-47da-aa81-c8f627f69782/volumes/kubernetes.io~csi/s3-mybucket-storage/vol_data.json]: open /var/lib/kubelet/pods/ee13ebc2-e424-47da-aa81-c8f627f69782/volumes/kubernetes.io~csi/s3-mybucket-storage/vol_data.json: no such file or directory" err="UnmountVolume.NewUnmounter failed for volume \"s3-mybucket-storage\" (UniqueName: \"kubernetes.io/csi/s3.csi.aws.com^s3-csi-driver-volume-mybucket-storage\") pod \"ee13ebc2-e424-47da-aa81-c8f627f69782\" (UID: \"ee13ebc2-e424-47da-aa81-c8f627f69782\") : kubernetes.io/csi: unmounter failed to load volume data file [/var/lib/kubelet/pods/ee13ebc2-e424-47da-aa81-c8f627f69782/volumes/kubernetes.io~csi/s3-mybucket-storage/mount]: kubernetes.io/csi: failed to open volume data file [/var/lib/kubelet/pods/ee13ebc2-e424-47da-aa81-c8f627f69782/volumes/kubernetes.io~csi/s3-mybucket-storage/vol_data.json]: open /var/lib/kubelet/pods/ee13ebc2-e424-47da-aa81-c8f627f69782/volumes/kubernetes.io~csi/s3-mybucket-storage/vol_data.json: no such file or directory"
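The unmounter failure above hinges on a single condition: vol_data.json is gone from the volume's directory, so the kubelet cannot construct an unmounter. A minimal shell sketch of that check (the directory argument and the demo paths are placeholders, not taken from the cluster above; on a real node the argument would be /var/lib/kubelet/pods/&lt;pod-uid&gt;/volumes/kubernetes.io~csi):

```shell
# check_csi_volumes DIR
# Lists each CSI volume directory under DIR and reports whether it still
# contains the vol_data.json file the kubelet's unmounter requires.
check_csi_volumes() {
    dir="$1"
    for vol in "$dir"/*/; do
        [ -d "$vol" ] || continue
        if [ -f "${vol}vol_data.json" ]; then
            printf 'ok:      %s\n' "$vol"
        else
            printf 'missing: %s\n' "$vol"
        fi
    done
}

# Demo against a throwaway directory (a real check would point at the
# pod's kubernetes.io~csi directory under /var/lib/kubelet/pods).
demo=$(mktemp -d)
mkdir -p "$demo/volA" "$demo/volB"
touch "$demo/volA/vol_data.json"
check_csi_volumes "$demo"
rm -rf "$demo"
```

Any volume reported as "missing" here would trip the same `failed to open volume data file` error as in the kubelet log above.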

What you expected to happen?
The S3 volume should be unmounted and the pod should terminate normally.

How to reproduce it (as minimally and precisely as possible)?
S3 PV and PVC:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-mybucket-storage
spec:
  accessModes:
  - ReadOnlyMany
  capacity:
    storage: 1200Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: s3-mybucket-storage
    namespace: coder
  csi:
    driver: s3.csi.aws.com
    volumeAttributes:
      bucketName: mybucket-af-south-1-storage
    volumeHandle: s3-csi-driver-volume-mybucket-storage
  mountOptions:
  - region af-south-1
  - uid=1000
  - gid=1000
  - allow-other
  - read-only
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-mybucket-storage
  namespace: coder
spec:
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 1200Gi
  storageClassName: ""
  volumeMode: Filesystem
  volumeName: s3-mybucket-storage

Create a pod that mounts the S3 PVC:

apiVersion: v1
kind: Pod
metadata:
  name: s3-mybucket-storage-pod
  namespace: coder
spec:
  containers:
  - name: s3-mybucket-storage-container
    image: busybox
    command: ["sh", "-c", "while true; do sleep 3600; done"]
    volumeMounts:
    - name: s3-mybucket-storage
      mountPath: /mnt/s3
  volumes:
  - name: s3-mybucket-storage
    persistentVolumeClaim:
      claimName: s3-mybucket-storage

Then describe the pod:

kubectl -n coder describe pod s3-mybucket-storage-pod

There will be a Warning event showing that the S3 PVC could not be mounted:

   Warning  FailedMount  6s    kubelet            MountVolume.SetUp failed for volume "s3-teneo-storage" : rpc error: code = Internal desc = Could not mount "isazi-hudson-teneo-af-south-1-storage" at "/var/lib/kubelet/pods/3dc25df6-2e3e-4c79-800d-ec4ad50ca70c/volumes/kubernetes.io~csi/s3-teneo-storage/mount": Mount failed: Failed to start service output: Error: mount point /var/lib/kubelet/pods/3dc25df6-2e3e-4c79-800d-ec4ad50ca70c/volumes/kubernetes.io~csi/s3-teneo-storage/mount is already mounted Error: Failed to create mount process

Anything else we need to know?:

Environment

  • Kubernetes version (use kubectl version):
Client Version: v1.30.6+k3s1
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.6+k3s1
  • Driver version: v1.10.0
@unexge
Contributor

unexge commented Dec 17, 2024

Hey @geniass, sorry for the late response. I've applied your manifest (by replacing your bucket and region with mine) and it worked fine for me.

The error message you got suggests that there was a faulty mount left over from a previous run that somehow never got unmounted. We perform unmounting in our NodeUnpublishVolume CSI method; it seems there was an error in this call, and it resulted in such an outcome.

For us to debug your issue, would you be able to share logs from the CSI driver from when you experience this error? You can get the logs via kubectl -n kube-system logs -lapp=s3-csi-node -c s3-plugin; log lines starting with NodePublishVolume and NodeUnpublishVolume would be especially useful to understand what's going on.

@geniass
Author

geniass commented Dec 20, 2024

Sorry, I forgot to respond to this.

This is pretty much all I can see in the logs:

I1218 09:01:31.317148       1 node.go:207] NodeGetCapabilities: called with args
I1218 08:59:59.653939       1 node.go:207] NodeGetCapabilities: called with args
E1218 08:59:50.689314       1 driver.go:113] GRPC error: rpc error: code = Internal desc = Could not unmount "/var/lib/kubelet/pods/2c8efe52-28c0-4c75-be7a-ecb016d838a6/volumes/kubernetes.io~csi/s3-customer-storage/mount": Unmount failed: Non zero status code: 32 unmount output: umount: /var/lib/kubelet/pods/2c8efe52-28c0-4c75-be7a-ecb016d838a6/volumes/kubernetes.io~csi/s3-customer-storage/mount: not mounted.
I1218 08:59:50.661856       1 node.go:189] NodeUnpublishVolume: unmounting /var/lib/kubelet/pods/2c8efe52-28c0-4c75-be7a-ecb016d838a6/volumes/kubernetes.io~csi/s3-customer-storage/mount
I1218 08:59:50.660447       1 node.go:163] NodeUnpublishVolume: called with args volume_id:"s3-csi-driver-volume-customer-storage" target_path:"/var/lib/kubelet/pods/2c8efe52-28c0-4c75-be7a-ecb016d838a6/volumes/kubernetes.io~csi/s3-customer-storage/mount"
I1218 08:58:52.705911       1 node.go:207] NodeGetCapabilities: called with args
E1218 08:57:48.580081       1 driver.go:113] GRPC error: rpc error: code = Internal desc = Could not unmount "/var/lib/kubelet/pods/2c8efe52-28c0-4c75-be7a-ecb016d838a6/volumes/kubernetes.io~csi/s3-customer-storage/mount": Unmount failed: Non zero status code: 32 unmount output: umount: /var/lib/kubelet/pods/2c8efe52-28c0-4c75-be7a-ecb016d838a6/volumes/kubernetes.io~csi/s3-customer-storage/mount: not mounted.
I1218 08:57:48.552436       1 node.go:189] NodeUnpublishVolume: unmounting /var/lib/kubelet/pods/2c8efe52-28c0-4c75-be7a-ecb016d838a6/volumes/kubernetes.io~csi/s3-customer-storage/mount
I1218 08:57:48.550906       1 node.go:163] NodeUnpublishVolume: called with args volume_id:"s3-csi-driver-volume-customer-storage" target_path:"/var/lib/kubelet/pods/2c8efe52-28c0-4c75-be7a-ecb016d838a6/volumes/kubernetes.io~csi/s3-customer-storage/mount"
I1218 08:57:13.529015       1 node.go:207] NodeGetCapabilities: called with args
E1218 08:55:46.454495       1 driver.go:113] GRPC error: rpc error: code = Internal desc = Could not unmount "/var/lib/kubelet/pods/2c8efe52-28c0-4c75-be7a-ecb016d838a6/volumes/kubernetes.io~csi/s3-customer-storage/mount": Unmount failed: Non zero status code: 32 unmount output: umount: /var/lib/kubelet/pods/2c8efe52-28c0-4c75-be7a-ecb016d838a6/volumes/kubernetes.io~csi/s3-customer-storage/mount: not mounted.
I1218 08:55:46.427474       1 node.go:189] NodeUnpublishVolume: unmounting /var/lib/kubelet/pods/2c8efe52-28c0-4c75-be7a-ecb016d838a6/volumes/kubernetes.io~csi/s3-customer-storage/mount
I1218 08:55:46.423286       1 node.go:163] NodeUnpublishVolume: called with args volume_id:"s3-csi-driver-volume-customer-storage" target_path:"/var/lib/kubelet/pods/2c8efe52-28c0-4c75-be7a-ecb016d838a6/volumes/kubernetes.io~csi/s3-customer-storage/mount"

@unexge
Contributor

unexge commented Dec 20, 2024

It looks like the mount operation in NodePublishVolume failed. Do you have any logs from that operation? Also, do you see any mounts if you run mount | grep mountpoint-s3 on your node?
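The mount | grep mountpoint-s3 check can be wrapped in a small helper that prints just the mount target paths, making a stale entry for the failing pod's volume easy to spot. A sketch (the sample mount line in the demo is illustrative, not taken from the node in question):

```shell
# list_mountpoint_s3 reads `mount` output on stdin and prints the target
# paths (third field of "SOURCE on TARGET type ...") of mountpoint-s3
# mounts, filtering out everything else.
list_mountpoint_s3() {
    grep 'mountpoint-s3' | awk '{ print $3 }'
}

# On a node the usage would be:  mount | list_mountpoint_s3
# Demo with captured output instead of live mounts:
printf 'mountpoint-s3 on /var/lib/kubelet/pods/UID/volumes/kubernetes.io~csi/vol/mount type fuse (rw)\n' \
    | list_mountpoint_s3
```

A path printed here that belongs to a pod stuck in Terminating would point at the leftover mount the error messages describe.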
