ESXi 6.5 limits each VM to a maximum of 60 virtual disks.
The Kubernetes scheduler is not aware of this limitation and does not account for the number of virtual disks already attached to a VM. When pods using dynamically provisioned vSphere storage class persistent volumes are scheduled, additional virtual disks are attached regardless of how many the node already has.
As a result, no matter how large the Kubernetes cluster is, only 60 vSphere storage class volumes can ever be used safely across the whole cluster. If the scheduler picks a node that already has 60 virtual disks attached, the deployment fails and is not rescheduled onto a different node.
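One way to confirm how many virtual disks a given node VM already carries is to query vSphere directly. The following is a rough govmomi sketch; the GOVC_URL environment variable and the inventory path are placeholder assumptions, not anything prescribed by this issue.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net/url"
	"os"

	"github.com/vmware/govmomi"
	"github.com/vmware/govmomi/find"
	"github.com/vmware/govmomi/vim25/types"
)

func main() {
	ctx := context.Background()

	// Placeholder connection details, e.g. https://user:pass@vcenter.example.com/sdk
	u, err := url.Parse(os.Getenv("GOVC_URL"))
	if err != nil {
		log.Fatal(err)
	}

	// Connect to vCenter (insecure TLS for brevity).
	c, err := govmomi.NewClient(ctx, u, true)
	if err != nil {
		log.Fatal(err)
	}

	// Look up the node VM by its inventory path (hypothetical path).
	finder := find.NewFinder(c.Client, true)
	vm, err := finder.VirtualMachine(ctx, "/Datacenter/vm/k8s-node-1")
	if err != nil {
		log.Fatal(err)
	}

	// Count the virtual disk devices currently attached to the VM.
	devices, err := vm.Device(ctx)
	if err != nil {
		log.Fatal(err)
	}
	disks := devices.SelectByType((*types.VirtualDisk)(nil))

	fmt.Printf("%s has %d virtual disks attached (ESXi 6.5 limit: 60)\n", vm.Name(), len(disks))
}
```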
The maximum scale limit documentation (https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/maximum-scale-limit.html) is misleading in this regard and makes no mention of the limitation.
To fix this issue, the vSphere driver needs to expose and enforce a per-node limit on attachable volumes so the scheduler can take it into account. That could be done either by rewriting the in-tree driver or by adopting CSI as the storage interface, which supports reporting such limits.
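For context, the CSI spec already provides a hook for this: a node plugin can advertise its attach limit through the NodeGetInfo RPC's max_volumes_per_node field, and kubelet surfaces that value so the scheduler can avoid nodes that are already full. Below is a minimal Go sketch of that mechanism; the nodeServer type, node ID, and the 59-disk value (one slot reserved for the boot disk) are assumptions for illustration, not code from any shipped vSphere driver.

```go
package main

import (
	"context"
	"fmt"

	"github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeServer is a hypothetical node-plugin skeleton used only for this sketch.
type nodeServer struct {
	nodeID string
}

// maxDisksPerNode mirrors the ESXi 6.5 limit of 60 virtual disks per VM,
// leaving one slot for the node's own boot disk (an assumption).
const maxDisksPerNode = 59

// NodeGetInfo is the CSI RPC through which a driver advertises its per-node
// attach limit; kubelet records this value for the node so the scheduler can
// skip nodes whose attached-volume count would exceed it.
func (ns *nodeServer) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	return &csi.NodeGetInfoResponse{
		NodeId:            ns.nodeID,
		MaxVolumesPerNode: maxDisksPerNode,
	}, nil
}

func main() {
	ns := &nodeServer{nodeID: "example-node"}
	resp, _ := ns.NodeGetInfo(context.Background(), &csi.NodeGetInfoRequest{})
	fmt.Printf("node %s advertises a limit of %d volumes\n", resp.NodeId, resp.MaxVolumesPerNode)
}
```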