rdmaSharedDevicePlugin:
  deploy: true
  resources:
    - name: rdma_shared_device_a
      ifNames: [ibs10, ibs11, ibs18, ibs19]

I have deployed the network-operator with the configuration shown above, and the node now advertises rdma/rdma_shared_device_a: 63.
I want to know whether these 63 resources evenly divide the bandwidth of these IB cards.
If a pod requests only 1 rdma/rdma_shared_device_a resource, how much bandwidth can that pod use?
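For context, this is how I consume the resource from a pod (a minimal sketch; the pod name, image, and command are illustrative, and the container just lists /dev/infiniband to show which RDMA device files the plugin mounts in):

apiVersion: v1
kind: Pod
metadata:
  name: rdma-test-pod              # hypothetical name, for illustration only
spec:
  restartPolicy: Never
  containers:
    - name: rdma-test
      image: ubuntu:22.04          # any image with a shell works here
      # Extended resources are set under limits (requests default to the same value).
      resources:
        limits:
          rdma/rdma_shared_device_a: 1
      # List the RDMA character devices mounted into the container, then idle.
      command: ["sh", "-c", "ls -l /dev/infiniband && sleep 3600"]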
Based on my understanding, it is merely a device plugin that exposes the host's InfiniBand (IB) devices to containers: the kubelet mounts them into the containers so the workloads inside can use them. It has no mechanism for bandwidth partitioning or rate limiting.
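If I understand the plugin correctly, the advertised count (63 here) is just the sharing limit, not a bandwidth fraction: k8s-rdma-shared-dev-plugin advertises rdmaHcaMax copies of the resource so that up to that many pods can be scheduled against the same devices. A sketch of the values, assuming the network-operator passes rdmaHcaMax through to the plugin config (the value here is illustrative):

rdmaSharedDevicePlugin:
  deploy: true
  resources:
    - name: rdma_shared_device_a
      ifNames: [ibs10, ibs11, ibs18, ibs19]
      # Upper bound on how many pods may share these devices concurrently;
      # it does not split or reserve bandwidth between them.
      rdmaHcaMax: 63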
The device plugin does not handle any BW allocation (nor am I familiar with a kernel interface that provides that for the same IB device mounted in different containers). If there is one, let me know :)
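One way to check this empirically (a sketch, not a supported procedure): start two pods that each request 1 rdma/rdma_shared_device_a, run ib_write_bw from the perftest package as a server in one and as a client in the other. If the plugin partitioned bandwidth, a single pod would be capped near link_rate/63; in practice each pair should reach roughly the NIC's line rate, with concurrent flows simply contending for it. The image name below is a placeholder for any image with perftest installed:

apiVersion: v1
kind: Pod
metadata:
  name: ib-bw-server                 # hypothetical; run a matching client pod the same way
spec:
  restartPolicy: Never
  containers:
    - name: perftest
      image: my-registry/perftest:latest   # placeholder image with perftest tools
      securityContext:
        capabilities:
          add: ["IPC_LOCK"]          # perftest pins memory for RDMA registration
      resources:
        limits:
          rdma/rdma_shared_device_a: 1
      # Server side; the client pod runs "ib_write_bw <server-pod-ip>" instead.
      command: ["ib_write_bw"]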