Is the issue report properly structured and detailed with version numbers?
Is this for Kubeflow development?
Would you like to work on this issue?
You can join the CNCF Slack and access our meetings at the Kubeflow Community website. Our channel on the CNCF Slack is #kubeflow-platform.
Version
master
Describe your issue
I have created a script to set resource limits, including storage, for new namespaces in our Minikube cluster, as shown below:
TEAM_NAME=${3:-default-team}   # namespace/team name (positional arg 3)
CPU_LIMIT=${4:-1}              # CPU limit per namespace
MEMORY_LIMIT=${5:-1Gi}         # memory limit per namespace
GPU_LIMIT=${6:-0}              # GPU limit per namespace
STORAGE_LIMIT=${7:-1Gi}        # storage quota per namespace
USER_PASSWORD=${9:-}           # optional user password (positional arg 9)
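A minimal sketch of the kind of ResourceQuota such a script might render from these variables (the namespace name, quota object name, and manifest layout here are assumptions for illustration, not the actual script):

```shell
# Hypothetical sketch: render a ResourceQuota from the script's variables.
NAMESPACE=default-team
CPU_LIMIT=1
MEMORY_LIMIT=1Gi
STORAGE_LIMIT=1Gi

cat <<EOF > quota.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: ${NAMESPACE}-quota
  namespace: ${NAMESPACE}
spec:
  hard:
    limits.cpu: "${CPU_LIMIT}"
    limits.memory: ${MEMORY_LIMIT}
    requests.storage: ${STORAGE_LIMIT}   # counts PersistentVolumeClaim requests only
EOF

# kubectl apply -f quota.yaml   # would apply it against the cluster
grep -q "requests.storage: 1Gi" quota.yaml && echo "storage cap rendered"
```

Note that `requests.storage` in a ResourceQuota only accounts for PersistentVolumeClaim requests; writes to a container's own filesystem, such as a `.cache` directory, are not charged against it.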
The script lets me set CPU, memory, GPU, and storage limits for each namespace, with a storage limit of 1Gi by default. However, when users download models from Hugging Face, the downloads exceed the specified storage limit: the models are written to a .cache directory, which bypasses the storage limits set for the namespace.
I need to ensure that the total storage, including the models downloaded from Hugging Face, does not exceed the storage limit specified for the namespace.
Could you suggest a way to enforce this storage cap to include Hugging Face models or any way to restrict the .cache directory usage to stay within the namespace storage limits?
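One direction worth exploring (a sketch, not a confirmed fix): Kubernetes accounts writes to a container's filesystem as ephemeral storage, so an `ephemeral-storage` limit, or an `emptyDir` volume with a `sizeLimit` mounted at the Hugging Face cache path, can cap the `.cache` growth. The pod name, image, and mount path below are placeholders; `HF_HOME` is the environment variable `huggingface_hub` uses to relocate its cache:

```shell
# Sketch: cap the Hugging Face cache with an emptyDir sizeLimit and an
# ephemeral-storage limit. Pod name, image, and paths are placeholders.
cat <<EOF > cache-capped-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: hf-download
spec:
  containers:
  - name: app
    image: python:3.11-slim
    env:
    - name: HF_HOME                # redirect the Hugging Face cache here
      value: /hf-cache
    resources:
      limits:
        ephemeral-storage: 1Gi     # counts container-filesystem writes
    volumeMounts:
    - name: hf-cache
      mountPath: /hf-cache
  volumes:
  - name: hf-cache
    emptyDir:
      sizeLimit: 1Gi               # pod is evicted if the cache exceeds this
EOF
# kubectl apply -f cache-capped-pod.yaml
grep -q "sizeLimit: 1Gi" cache-capped-pod.yaml && echo "cache cap rendered"
```

With this shape, the cache lives on a volume whose size the kubelet enforces, rather than on unaccounted container scratch space.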
Steps to reproduce the issue
Create a namespace with storage limits using the script above.
Deploy applications that require downloading models from Hugging Face in the namespace.
Ensure the namespace storage limit is set to a specific value (e.g., 1Gi).
Download a Hugging Face model from within the namespace.
Check the total storage usage within the namespace and observe that it exceeds the specified limit, because the models stored in the .cache directory are not counted against the cap.
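The check in the last step can be sketched locally with a temp directory standing in for the pod's cache; in-cluster it would be the equivalent of `kubectl exec -n <namespace> <pod> -- du -sh /root/.cache/huggingface` (the pod name and cache path are assumptions about the default Hugging Face layout):

```shell
# Sketch of the storage check, using a temp dir as a stand-in for the
# pod's .cache directory (the real check would run via kubectl exec).
CACHE_DIR=$(mktemp -d)
dd if=/dev/zero of="$CACHE_DIR/model.bin" bs=1024 count=2048 2>/dev/null

USED_KB=$(du -sk "$CACHE_DIR" | cut -f1)
echo "cache usage: ${USED_KB} KiB"
[ "$USED_KB" -ge 2048 ] && echo "usage exceeds 2048 KiB threshold"
```

Comparing this figure against the namespace's storage quota makes the overrun visible even though the quota itself never flagged it.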
Put here any screenshots or videos (optional)
No response