Description
Set up a download mechanism (if possible) wherein the big wheels:
llama_cpp_python_cuda_tensorcores
exllamav2
llama_cpp_python_cuda
are downloaded once and then hash-checked locally instead of being redownloaded every single time.
This should be done because these wheels are about 1 GB combined and pip fails to cache them, seemingly because they are hosted on GitHub.
I will admit this seems to be a general issue with pip, as discussed here: https://discuss.python.org/t/what-are-the-caching-rules-for-wheels-installed-from-urls/21594/2
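As a rough illustration of the idea, here is a minimal sketch of a download-once helper: it keeps each wheel in a local cache directory, verifies its SHA-256 against a pinned digest, and only downloads when the cached copy is missing or corrupt. The function and cache-directory names (`fetch_wheel`, `~/.cache/big-wheels`) are hypothetical, not part of any existing tooling.

```python
import hashlib
import urllib.request
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def fetch_wheel(url: str, expected_sha256: str,
                cache_dir: Path = Path("~/.cache/big-wheels").expanduser()) -> Path:
    """Download a wheel into cache_dir once; on later calls, hash-check the
    cached copy and skip the download if it matches the pinned digest."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    target = cache_dir / url.rsplit("/", 1)[-1]
    # Cached copy present and intact: reuse it, no network traffic.
    if target.exists() and sha256_of(target) == expected_sha256:
        return target
    # Otherwise (re)download and verify before trusting the file.
    urllib.request.urlretrieve(url, target)
    if sha256_of(target) != expected_sha256:
        target.unlink()
        raise ValueError(f"hash mismatch for {url}")
    return target
```

The returned local path could then be handed to `pip install` directly, so pip's own (unreliable, in this case) URL caching never comes into play.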