Hi, I installed llama-cpp-python into a miniconda env (with some errors) using:

```shell
CUDACXX=/usr/local/cuda-12.3/bin/nvcc CMAKE_ARGS="-DLLAMA_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=native" FORCE_CMAKE=1 pip install -e . --no-cache-dir --force-reinstall --upgrade --verbose
```
This process takes about 20 minutes.
I'd like to "clone" this install into other conda envs without going through the hassle of recompiling. Those envs have slightly different versions of other packages, and I'd like an editable install of llama-cpp-python in each of them.
Ideally, I'd dump the llama.cpp .so binaries and the like into a single location (say /pkgs/llama-cpp-python/) and have all my miniconda envs use the same editable llama-cpp-python directory for the Python bits.
Because each conda env ultimately uses the same NVIDIA tools (in /usr/local/cuda/bin), I shouldn't need to recompile the binary .so files for each env; doing so wastes time and resources.
How do I go about this? Is it even possible? The key requirement is that every miniconda env shares the same .so files.
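For reference, this is roughly the setup I have in mind, as an untested sketch. The paths are illustrative, and I'm assuming llama-cpp-python honors a library-path override such as the `LLAMA_CPP_LIB` environment variable (I believe recent versions check it at import time, but I haven't verified this for every release):

```shell
# Sketch only: build once, then share the compiled libraries across envs.
# /pkgs/llama-cpp-python and the source path are placeholders I chose.

# 1. One-time CUDA build in a scratch env, then stash the resulting .so files:
mkdir -p /pkgs/llama-cpp-python
cp ~/src/llama-cpp-python/llama_cpp/*.so /pkgs/llama-cpp-python/

# 2. In each additional conda env, do an editable install of the Python bits.
#    (Assumption: this can be made to skip or reuse the native build step;
#    by default pip will still invoke the CMake build.)
pip install -e ~/src/llama-cpp-python --no-build-isolation

# 3. Point the loader at the shared binary instead of a per-env copy:
export LLAMA_CPP_LIB=/pkgs/llama-cpp-python/libllama.so
```

Whether step 2 can genuinely avoid recompilation is exactly what I'm unsure about; that's the part I'd like guidance on.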
Thank you.