Replies: 1 comment
-
I have handled it by only putting in the linked library on this line.
-
I have my own virtual env; all requirements are at their latest versions, and everything works fine with ROCm 6.0.1 on an RX 7600 XT. However, when I start the webui with python server.py --listen, it tries to load llama_cpp_python_cuda but fails. It says llama_cpp is already imported, and I have to restart the webui.
I have llama_cpp built with HIP installed, imported, and working fine, utilising the GPU and loading all layers onto it. I have also linked it as the CUDA lib in site-packages, so it imports fine as both llama_cpp and llama_cpp_python_cuda.
Now I am trying to edit llama_cpp_python_hijack.py (the Python 3.11 CPython bytecode in __pycache__).
Any pointers? How can I solve this?
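One possible direction, as a sketch rather than a confirmed fix: instead of symlinking the HIP build in site-packages or editing cached bytecode, the same package can be registered under a second name in sys.modules before the webui probes for it, so a later import of the alternative name resolves to the already-imported module instead of triggering a conflicting second import. The name llama_cpp_python_cuda is taken from the post above; the alias_module helper is purely illustrative.

```python
import importlib
import sys


def alias_module(real_name: str, alias: str) -> None:
    """Illustrative helper: import real_name and register the same module
    object under alias, so 'import <alias>' finds it in sys.modules
    instead of performing a fresh (and here, conflicting) import."""
    module = importlib.import_module(real_name)
    sys.modules[alias] = module


# Hypothetical usage for the setup described above -- names are assumptions:
# alias_module("llama_cpp", "llama_cpp_python_cuda")
```

Note that editing the .pyc in __pycache__ is unlikely to stick: CPython regenerates the cached bytecode from the corresponding .py source whenever the source is newer, so any change would need to go into llama_cpp_python_hijack.py itself (or run before it, as sketched here).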