-
We have similar problems. We need to ship all required .so files to our customers; we can't rely on what's pre-installed on the machine or be affected by it. This works for onnxruntime.so itself, but for the secondary .so files we need to be able to set the RPATH to `.` in onnxruntime.so so that it finds its onnxruntime_providers_cuda.so file next to itself, and the RPATH of onnxruntime_providers_cuda.so so that it finds cuDNN's .so files next to itself.

On Windows we have a similar problem: Windows now ships an old DirectML.dll that is not compatible with the latest onnxruntime.dll, but it will be found instead of the one we distribute if the executable is not in the same directory as the DLLs. In that case it seems we have to somehow rename the file from DirectML.dll and load it under the new name, via the onnxruntime CMake system. I can't see another solution for this.
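For "next to the library itself", the loader token is `$ORIGIN` (an RPATH of `.` is resolved against the process's current working directory instead). Below is a post-build sketch of that layout using `patchelf`, as an alternative to wiring it through the onnxruntime CMake files, assuming all shipped libraries end up in one directory:

```sh
# Make each shipped library search its own directory first. $ORIGIN expands
# at load time to the directory containing that library, so libonnxruntime.so
# finds libonnxruntime_providers_cuda.so next to itself, and the provider
# finds the cuDNN .so files next to itself.
patchelf --set-rpath '$ORIGIN' libonnxruntime.so
patchelf --set-rpath '$ORIGIN' libonnxruntime_providers_cuda.so

# Verify what the loader will use.
readelf -d libonnxruntime.so | grep -E 'RPATH|RUNPATH'
```

The same `$ORIGIN` value can also be baked in at build time through CMake's `BUILD_RPATH`/`INSTALL_RPATH` target properties.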
-
Hey,
I'd like to be able to point to a custom install location for CUDA and cuDNN (not `/usr/local/cuda`). I believed this should be possible with the `CUDA_PATH` and `cuDNN_PATH` env vars, but this doesn't seem to work. I'm using onnxruntime v1.13.1 and build using the following options:
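A sketch of this kind of invocation, assuming onnxruntime's `build.sh` with its `--cuda_home`/`--cudnn_home` flags; the `/opt/...` install paths are placeholders, not the exact options used:

```sh
# Sketch only: /opt/cuda-11.x and /opt/cudnn stand in for the real install paths.
export CUDA_PATH=/opt/cuda-11.x
export cuDNN_PATH=/opt/cudnn

# build.sh lets the CUDA and cuDNN locations be passed explicitly,
# instead of relying on /usr/local/cuda being present.
./build.sh --config Release --build_shared_lib --parallel \
           --use_cuda \
           --cuda_home "$CUDA_PATH" \
           --cudnn_home "$cuDNN_PATH"
```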
Building goes well, but when I try to create an inference session I get the following error:

Any idea why this doesn't work? The `${CUDA_PATH}/lib64/libcublasLt.so.11` file does exist. When I put the CUDA files in `/usr/local/cuda` it does work, even when `CUDA_PATH` is not equal to `/usr/local/cuda`.
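One way to see which copy the dynamic loader actually resolves (and why the one under `/usr/local/cuda` wins) is to inspect the provider library and trace the loader; a sketch, where the library path and `./your_app` are placeholders:

```sh
# Show where the loader resolves the cuBLAS libraries the CUDA provider
# links against (it consults RPATH/RUNPATH, LD_LIBRARY_PATH and the ld.so cache).
ldd ./build/Linux/Release/libonnxruntime_providers_cuda.so | grep -i cublas

# Trace the directories the loader tries for each dependency at run time;
# ./your_app stands in for whatever creates the inference session.
LD_DEBUG=libs ./your_app 2>&1 | grep -i cublaslt
```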
Here are the different env vars I've tried:
NOTE: running this fixes it:
but that is not a solution, since I'd like to install this on machines locally without modifying root-owned system files.
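A non-root alternative that leaves the system configuration untouched is to put the custom CUDA/cuDNN lib directories on `LD_LIBRARY_PATH` for the process that creates the session; again a sketch with placeholder paths:

```sh
# Have the loader search the custom CUDA/cuDNN install for this shell only;
# nothing under /etc needs to change. The /opt/... paths and ./your_app are placeholders.
export LD_LIBRARY_PATH="/opt/cuda-11.x/lib64:/opt/cudnn/lib:${LD_LIBRARY_PATH}"
./your_app
```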
Additionally I'd like to be able to do this for Windows as well.