Hello developers,

I'm working on deploying Clair3 on our HPC cluster. I encountered a missing `tritonclient` module error when executing `run_clair3.sh` without the `--disable_c_impl` option, i.e. when the pipeline calls `CallVariantsFromCffi.py`.
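For reference, a minimal sketch of the kind of invocation involved; the paths, platform, and model name below are placeholders for illustration, not our actual setup:

```bash
# Placeholder invocation that hits the missing-tritonclient error in our
# environment (BAM, reference, platform, and model paths are illustrative only):
./run_clair3.sh \
  --bam_fn=sample.bam \
  --ref_fn=reference.fa \
  --threads=8 \
  --platform=ont \
  --model_path=./models/ont \
  --output=./clair3_output

# Adding --disable_c_impl avoids the CallVariantsFromCffi.py code path where
# the tritonclient import happens:
./run_clair3.sh \
  --bam_fn=sample.bam \
  --ref_fn=reference.fa \
  --threads=8 \
  --platform=ont \
  --model_path=./models/ont \
  --output=./clair3_output \
  --disable_c_impl
```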
I believe this might be due to the absence of the NVIDIA Triton Inference Server or its related dependencies in our HPC environment.
Could you please confirm whether the NVIDIA Triton Inference Server is a mandatory requirement for running Clair3? If so, which version of the NVIDIA Triton Inference Server is required by Clair3?
It would be very helpful to mention the NVIDIA Triton Inference Server requirement in the Clair3 documentation.
Thank you.
Regards,
Emik
NVIDIA Triton Inference Server is not mandatory. Are you testing the GPU mode? Please make sure the `--use_gpu` option is disabled, as it is only for internal testing purposes.
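As a quick sanity check (a sketch; adapt to your environment), you can confirm whether the failure comes from the missing Python package rather than from Clair3 itself:

```bash
# Check whether the tritonclient Python package is importable in the same
# environment that runs Clair3; a non-zero exit means it is not installed.
python3 -c "import tritonclient" \
  && echo "tritonclient is available" \
  || echo "tritonclient is missing"
```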
I intended to test GPU mode at a user's request. Although we can formally advise users not to use GPU mode if it isn't ready for production use yet, there are cases where users do need GPU mode for their testing or research purposes.
As a system admin, I would like to meet the basic needs of users with different workflows. So, if simply installing the NVIDIA Triton Inference Server resolves this issue, we can leave the choice to our users.
Yes, installing the Triton Inference Server solves the installation problem. It's just that GPU mode was not intensively tested, so use it with caution.
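For reference, one way to pull in the client-side package that the import refers to is to install NVIDIA's `tritonclient` Python bindings into the environment Clair3 uses (a sketch; the extras and the version you need may differ, see the tritonclient package documentation):

```bash
# Install NVIDIA's tritonclient Python package; the [all] extra includes
# both the HTTP and gRPC client implementations.
pip install "tritonclient[all]"
```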