
Missing Python module "tritonclient" when calling CallVariantsFromCffi.py #351

Open
PhoenixEmik opened this issue Dec 20, 2024 · 3 comments


@PhoenixEmik

Hello developers,

I've recently been working on deploying Clair3 on our HPC cluster. I encountered a missing tritonclient Python module when executing run_clair3.sh without the --disable_c_impl option, which is the code path that calls CallVariantsFromCffi.py.
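
A quick way to confirm the dependency gap, as a sketch assuming the same Python interpreter that run_clair3.sh uses:

```bash
# Check whether the tritonclient module is importable in the
# environment Clair3 runs under; on our cluster this fails.
python -c "import tritonclient"
# -> ModuleNotFoundError: No module named 'tritonclient'
```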

I believe this might be due to the absence of the NVIDIA Triton Inference Server or its related dependencies in our HPC environment.

Could you please confirm whether the NVIDIA Triton Inference Server is a mandatory requirement for running Clair3? If so, which version does Clair3 require?

It would be very helpful to document the NVIDIA Triton Inference Server requirement in the Clair3 documentation.

Thank you.

Regards,
Emik

@zhengzhenxian
Collaborator

Hi,

The NVIDIA Triton Inference Server is not mandatory. Are you testing the GPU mode? Please make sure the --use_gpu option is disabled, as it was only for internal testing purposes.
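
For reference, a minimal sketch of a CPU-mode invocation with --use_gpu left out; the file paths, platform, and model name below are placeholders, not a verified command line:

```bash
# Sketch of a CPU-mode Clair3 run; replace all paths and the
# platform/model values with your own.
./run_clair3.sh \
  --bam_fn=input.bam \
  --ref_fn=reference.fa \
  --threads=8 \
  --platform=ont \
  --model_path=./models/ont \
  --output=./clair3_output
# Note: --use_gpu is not passed, per the advice above.
```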

@PhoenixEmik
Author

Thanks for your response.

I intended to test GPU mode at a user's request. Although we can formally advise users not to use GPU mode while it isn't production-ready, there are cases where users genuinely need it for their testing or research purposes.

As a system administrator, I would like to meet the basic needs of users with different use cases. So if simply installing the NVIDIA Triton Inference Server resolves this issue, we can leave the choice to our users.

Regards,
Emik

@aquaskyline
Member

Yes, installing the Triton Inference Server solves the installation problem. It's just that the GPU mode was not intensively tested, so use it with caution.
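
For other admins hitting this: the missing module itself can be installed from PyPI; the package is tritonclient, and the [all] extra pulls in both HTTP and gRPC client support (quoting the brackets keeps some shells from globbing them):

```bash
# Install the Triton client libraries into the environment Clair3 uses.
pip install "tritonclient[all]"
```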
