This repository has been archived by the owner on Sep 9, 2024. It is now read-only.

Parallel infer on multiple GPUs #53

Open
air55555 opened this issue Aug 8, 2022 · 0 comments

Comments


air55555 commented Aug 8, 2022

It would be nice to make HCube inference faster by deploying the model in parallel across 2+ GPUs. Any options?
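One common pattern (a hypothetical sketch, not HCube's actual API — the function name `parallel_infer` and the model used are assumptions) is simple data parallelism: replicate the model on each visible GPU, split the batch into chunks, run each chunk on its own device, and concatenate the results. The sketch below falls back to CPU when no GPU is present so it stays runnable anywhere:

```python
# Hypothetical data-parallel inference sketch (assumed names, not HCube's API):
# replicate the model per device, shard the batch, gather results on CPU.
import copy
import torch
import torch.nn as nn

def parallel_infer(model: nn.Module, batch: torch.Tensor) -> torch.Tensor:
    # Use every visible GPU; fall back to a single CPU "device" otherwise.
    if torch.cuda.is_available():
        devices = [torch.device(f"cuda:{i}") for i in range(torch.cuda.device_count())]
    else:
        devices = [torch.device("cpu")]
    # One independent replica per device (weights are copied, not shared).
    replicas = [copy.deepcopy(model).to(d).eval() for d in devices]
    # Split the batch along dim 0; small batches may yield fewer chunks than devices.
    chunks = torch.chunk(batch, len(devices), dim=0)
    outputs = []
    with torch.no_grad():
        for replica, chunk, d in zip(replicas, chunks, devices):
            outputs.append(replica(chunk.to(d)).to("cpu"))
    return torch.cat(outputs, dim=0)
```

For a ready-made option, PyTorch's built-in `torch.nn.DataParallel` does the same replicate-and-scatter step in a single wrapper, and `DistributedDataParallel` scales it across processes; either may be worth trying before hand-rolling the sharding above.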
