GPU not activating? #799
Comments
Please post more detailed information: the startup and inference logs under both CUDA and CPU.
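For reference, a minimal sketch of how such a CUDA-vs-CPU comparison could be captured, assuming the ChatTTS Python API quoted later in this thread (`ChatTTS.Chat()`, `chat.load(compile=False, device=...)`, `chat.infer([...])`); the test sentence and sample count are placeholders:

```python
# Hypothetical timing comparison; assumes the ChatTTS API quoted in this thread.
import logging
import time

import torch
import ChatTTS

logging.basicConfig(level=logging.INFO)  # surface ChatTTS startup logs


def time_inference(device: str, text: str = "Hello, this is a ChatTTS timing test.") -> float:
    """Load the model on the given device, run one inference, return elapsed seconds."""
    chat = ChatTTS.Chat()
    chat.load(compile=False, device=device)
    start = time.perf_counter()
    chat.infer([text])
    if device == "cuda":
        torch.cuda.synchronize()  # wait for queued GPU work before stopping the clock
    return time.perf_counter() - start


if __name__ == "__main__":
    print("cuda available:", torch.cuda.is_available())
    print("cpu  :", time_inference("cpu"), "s")
    if torch.cuda.is_available():
        print("cuda :", time_inference("cuda"), "s")
```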
Hi @fumiama, I have also encountered this issue:
output:
Could you please help me review this? Thanks in advance~
Your ChatTTS version seems a bit old. Please upgrade to the latest version first and try again.
@fumiama Hello, I'm using the code from the main branch. Should I switch branches, or install ChatTTS via pip?
No need. If you are on the latest code, it is already using the GPU normally, and that is simply how fast it is. You can also try enabling vLLM, but this feature is still experimental and quite a few operations are not yet supported. See the README for details.
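For illustration, a hedged sketch of how the experimental vLLM backend might be switched on; the `use_vllm` flag name is an assumption based on recent ChatTTS releases and may differ, so the README remains the authoritative reference:

```python
# Sketch only: use_vllm is an assumed flag name for the experimental vLLM
# backend in recent ChatTTS versions; verify against the README before use.
import ChatTTS

chat = ChatTTS.Chat()
chat.load(compile=False, device="cuda", use_vllm=True)  # assumed experimental flag
wavs = chat.infer(["A quick vLLM speed test."])
```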
@fumiama Oh, interesting. I previously used PaddleSpeech, whose inference finished in under a second, which made me assume something was wrong on my side. Thanks. Next I'll try the CUDA build of torch with the device set to CPU to check the inference speed, and then install the CPU-only build of torch to confirm the issue. Many thanks!
Hello @fumiama, there seems to be little performance difference between using CUDA and not using it:
Below is
It looks like CUDA is actually slower than the CPU. Check in Task Manager whether the GPU is being fully utilized.
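Besides Task Manager, GPU utilization during inference can also be polled from Python via the standard `nvidia-smi` CLI (requires an NVIDIA driver installation); for example, while inference runs in another terminal:

```python
# Poll GPU utilization and memory roughly once per second for ~10 seconds.
import subprocess
import time

for _ in range(10):
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())
    time.sleep(1)
```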
I have a 3060 12 GB. When running, torch.cuda.is_available() returns true, but inference always feels very slow to me. So I changed chat.load(compile=False,device="cuda") to chat.load(compile=False,device="cpu"), and both run at the same speed. How can I fix this?
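One way to separate a driver or hardware problem from a model-side bottleneck is a pure-PyTorch benchmark that does not involve ChatTTS at all: if the matrix multiply below is much faster on CUDA than on CPU, the GPU itself is working, and the similar ChatTTS timings point to the autoregressive decoding being the bottleneck. A sketch, with an arbitrary matrix size:

```python
# Pure-PyTorch sanity check: compare a large matrix multiply on CPU vs GPU.
import time

import torch


def bench(device: str, n: int = 4096, repeats: int = 10) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # make sure all queued GPU kernels finished
    return time.perf_counter() - start


print("cpu :", bench("cpu"), "s")
if torch.cuda.is_available():
    print("cuda:", bench("cuda"), "s")
```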