
Question about FastSpeech2 streaming inference #3850

Open
world1tree opened this issue Sep 18, 2024 · 1 comment
Comments

@world1tree

  1. When running FastSpeech2 inference, the batch size is set to 1. Does this mean the model finishes processing one request before starting the next? Or, because the server is async, can the model actually run inference for multiple requests concurrently? If it is the latter, is there still a meaningful performance gap compared with true batched inference?
  2. I also looked at other open-source TTS projects, and none of them seem to support batched inference. Is this because batched inference is harder to implement for TTS models than for LLMs? Or is it that batching would increase response time and hurt real-time performance?
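To make question 1 concrete, the gap between "async but batch size 1" and true batched inference can be sketched with a micro-batching worker: async concurrency alone still issues one model call per request, while a batching queue packs concurrently waiting requests into a single forward pass. This is only an illustrative sketch, not PaddleSpeech code; `synthesize_batch` is a hypothetical stand-in for a FastSpeech2 batched forward pass.

```python
import asyncio

def synthesize_batch(texts):
    # Hypothetical stand-in for a batched FastSpeech2 forward pass:
    # synthesizing N texts in one call is assumed cheaper than N calls.
    return [f"wav({t})" for t in texts]

async def micro_batch_worker(queue, max_batch=4, max_wait=0.01):
    """Collect concurrently arriving requests into one model call.

    With batch size 1, the server makes one forward pass per request;
    async only overlaps I/O, not the model compute. Micro-batching
    waits briefly (max_wait) to pack pending requests together.
    """
    loop = asyncio.get_running_loop()
    while True:
        batch = [await queue.get()]          # (text, future) pairs
        deadline = loop.time() + max_wait
        while len(batch) < max_batch:
            timeout = deadline - loop.time()
            if timeout <= 0:
                break
            try:
                batch.append(await asyncio.wait_for(queue.get(), timeout))
            except asyncio.TimeoutError:
                break
        wavs = synthesize_batch([t for t, _ in batch])
        for (_, fut), wav in zip(batch, wavs):
            fut.set_result(wav)              # wake the waiting request

async def request(queue, text):
    fut = asyncio.get_running_loop().create_future()
    await queue.put((text, fut))
    return await fut

async def main():
    queue = asyncio.Queue()
    worker = asyncio.create_task(micro_batch_worker(queue))
    # Three concurrent requests end up in a single synthesize_batch call.
    results = await asyncio.gather(*(request(queue, t) for t in "abc"))
    worker.cancel()
    return results
```

The trade-off the question alludes to is visible in `max_wait`: a larger window improves throughput (bigger batches) but adds up to `max_wait` of extra latency per request, which is why real-time streaming TTS services often keep batch size at 1.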
@Ray961123

Hello, and thanks for your interest in the PaddleSpeech open-source project. Sorry for the poor development experience; maintenance resources for the project are currently limited. You can try to solve this yourself by modifying the PaddleSpeech source code, or ask other developers in the open-source community for help. PaddlePaddle community channel: 飞桨AI Studio星河社区 (PaddlePaddle AI Studio community for AI learning and hands-on practice).
