chatTTS Support #8
I looked into ChatTTS integration a while ago. It isn't really high on my priority list because it's a very resource-consuming TTS and can hardly run in real time unless you have a powerful Nvidia GPU. It doesn't seem very difficult, though, so I will probably implement it later. However, I will probably not use ChatTTS-ui, as I don't see why I should use it instead of the original ChatTTS. The original ChatTTS seems fine and already contains some code to host an API server.
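For reference, this is roughly what driving the original ChatTTS package directly looks like. The method names follow the 2noise/ChatTTS README and may differ between versions (newer releases use `chat.load()` instead of `chat.load_models()`), so treat this as a sketch rather than a confirmed interface:

```python
# Minimal sketch of calling the original ChatTTS package directly (no ChatTTS-ui).
# Method names are taken from the 2noise/ChatTTS README and may vary by version.
import ChatTTS
import torch
import torchaudio

chat = ChatTTS.Chat()
chat.load_models()  # downloads/loads the models; slow without a strong GPU

texts = ["Hello, this is a ChatTTS test sentence."]
wavs = chat.infer(texts)  # list of waveforms (numpy arrays), 24 kHz per the README

torchaudio.save("output.wav", torch.from_numpy(wavs[0]), 24000)
```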
I was able to run ChatTTS-UI on an Intel GPU and use the API to drive it from my hardware.
Does the original ChatTTS not work on Intel GPUs? Are there any significant advantages to using ChatTTS-UI over the original ChatTTS?
I think there is more flexibility in connecting through the API. Even with a self-built backend like mine, the API is exactly the same; I verified this with the older version of llm-vtuber and it works fine.
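To illustrate the point about the client only depending on the HTTP contract, here is a rough sketch of calling a ChatTTS-ui-style endpoint. The host, port, path, and field name are assumptions based on the ChatTTS-ui README; a self-built backend just needs to expose the same contract:

```python
# Rough sketch of a client hitting a ChatTTS-ui-style HTTP endpoint.
# URL, port (9966), and the "text" form field are assumed from the ChatTTS-ui README.
import requests

resp = requests.post(
    "http://127.0.0.1:9966/tts",           # assumed default host/port of ChatTTS-ui
    data={"text": "Hello from the API."},  # assumed form field name
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # expected to describe the generated audio file(s)
```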
The following zip file contains a Python implementation based on ChatTTS (https://github.com/2noise/chattts) and an API implementation based on ChatTTS-ui (https://github.com/jianchang512/ChatTTS-ui). It was written for an old version, though. Can it be modified and integrated into the new version?
chatTTS.zip
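One possible shape for that integration is a thin adapter around the attached API client. The class below is purely illustrative: the method name, the response JSON layout, and the default URL are assumptions, and the real integration would need to match whatever TTS interface the current codebase actually defines:

```python
# Hypothetical adapter wrapping a ChatTTS-ui-style backend for the newer project
# version. The synthesize() signature and the JSON fields ("audio_files", "url")
# are assumptions, not the project's or ChatTTS-ui's confirmed interface.
import requests


class ChatTTSAPIAdapter:
    def __init__(self, base_url: str = "http://127.0.0.1:9966"):  # assumed default
        self.base_url = base_url

    def synthesize(self, text: str) -> bytes:
        """Send text to the ChatTTS backend and return the raw audio bytes."""
        resp = requests.post(f"{self.base_url}/tts", data={"text": text}, timeout=60)
        resp.raise_for_status()
        # Assumes the server replies with JSON pointing at the generated audio file.
        audio_url = resp.json()["audio_files"][0]["url"]
        return requests.get(audio_url, timeout=60).content
```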