
unSpeech

Your Text-to-Speech Services, All-in-One.

Features

unSpeech lets you use various online TTS services through an OpenAI-compatible API.

Getting Started

Build

unSpeech has not been released yet, so you need to build it from source:

Requires Go 1.23+.

git clone https://github.com/moeru-ai/unspeech.git
cd unspeech
go build -o ./result/unspeech ./cmd/unspeech

Run

# http server started on [::]:5933
./result/unspeech

Client

You can use unSpeech with most OpenAI-compatible clients.

The model parameter should be the provider and model joined as provider/model, e.g. openai/tts-1-hd or elevenlabs/eleven_multilingual_v2.
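As a minimal sketch of what that naming convention implies (this is illustrative, not unSpeech's actual implementation), splitting on the first slash yields the provider prefix and the vendor model name:

```go
package main

import (
	"fmt"
	"strings"
)

// splitModel splits an unSpeech model string such as
// "elevenlabs/eleven_multilingual_v2" into its provider prefix
// and the vendor's own model name. Illustrative sketch only.
func splitModel(s string) (provider, model string, ok bool) {
	parts := strings.SplitN(s, "/", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", "", false
	}
	return parts[0], parts[1], true
}

func main() {
	p, m, _ := splitModel("openai/tts-1-hd")
	fmt.Println(p, m) // openai tts-1-hd
}
```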

The Authorization header is automatically converted to the vendor's corresponding auth method, such as ElevenLabs' xi-api-key header.

curl
curl http://localhost:5933/v1/audio/speech \
  -H "Authorization: Bearer $XI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "elevenlabs/eleven_multilingual_v2",
    "input": "Hello, World!",
    "voice": "9BWtsMINqrJLrRacOk9x",
  }' \
  --output speech.mp3

@xsai/generate-speech
import { generateSpeech } from '@xsai/generate-speech'

const speech = await generateSpeech({
  apiKey: 'YOUR_API_KEY',
  baseURL: 'http://localhost:5933/v1/',
  input: 'Hello, World!',
  model: 'elevenlabs/eleven_multilingual_v2',
  voice: '9BWtsMINqrJLrRacOk9x',
})
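Any plain HTTP client works as well. A minimal Go sketch that builds the same request as the curl example (building and sending are split here so the request can be inspected before use):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// speechRequest mirrors the OpenAI-compatible JSON body
// shown in the curl example above.
type speechRequest struct {
	Model string `json:"model"`
	Input string `json:"input"`
	Voice string `json:"voice"`
}

// newSpeechRequest builds (but does not send) a request against a
// local unSpeech server. Send it with http.DefaultClient.Do and
// write the response body to an .mp3 file.
func newSpeechRequest(apiKey string) (*http.Request, error) {
	body, err := json.Marshal(speechRequest{
		Model: "elevenlabs/eleven_multilingual_v2",
		Input: "Hello, World!",
		Voice: "9BWtsMINqrJLrRacOk9x",
	})
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest(http.MethodPost,
		"http://localhost:5933/v1/audio/speech", bytes.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+apiKey)
	req.Header.Set("Content-Type", "application/json")
	return req, nil
}

func main() {
	req, _ := newSpeechRequest("YOUR_API_KEY")
	fmt.Println(req.Method, req.URL.Path) // POST /v1/audio/speech
}
```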

Related Projects

Looking for something like unSpeech, but for local TTS? Check these out:

Or, to use the free Edge TTS:

License

AGPL-3.0
