Orpheus-TTS-Local
A lightweight client for running Orpheus TTS locally using the LM Studio API.
[GitHub Repo](https://github.com/isaiahbjork/orpheus-tts-local)
Features
- High-quality Text-to-Speech using the Orpheus TTS model
- Completely local - no cloud API keys needed
- Multiple voice options (tara, leah, jess, leo, dan, mia, zac, zoe)
- Save audio to WAV files
Quick Setup
- Install LM Studio
- Install the Orpheus TTS model (orpheus-3b-0.1-ft-q4_k_m.gguf) in LM Studio
- Load the Orpheus model in LM Studio
- Start the local server in LM Studio (default: http://127.0.0.1:1234); a quick connectivity check is sketched after these steps
- Install dependencies:
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
- Run the script:
python gguf_orpheus.py --text "Hello, this is a test" --voice tara
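Once the server from the steps above is running, you can verify it is reachable before generating audio. The snippet below is a minimal sketch, assuming the default address and LM Studio's OpenAI-compatible /v1/models endpoint; it is not part of this repository.

```python
# check_server.py - sanity-check that the LM Studio server is reachable.
# Assumes the default address and LM Studio's OpenAI-compatible /v1/models endpoint.
import requests

LM_STUDIO_URL = "http://127.0.0.1:1234"

def lm_studio_ready(base_url: str = LM_STUDIO_URL) -> bool:
    """Return True if the server responds and reports at least one loaded model."""
    try:
        resp = requests.get(f"{base_url}/v1/models", timeout=5)
        resp.raise_for_status()
        return bool(resp.json().get("data"))
    except requests.RequestException:
        return False

if __name__ == "__main__":
    if lm_studio_ready():
        print("LM Studio is up; you can run gguf_orpheus.py.")
    else:
        print(f"Could not reach LM Studio at {LM_STUDIO_URL}")
```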
Usage
python gguf_orpheus.py --text "Your text here" --voice tara --output "output.wav"
Options
- `--text`: The text to convert to speech
- `--voice`: The voice to use (default: tara)
- `--output`: Output WAV file path (default: auto-generated filename)
- `--list-voices`: Show available voices
- `--temperature`: Temperature for generation (default: 0.6)
- `--top_p`: Top-p sampling parameter (default: 0.9)
- `--repetition_penalty`: Repetition penalty (default: 1.1)
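If you want to call the script from Python (for batching or integrating it into another tool), a thin wrapper over the options above is enough. The helper below is an illustrative sketch, not part of the repo; it only assumes the flag names documented in this section.

```python
# tts_wrapper.py - illustrative wrapper around the gguf_orpheus.py CLI.
# Not part of the repo; it relies only on the flags documented above.
import subprocess
import sys

def synthesize(text, voice="tara", output=None,
               temperature=0.6, top_p=0.9, repetition_penalty=1.1):
    """Run gguf_orpheus.py with the documented options and wait for it to finish."""
    cmd = [
        sys.executable, "gguf_orpheus.py",
        "--text", text,
        "--voice", voice,
        "--temperature", str(temperature),
        "--top_p", str(top_p),
        "--repetition_penalty", str(repetition_penalty),
    ]
    if output is not None:
        cmd += ["--output", output]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    synthesize("Hello, this is a test", voice="tara", output="output.wav")
```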
Available Voices
- tara - Best overall voice for general use (default)
- leah
- jess
- leo
- dan
- mia
- zac
- zoe
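To hear the differences, you can render the same sentence once per voice. The loop below is a sketch that shells out to the documented CLI; the sample text and output filenames are arbitrary choices.

```python
# voice_samples.py - render one comparison sample per available voice.
# Sketch only: shells out to the documented CLI; filenames are arbitrary choices.
import subprocess
import sys

VOICES = ["tara", "leah", "jess", "leo", "dan", "mia", "zac", "zoe"]

for voice in VOICES:
    subprocess.run(
        [
            sys.executable, "gguf_orpheus.py",
            "--text", "This is a short voice comparison sample.",
            "--voice", voice,
            "--output", f"sample_{voice}.wav",
        ],
        check=True,
    )
```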
Emotion
You can add emotion to the speech by inserting the following tags into your text:
- <giggle>
- <laugh>
- <chuckle>
- <sigh>
- <cough>
- <sniffle>
- <groan>
- <yawn>
- <gasp>
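For example, the tags go inline in the text you pass to --text (the sentence here is just an illustration):
python gguf_orpheus.py --text "Well <sigh> that took a while, but it finally worked <laugh>" --voice tara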
License
Apache 2.0
Model tree for isaiahbjork/orpheus-3b-0.1-ft-Q4_K_M-GGUF
- Base model: meta-llama/Llama-3.2-3B-Instruct
- Finetuned: canopylabs/orpheus-3b-0.1-pretrained
- Finetuned: canopylabs/orpheus-3b-0.1-ft