Orpheus-3b-FT-Q8_0
This is a quantised version of canopylabs/orpheus-3b-0.1-ft.
Orpheus is a high-performance Text-to-Speech model fine-tuned for natural, emotional speech synthesis. This repository hosts the 8-bit quantised version of the 3B parameter model, optimised for efficiency while maintaining high-quality output.
Model Description
Orpheus-3b-FT-Q8_0 is a 3 billion parameter Text-to-Speech model that converts text inputs into natural-sounding speech with support for multiple voices and emotional expressions. The model has been quantised to 8-bit (Q8_0) format for efficient inference, making it accessible on consumer hardware.
Key features:
- 8 distinct voice options with different characteristics
- Support for emotion tags like laughter, sighs, etc.
- Optimised for CUDA acceleration on RTX GPUs
- Produces high-quality 24kHz mono audio
- Fine-tuned for conversational naturalness
How to Use
This model is designed to be loaded into an LLM inference server and used with the Orpheus-FastAPI frontend, which connects to that server and provides both a web UI and OpenAI-compatible API endpoints.
Compatible Inference Servers
This quantised model can be loaded into any of these LLM inference servers:
- GPUStack - GPU optimised LLM inference server (My pick) - supports LAN/WAN tensor split parallelisation
- LM Studio - Load the GGUF model and start the local server
- llama.cpp server - Run with the appropriate model parameters (see the example below)
- Any OpenAI API-compatible server
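For illustration, a llama.cpp server could be pointed at this GGUF roughly as follows. This is a minimal sketch: the file path, port, context size, and GPU layer count are placeholder values, not recommended settings.

```bash
# Sketch only: assumes llama.cpp was built with CUDA support and that
# the GGUF file has been downloaded to the current directory.
./llama-server \
  -m ./Orpheus-3b-FT-Q8_0.gguf \
  --host 0.0.0.0 \
  --port 5006 \
  -ngl 99 \
  -c 8192
```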
Quick Start
1. Download this quantised model from lex-au/Orpheus-3b-FT-Q8_0.gguf.
2. Load the model in your preferred inference server and start the server.
3. Clone the Orpheus-FastAPI repository:
   git clone https://github.com/Lex-au/Orpheus-FastAPI.git
   cd Orpheus-FastAPI
4. Configure the FastAPI server to connect to your inference server by setting the ORPHEUS_API_URL environment variable (see the example below).
5. Follow the complete installation and setup instructions in the repository README.
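As a minimal sketch, assuming the inference server from step 2 is the llama.cpp server shown earlier listening on port 5006, and that ORPHEUS_API_URL expects the server's completions endpoint, the connection could be configured like this (confirm the exact URL format in the repository README):

```bash
# Assumed host, port, and endpoint path; adjust to match your
# inference server and the Orpheus-FastAPI documentation.
export ORPHEUS_API_URL="http://127.0.0.1:5006/v1/completions"
```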
Audio Samples
Listen to the model in action with different voices and emotions:
- Default Voice Sample
- Leah (Happy)
- Tara (Sad)
- Zac (Contemplative)
Available Voices
The model supports 8 different voices:
- tara: Female, conversational, clear
- leah: Female, warm, gentle
- jess: Female, energetic, youthful
- leo: Male, authoritative, deep
- dan: Male, friendly, casual
- mia: Female, professional, articulate
- zac: Male, enthusiastic, dynamic
- zoe: Female, calm, soothing
Emotion Tags
You can add expressiveness to speech by inserting tags (an example request using them follows the list):
- <laugh>, <chuckle>: For laughter sounds
- <sigh>: For sighing sounds
- <cough>, <sniffle>: For subtle interruptions
- <groan>, <yawn>, <gasp>: For additional emotional expression
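As a sketch, a request that combines one of the voices above with an emotion tag might look like the following. It assumes the Orpheus-FastAPI server is running locally on port 5005 and exposes an OpenAI-style /v1/audio/speech route; the actual port, route, and accepted fields are defined by the Orpheus-FastAPI repository, so check its README before relying on this shape.

```bash
# Hypothetical port and route; the body follows the OpenAI speech API
# shape that the frontend is described as being compatible with.
curl -s http://127.0.0.1:5005/v1/audio/speech \
  -H "Content-Type: application/json" \
  -d '{
        "input": "That actually worked <laugh> on the first try.",
        "voice": "tara",
        "response_format": "wav"
      }' \
  --output sample.wav
```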
Technical Specifications
- Architecture: Specialised token-to-audio sequence model
- Parameters: ~3 billion
- Quantisation: 8-bit (GGUF Q8_0 format)
- Audio Sample Rate: 24kHz
- Input: Text with optional voice selection and emotion tags
- Output: High-quality WAV audio
- Language: English
- Hardware Requirements: CUDA-compatible GPU (recommended: RTX series)
- Integration Method: External LLM inference server + Orpheus-FastAPI frontend
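To sanity-check the 24kHz mono WAV output listed above, the generated file can be inspected with ffprobe (part of FFmpeg, assumed to be installed); sample.wav is the file produced by the earlier example request:

```bash
# Expect an audio stream line containing "24000 Hz" and "mono"
# (exact wording varies between ffprobe versions).
ffprobe -hide_banner sample.wav
```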
Limitations
- Currently supports English text only
- Best performance achieved on CUDA-compatible GPUs
- Generation speed depends on GPU capability
License
This model is available under the Apache License 2.0.
Citation & Attribution
The original Orpheus model was created by Canopy Labs. This repository contains a quantised version optimised for use with the Orpheus-FastAPI server.
If you use this quantised model in your research or applications, please cite:
@misc{orpheus-tts-2025,
author = {Canopy Labs},
title = {Orpheus-3b-0.1-ft: Text-to-Speech Model},
year = {2025},
publisher = {HuggingFace},
howpublished = {\url{https://huggingface.co/canopylabs/orpheus-3b-0.1-ft}}
}
@misc{orpheus-quantised-2025,
author = {Lex-au},
title = {Orpheus-3b-FT-Q8_0: Quantised TTS Model with FastAPI Server},
note = {GGUF quantisation of canopylabs/orpheus-3b-0.1-ft},
year = {2025},
publisher = {HuggingFace},
howpublished = {\url{https://huggingface.co/lex-au/Orpheus-3b-FT-Q8_0.gguf}}
}