---
configs:
- config_name: default
  data_files: '*.tsv'
  sep: "\t"
size_categories:
- n<1K
---
The models in the table above (dataset viewer) are sorted by number of capabilities, using the lazy method of character count; see the #legend section below.
The GitHub repo was cloned here for easier viewing and for embedding the table above, as once requested by @reach-vb: https://github.com/Vaibhavs10/open-tts-tracker/issues/30#issuecomment-1946367525
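The YAML config above declares the data as tab-separated files, and the note above describes sorting models by a "lazy" character count of their capability cells. Below is a minimal sketch of both, assuming a local clone of this repo; the filename and the `Name` column are placeholders rather than the actual schema.

```python
import pandas as pd

# Read one of the repo's tab-separated files; sep="\t" mirrors the card config.
# "models.tsv" is a placeholder filename - substitute an actual *.tsv from the repo.
df = pd.read_csv("models.tsv", sep="\t")

# "Lazy" capability sort: rank rows by the total character count of their capability
# cells. The column selection below is a placeholder; adjust to the real header.
capability_cols = [c for c in df.columns if c != "Name"]
df["capability_chars"] = (
    df[capability_cols]
    .fillna("")
    .astype(str)
    .apply(lambda row: sum(len(v) for v in row), axis=1)
)
print(df.sort_values("capability_chars", ascending=False)[["Name", "capability_chars"]])
```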
# 🗣️ Open TTS Tracker
A one-stop shop to track all open-access/open-source TTS models as they come out. Feel free to make a PR for any that aren't linked here.
This is intended as a resource to increase awareness of these models and to make it easier for researchers, developers, and enthusiasts to stay informed about the latest advancements in the field.
This repo only tracks TTS models with open-source/open-access codebases. More motivation for everyone to open-source! 🤗
Name | GitHub | Weights | License | Fine-tune | Languages | Paper | Demo | Issues |
---|---|---|---|---|---|---|---|---|
AI4Bharat | Repo | Hub | MIT | Yes | Indic | Paper | Demo | |
Amphion | Repo | Hub | MIT | No | Multilingual | Paper | 🤗 Space | |
Bark | Repo | Hub | MIT | No | Multilingual | Paper | 🤗 Space | |
EmotiVoice | Repo | GDrive | Apache 2.0 | Yes | ZH + EN | Not Available | Not Available | Separate GUI agreement |
F5-TTS | Repo | Hub | MIT | Yes | ZH + EN | Paper | 🤗 Space | |
Glow-TTS | Repo | GDrive | MIT | Yes | English | Paper | GH Pages | |
GPT-SoVITS | Repo | Hub | MIT | Yes | Multilingual | Not Available | Not Available | |
HierSpeech++ | Repo | GDrive | MIT | No | KR + EN | Paper | 🤗 Space | |
IMS-Toucan | Repo | GH release | Apache 2.0 | Yes | ALL* | Paper | 🤗 Space 🤗 Space* | |
MahaTTS | Repo | Hub | Apache 2.0 | No | English + Indic | Not Available | Recordings, Colab | |
Matcha-TTS | Repo | GDrive | MIT | Yes | English | Paper | π€ Space | GPL-licensed phonemizer |
MeloTTS | Repo | Hub | MIT | Yes | Multilingual | Not Available | 🤗 Space | |
MetaVoice-1B | Repo | Hub | Apache 2.0 | Yes | Multilingual | Not Available | 🤗 Space | |
Neural-HMM TTS | Repo | GitHub | MIT | Yes | English | Paper | GH Pages | |
OpenVoice | Repo | Hub | MIT | No | Multilingual | Paper | 🤗 Space | |
OverFlow TTS | Repo | GitHub | MIT | Yes | English | Paper | GH Pages | |
Parler TTS | Repo | Hub | Apache 2.0 | Yes | English | Not Available | 🤗 Space | |
pflowTTS | Unofficial Repo | GDrive | MIT | Yes | English | Paper | Not Available | GPL-licensed phonemizer |
Pheme | Repo | Hub | CC-BY | Yes | English | Paper | 🤗 Space | |
Piper | Repo | Hub | MIT | Yes | Multilingual | Not Available | Not Available | GPL-licensed phonemizer |
RAD-MMM | Repo | GDrive | MIT | Yes | Multilingual | Paper | Jupyter Notebook, Webpage | |
RAD-TTS | Repo | GDrive | MIT | Yes | English | Paper | GH Pages | |
Silero | Repo | GH links | CC BY-NC-SA | No | Multilingual | Not Available | Not Available | Non Commercial |
StyleTTS 2 | Repo | Hub | MIT | Yes | English | Paper | 🤗 Space | GPL-licensed phonemizer |
Tacotron 2 | Unofficial Repo | GDrive | BSD-3 | Yes | English | Paper | Webpage | |
TorToiSe TTS | Repo | Hub | Apache 2.0 | Yes | English | Technical report | 🤗 Space | |
TTTS | Repo | Hub | MPL 2.0 | No | ZH | Not Available | Colab, 🤗 Space | |
VALL-E | Unofficial Repo | Not Available | MIT | Yes | NA | Paper | Not Available | |
VITS/ MMS-TTS | Repo | Hub / MMS | Apache 2.0 | Yes | English | Paper | 🤗 Space | GPL-licensed phonemizer |
WhisperSpeech | Repo | Hub | MIT | No | English, Polish | Not Available | 🤗 Space, Recordings, Colab | |
XTTS | Repo | Hub | CPML | Yes | Multilingual | Paper | 🤗 Space | Non Commercial |
xVASynth | Repo | Hub | GPL-3.0 | Yes | Multilingual | Papers | 🤗 Space | Base model trained on copyrighted materials. |
- Multilingual - The number of supported languages is ever-changing; check the Space and Hub to see which specific languages are supported
- ALL - Supports all natural languages; may not support artificial/constructed languages
## Legend

This legend is for the TTS capabilities table above. Open the viewer in another window, or even on another monitor, to keep both the table and this legend in view.
- Processor ⚡ - Inference done by
  - CPU (CPUs = multithreaded) - All models can be run on CPU, so to qualify for the CPU tag the real-time factor should be below 2.0, though some leeway can be given if the model supports audio streaming (see the sketch right after this list)
  - CUDA by NVIDIA™
  - ROCm by AMD™
- Phonetic alphabet 🔤 - Phonetic transcription that allows controlling the pronunciation of words before inference
- Insta-clone 🥐 - Zero-shot model for quick voice cloning
- Emotion control 🎭 - Able to force an emotional state of the speaker
  - 🎭 <# emotions> ( 😡 anger; 😃 happiness; 😭 sadness; 😯 surprise; 🤫 whispering; 😊 friendliness )
  - strict insta-clone switch 🎭🥐 - voice cloned from a sample with a specific emotion; may sound different from the normal speaking voice; no ability to go in-between states
  - strict control through prompt 🎭📖 - prompt input parameter
- Prompting 📖 - Also a side effect of narrator-based datasets and a way to affect the emotional state
  - 📖 - Prompt as a separate input parameter
  - 🗣📖 - The prompt itself is also spoken by the TTS; see the ElevenLabs docs
- Streaming support 🌊 - Can play back audio while it is still being generated
- Speech control 🎚 - Ability to change the pitch, duration, etc. for the whole utterance and/or per phoneme of the generated speech
- Voice conversion / Speech-To-Speech 🦜 - Streaming support implies real-time S2S; S2T=>T2S does not count
- Longform synthesis 📃 - Able to synthesize whole paragraphs, as some TTS models tend to break down beyond a certain audio length
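For the CPU tag above, the real-time-factor criterion is simple arithmetic: RTF = wall-clock synthesis time divided by the duration of the generated audio, and values below 2.0 qualify. A minimal illustrative sketch follows; the `synthesize` callable and its usage are hypothetical stand-ins, not any particular model's API.

```python
import time

def real_time_factor(synthesize, text: str, sample_rate: int) -> float:
    """RTF = wall-clock synthesis time / duration of the generated audio.

    `synthesize` is a hypothetical callable returning an array of audio samples;
    it stands in for whatever inference call a given model exposes.
    """
    start = time.perf_counter()
    audio = synthesize(text)                  # e.g. a list/array of samples
    elapsed = time.perf_counter() - start
    audio_seconds = len(audio) / sample_rate  # duration of the generated audio
    return elapsed / audio_seconds

# Example (hypothetical model object):
# rtf = real_time_factor(my_tts.synthesize, "Hello world", 22050)
# print("qualifies for the CPU tag:", rtf < 2.0)  # below 2.0, per the legend above
```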
An example, if the proprietary ElevenLabs were added to the capabilities table:
Name | Processor ⚡ | Phonetic alphabet 🔤 | Insta-clone 🥐 | Emotional control 🎭 | Prompting 📖 | Speech control 🎚 | Streaming support 🌊 | Voice conversion 🦜 | Longform synthesis 📃 |
---|---|---|---|---|---|---|---|---|---|
ElevenLabs | CUDA | IPA, ARPAbet | 🥐 | 🎭📖 | 🗣📖 | 🎚 stability, voice similarity | 🌊 | 🦜 | 📃 Projects |
More info on the capabilities table can be found in the GitHub issue.
Please create pull requests to update the info on the models.