---
license: apache-2.0
language:
- en
base_model:
- yl4579/StyleTTS2-LJSpeech
pipeline_tag: text-to-speech
---
🚨 This repository is undergoing maintenance.
✨ Model v1.0 release is underway! Things are not yet finalized, but you can start using v1.0 now.
✨ You can now `pip install kokoro`, a dedicated inference library: https://github.com/hexgrad/kokoro
✨ You can also `pip install misaki`, a G2P library designed for Kokoro: https://github.com/hexgrad/misaki
♻️ You can access old files for v0.19 at https://huggingface.co/hexgrad/kLegacy/tree/main/v0.19
❤️ Kokoro Discord Server: https://discord.gg/QuGxSWBfQy
Kokoro is getting an upgrade!
| Model | Published | Training Data | Compute (A100 80GB) | Released Voices | Released Langs |
| --- | --- | --- | --- | --- | --- |
| v0.19 | 2024 Dec 25 | <100 hrs | 500 hrs @ $400 | 10 | 1 |
| v1.0 | 2025 Jan 27 | Few hundred hrs | 1000 hrs @ $1000 | 31+ | 3+ |
Training is continuous. The v0.19 model was produced "on the way" to the v1.0 model, so the Compute footprints overlap.
### Voices and Languages
Voices are listed in VOICES.md. Not all voices are created equal:
- Subjectively, voices will sound better or worse to different people.
- Objectively, having less training data for a given voice (minutes instead of hours) lowers inference quality.
- Objectively, poor audio quality in training data (compression, sample rate, artifacts) lowers inference quality.
- Objectively, text-audio misalignment (too much text, i.e. hallucinations, or not enough text, i.e. failed transcriptions) lowers inference quality.
Support for non-English languages may be absent or thin due to weak G2P and/or lack of training data. Some languages are represented by only a small handful of voices, or even just one (French).
Most voices perform best on a "goldilocks range" of 100-200 tokens out of ~500 possible. Voices may perform worse at the extremes:
- Weakness on short utterances, especially fewer than 10-20 tokens. The root cause could be a lack of short-utterance training data and/or the model architecture. One possible inference-time mitigation is to bundle shorter utterances together.
- Rushing on long utterances, especially over 400 tokens. You can chunk the text down to shorter utterances or adjust the `speed` parameter to mitigate this; a rough chunking sketch follows this list.
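The sketch below illustrates the chunking/bundling idea in the simplest possible way, approximating the token budget by character count (a simplification; actual tokens are phonemes produced by G2P, and `chunk_text` / `max_chars` are illustrative names, not part of the kokoro API):
```python
import re

def chunk_text(text, max_chars=300):
    """Greedily bundle sentences so each chunk stays under max_chars (rough token proxy)."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    chunks, current = [], ''
    for sentence in sentences:
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)   # current chunk is full, start a new one
            current = sentence
        else:
            current = f'{current} {sentence}'.strip()  # bundle short sentences together
    if current:
        chunks.append(current)
    return chunks

print(chunk_text('A short demo sentence. ' * 40))  # short sentences get bundled together
```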
### Usage
The following can be run in a single cell on Google Colab.
```python
# 1️⃣ Install kokoro
!pip install -q "kokoro>=0.2.3" soundfile
# 2️⃣ Install espeak, used for out-of-dictionary fallback
!apt-get -qq -y install espeak-ng > /dev/null 2>&1
# You can skip espeak installation, but OOD words will be skipped unless you provide a fallback
# 3️⃣ Initialize a pipeline
from kokoro import KPipeline
from IPython.display import display, Audio
import soundfile as sf
# 🇺🇸 'a' => American English
# 🇬🇧 'b' => British English
# 🇫🇷 'f' => French fr-fr
# 🇮🇳 'h' => Hindi hi
pipeline = KPipeline(lang_code='a') # make sure lang_code matches voice
# The following text is for demonstration purposes only, unseen during training
text = '''
The sky above the port was the color of television, tuned to a dead channel.
"It's not like I'm using," Case heard someone say, as he shouldered his way through the crowd around the door of the Chat. "It's like my body's developed this massive drug deficiency."
It was a Sprawl voice and a Sprawl joke. The Chatsubo was a bar for professional expatriates; you could drink there for a week and never hear two words in Japanese.
These were to have an enormous impact, not only because they were associated with Constantine, but also because, as in so many other areas, the decisions taken by Constantine (or in his name) were to have great significance for centuries to come. One of the main issues was the shape that Christian churches were to take, since there was not, apparently, a tradition of monumental church buildings when Constantine decided to help the Christian church build a series of truly spectacular structures. The main form that these churches took was that of the basilica, a multipurpose rectangular structure, based ultimately on the earlier Greek stoa, which could be found in most of the great cities of the empire. Christianity, unlike classical polytheism, needed a large interior space for the celebration of its religious services, and the basilica aptly filled that need. We naturally do not know the degree to which the emperor was involved in the design of new churches, but it is tempting to connect this with the secular basilica that Constantine completed in the Roman forum (the so-called Basilica of Maxentius) and the one he probably built in Trier, in connection with his residence in the city at a time when he was still caesar.
'''
# text = 'Le dromadaire resplendissant déambulait tranquillement dans les méandres en mastiquant de petites feuilles vernissées.'
# text = 'ट्रांसपोर्टरों की हड़ताल लगातार पांचवें दिन जारी, दिसंबर से इलेक्ट्रॉनिक टोल कलेक्शनल सिस्टम'
# 4️⃣ Generate, display, and save audio files in a loop.
generator = pipeline(
    text, voice='af_bella', # <= change voice here
    speed=1, split_pattern=r'\n+'
)
for i, (gs, ps, audio) in enumerate(generator):
    print(i)   # i => index
    print(gs)  # gs => graphemes/text
    print(ps)  # ps => phonemes
    display(Audio(data=audio, rate=24000, autoplay=i==0))
    sf.write(f'{i}.wav', audio, 24000)  # save each audio file
```
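If you want a single output file instead of one WAV per segment, the generated segments can be concatenated. This is a sketch under the assumption that each yielded `audio` is array-like at 24 kHz; it reuses the `pipeline` and `text` defined in the cell above:
```python
import numpy as np
import soundfile as sf

# Collect every generated segment as a NumPy array
segments = [np.asarray(audio) for _, _, audio in pipeline(text, voice='af_bella', split_pattern=r'\n+')]

# Write all segments back-to-back as one 24 kHz WAV file
sf.write('combined.wav', np.concatenate(segments), 24000)
```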
### Model Facts
Architecture:
- StyleTTS 2: https://arxiv.org/abs/2306.07691
- ISTFTNet: https://arxiv.org/abs/2203.02395
- Decoder only: no diffusion, no encoder release
Architected by: Li et al @ https://github.com/yl4579/StyleTTS2
Trained by: @rzvzn on Discord
Supported Languages: American English, British English
Model SHA256 Hash: 496dba118d1a58f5f3db2efc88dbdc216e0483fc89fe6e47ee1f2c53f18ad1e4
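If you want to confirm a downloaded checkpoint matches this hash, a minimal local check is sketched below (the `kokoro-v1_0.pth` filename is an assumption about where you saved the weights):
```python
import hashlib

EXPECTED = '496dba118d1a58f5f3db2efc88dbdc216e0483fc89fe6e47ee1f2c53f18ad1e4'

def sha256_of(path, block_size=1 << 20):
    """Stream the file through SHA-256 so large checkpoints don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(block_size), b''):
            digest.update(block)
    return digest.hexdigest()

# 'kokoro-v1_0.pth' is assumed to be the locally downloaded model file
print(sha256_of('kokoro-v1_0.pth') == EXPECTED)
```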
### Training Details
Compute: About $1000 for 1000 hours of A100 80GB vRAM
Data: Kokoro was trained exclusively on permissive/non-copyrighted audio data and IPA phoneme labels. Examples of permissive/non-copyrighted audio include:
- Public domain audio
- Audio licensed under Apache, MIT, etc
- Synthetic audio[1] generated by closed[2] TTS models from large providers
[1] https://copyright.gov/ai/ai_policy_guidance.pdf
[2] No synthetic audio from open TTS models or "custom voice clones"
Total Dataset Size: A few hundred hours of audio
### Creative Commons Attribution
The following CC BY audio was part of the dataset used to train Kokoro v1.0.
| Audio Data | Duration Used | License | Added to Training Set After |
| --- | --- | --- | --- |
| Koniwa tnc | <1h | CC BY 3.0 | v0.19 / 22 Nov 2024 |
| SIWIS | <11h | CC BY 4.0 | v0.19 / 22 Nov 2024 |
