---
license: other
license_name: coqui-public-model-license
license_link: https://coqui.ai/cpml
library_name: coqui
pipeline_tag: text-to-speech
widget:
  - src: samples/en-sample.wav
    output:
      text: Once when I was six years old I saw a magnificent picture
  - src: samples/de-sample.wav
    output:
      text: Als ich sechs war, sah ich einmal ein wunderbares Bild
  - src: samples/es-sample.wav
    output:
      text: Cuando tenía seis años, vi una vez una imagen magnífica
  - src: samples/fr-sample.wav
    output:
      text: Lorsque j'avais six ans j'ai vu, une fois, une magnifique image
  - src: samples/ja-sample.wav
    output:
      text: かつて 六歳のとき、素晴らしい絵を見ました
  - src: samples/tr-sample.wav
    output:
      text: Bir zamanlar, altı yaşındayken, muhteşem bir resim gördüm
  - src: samples/zh-cn-sample.wav
    output:
      text: 当我还只有六岁的时候, 看到了一副精彩的插画
  - src: samples/pt-sample.wav
    output:
      text: Quando eu tinha seis anos eu vi, uma vez, uma imagem magnífica
---

ⓍTTS

ⓍTTS is a voice generation model that lets you clone voices into different languages from just a quick 6-second audio clip. There is no need for excessive training data spanning countless hours.

This is the same or a similar model to the one that powers Coqui Studio and the Coqui API.

Features

  • Supports 16 languages.
  • Voice cloning with just a 6-second audio clip.
  • Emotion and style transfer by cloning.
  • Cross-language voice cloning.
  • Multilingual speech generation.
  • 24 kHz sampling rate.

Updates over XTTS-v1

  • Two new languages: Hungarian and Korean.
  • Architectural improvements for speaker conditioning.
  • Enables the use of multiple speaker references and interpolation between speakers (see the sketch at the end of this card).
  • Stability improvements.
  • Better prosody and audio quality across the board.

Languages

XTTS-v2 supports 16 languages: English (en), Spanish (es), French (fr), German (de), Italian (it), Portuguese (pt), Polish (pl), Turkish (tr), Russian (ru), Dutch (nl), Czech (cs), Arabic (ar), Chinese (zh-cn), Japanese (ja), Hungarian (hu) and Korean (ko).

Stay tuned as we continue to add support for more languages. If you have any language requests, feel free to reach out!
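As a minimal sketch of how these codes are used with the 🐸TTS API shown further below (output file names and the reference clip path are illustrative), you pass the code as the language argument:

from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2", gpu=True)

# Clone the same reference speaker into several target languages.
for lang, text in [("en", "Hello there!"), ("es", "¡Hola!"), ("de", "Hallo!")]:
    tts.tts_to_file(text=text,
                    file_path=f"output-{lang}.wav",
                    speaker_wav="/path/to/target/speaker.wav",
                    language=lang)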

Code

The codebase supports both inference and fine-tuning.

Demo Spaces

  • XTTS Space: try the model in the supported languages, with your own reference clip or microphone input.
  • XTTS Voice Chat with Mistral or Zephyr: experience streaming voice chat with Mistral 7B Instruct or Zephyr 7B Beta.

License

This model is licensed under the Coqui Public Model License (CPML). There's a lot that goes into a license for generative models, and you can read more about the origin story of the CPML here.

Contact

Come and join our 🐸Community. We're active on Discord and Twitter. You can also email us at info@coqui.ai.

Using the 🐸TTS API:

from TTS.api import TTS

# Model weights are downloaded automatically on first use.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2", gpu=True)

# generate speech by cloning a voice using default settings
tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
                file_path="output.wav",
                speaker_wav="/path/to/target/speaker.wav",
                language="en")

# generate speech by cloning a voice using custom settings
tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
                file_path="output.wav",
                speaker_wav="/path/to/target/speaker.wav",
                language="en",
                decoder_iterations=30)
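
Note: in newer 🐸TTS releases the gpu argument is deprecated in favor of moving the model to a device explicitly. A minimal sketch, assuming a recent TTS version:

import torch
from TTS.api import TTS

# Pick a device and move the model onto it (replaces gpu=True).
device = "cuda" if torch.cuda.is_available() else "cpu"
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)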

Using the 🐸TTS command line:

 # The Turkish text means "I don't want to go to school today."
 tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \
     --text "Bugün okula gitmek istemiyorum." \
     --speaker_wav /path/to/target/speaker.wav \
     --language_idx tr \
     --use_cuda true
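
Because the model supports cross-language cloning, the same reference clip can be rendered in any supported language by changing --language_idx; a sketch reusing the flags above (paths are illustrative):

 tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \
     --text "I do not want to go to school today." \
     --speaker_wav /path/to/target/speaker.wav \
     --language_idx en \
     --use_cuda true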

Using the model directly:

from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

# Load the configuration and weights from a local checkpoint directory.
config = XttsConfig()
config.load_json("/path/to/xtts/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", eval=True)
model.cuda()

outputs = model.synthesize(
    "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
    config,
    speaker_wav="/data/TTS-public/_refclips/3.wav",
    gpt_cond_len=3,  # seconds of the reference clip used for GPT conditioning
    language="en",
)
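
To use multiple speaker references (one of the XTTS-v2 updates listed above) and write the result to disk, here is a minimal sketch; the reference paths are illustrative and the calls assume a recent 🐸TTS release:

import torch
import torchaudio

# Condition on several reference clips of the same speaker.
gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(
    audio_path=["/path/to/ref1.wav", "/path/to/ref2.wav"]
)

out = model.inference(
    "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
    "en",
    gpt_cond_latent,
    speaker_embedding,
    temperature=0.7,  # lower values give more stable, less varied speech
)

# XTTS produces 24 kHz audio as a 1-D array under the "wav" key.
torchaudio.save("output_multi_ref.wav", torch.tensor(out["wav"]).unsqueeze(0), 24000)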