---
license: mit
tags:
- vits
pipeline_tag: text-to-speech
---

# VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech

VITS is an end-to-end speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a
conditional variational autoencoder (VAE) composed of a posterior encoder, decoder, and conditional prior. This repository
contains the weights for the official VITS checkpoint trained on the [LJ Speech](https://huggingface.co/datasets/lj_speech) dataset.

## Model Details

VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) composed of a posterior encoder, decoder, and conditional prior.

A set of spectrogram-based acoustic features is predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.

The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.

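For example, with the 🤗 Transformers implementation described in the Usage section below, re-seeding before each forward pass should reproduce the same waveform. This is a minimal sketch; the `set_seed` helper and the exact determinism behaviour are assumptions worth verifying on your own setup:

```python
import torch
from transformers import VitsModel, AutoTokenizer, set_seed

model = VitsModel.from_pretrained("kakao-enterprise/vits-ljs")
tokenizer = AutoTokenizer.from_pretrained("kakao-enterprise/vits-ljs")
inputs = tokenizer("Fixing the seed makes this sentence sound identical every run.", return_tensors="pt")

with torch.no_grad():
    set_seed(555)  # seed all RNGs before sampling durations and prior noise
    waveform_a = model(**inputs).waveform

    set_seed(555)  # re-seed, so the stochastic duration predictor samples identically
    waveform_b = model(**inputs).waveform

# with the same seed, the two waveforms should match exactly
print(torch.equal(waveform_a, waveform_b))
```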
There are two variants of the VITS model: one is trained on the [LJ Speech](https://huggingface.co/datasets/lj_speech) dataset,
and the other is trained on the [VCTK](https://huggingface.co/datasets/vctk) dataset. The LJ Speech dataset consists of 13,100 short
audio clips of a single speaker, with a total length of approximately 24 hours. The VCTK dataset consists of approximately 44,000
short audio clips uttered by 109 native English speakers with various accents, with a total length of approximately 44 hours.

| Checkpoint | Train Hours | Speakers |
|------------|-------------|----------|
| [vits-ljs](https://huggingface.co/kakao-enterprise/vits-ljs) | 24 | 1 |
| [vits-vctk](https://huggingface.co/kakao-enterprise/vits-vctk) | 44 | 109 |

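The multi-speaker `vits-vctk` checkpoint is used in the same way as the LJ Speech one (see the Usage section below), with the target voice chosen per forward pass. The sketch below assumes the `speaker_id` argument (an integer from 0 to 108 for this checkpoint) exposed by the 🤗 Transformers VITS implementation; check it against your installed version:

```python
import torch
from transformers import VitsModel, AutoTokenizer

model = VitsModel.from_pretrained("kakao-enterprise/vits-vctk")
tokenizer = AutoTokenizer.from_pretrained("kakao-enterprise/vits-vctk")

inputs = tokenizer("The same sentence, spoken by two different voices.", return_tensors="pt")

with torch.no_grad():
    # pick one of the 109 VCTK voices by its integer id
    waveform_speaker_10 = model(**inputs, speaker_id=10).waveform
    waveform_speaker_42 = model(**inputs, speaker_id=42).waveform
```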
## Usage

VITS is available in the 🤗 Transformers library from version 4.33 onwards. To use this checkpoint,
first install the latest version of the library:

```
pip install --upgrade transformers accelerate
```

Then, run inference with the following code snippet:

```python
from transformers import VitsModel, AutoTokenizer
import torch

model = VitsModel.from_pretrained("kakao-enterprise/vits-ljs")
tokenizer = AutoTokenizer.from_pretrained("kakao-enterprise/vits-ljs")

text = "Hey, it's Hugging Face on the phone"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    output = model(**inputs).waveform
```

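Because the duration predictor is stochastic, re-running the snippet above yields a slightly different waveform each time. Continuing from that snippet, the sketch below fixes the seed and adjusts the sampling knobs; the attribute names (`speaking_rate`, `noise_scale`, `noise_scale_duration`) are taken from the 🤗 Transformers VITS implementation and are worth double-checking against your installed version:

```python
from transformers import set_seed

set_seed(555)  # fix all RNGs so repeated runs produce the same waveform

# model and inputs come from the snippet above
model.speaking_rate = 1.2         # > 1.0 speaks faster, < 1.0 slower
model.noise_scale = 0.667         # variance of the prior noise (overall expressiveness)
model.noise_scale_duration = 0.8  # variance of the stochastic duration predictor (rhythm)

with torch.no_grad():
    output = model(**inputs).waveform
```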
The resulting waveform can be saved as a `.wav` file:

```python
import scipy.io.wavfile

# take the first (and only) waveform in the batch and convert it to a NumPy array
scipy.io.wavfile.write("vits_output.wav", rate=model.config.sampling_rate, data=output[0].cpu().numpy())
```

Or displayed in a Jupyter Notebook / Google Colab:

```python
from IPython.display import Audio

Audio(output, rate=model.config.sampling_rate)
```

## BibTeX citation

This model was developed by Jaehyeon Kim et al. from Kakao Enterprise. If you use the model, consider citing the VITS paper:

```
@inproceedings{kim2021conditional,
  title={Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech},
  author={Kim, Jaehyeon and Kong, Jungil and Son, Juhee},
  booktitle={International Conference on Machine Learning},
  pages={5530--5540},
  year={2021},
  organization={PMLR}
}
```

## License

The model is licensed under the [**MIT** license](https://github.com/jaywalnut310/vits/blob/main/LICENSE).