sanchit-gandhi (HF staff) committed on
Commit 0a6cc30
1 Parent(s): 343167d

Create README.md

Files changed (1):
  1. README.md +76 -0
README.md ADDED

---
license: cc-by-nc-4.0
tags:
- mms
- vits
pipeline_tag: text-to-speech
---

# Massively Multilingual Speech (MMS): English Text-to-Speech

This repository contains the **English (eng)** language text-to-speech (TTS) model checkpoint.

This model is part of Facebook's [Massively Multilingual Speech](https://arxiv.org/abs/2305.13516) project, aiming to
provide speech technology across a diverse range of languages. You can find more details about the supported languages
and their ISO 639-3 codes in the [MMS Language Coverage Overview](https://dl.fbaipublicfiles.com/mms/misc/language_coverage_mms.html),
and see all MMS-TTS checkpoints on the Hugging Face Hub: [facebook/mms-tts](https://huggingface.co/models?sort=trending&search=facebook%2Fmms-tts).

## Model Details

VITS (**V**ariational **I**nference with adversarial learning for end-to-end **T**ext-to-**S**peech) is an end-to-end
speech synthesis model that predicts a speech waveform conditional on an input text sequence. It is a conditional variational
autoencoder (VAE) composed of a posterior encoder, a decoder, and a conditional prior.

A set of spectrogram-based acoustic features is predicted by the flow-based module, which is formed of a Transformer-based
text encoder and multiple coupling layers. The spectrogram is decoded using a stack of transposed convolutional layers,
much in the same style as the HiFi-GAN vocoder. Motivated by the one-to-many nature of the TTS problem, where the same text
input can be spoken in multiple ways, the model also includes a stochastic duration predictor, which allows the model to
synthesise speech with different rhythms from the same input text.
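
As a rough orientation, the sketch below peeks at how these components surface in the `transformers` implementation of
VITS. The attribute names (`text_encoder`, `flow`, `decoder`, `duration_predictor`, `posterior_encoder`) are taken from
the library's modeling code and may change between versions, so treat this as illustrative rather than a stable API:

```python
from transformers import VitsModel

model = VitsModel.from_pretrained("facebook/mms-tts-eng")

# components described above (attribute names are an assumption based on
# the transformers VITS modeling code; they may differ between versions)
print(model.text_encoder.__class__.__name__)        # Transformer-based text encoder
print(model.flow.__class__.__name__)                # flow module built from coupling layers
print(model.decoder.__class__.__name__)             # HiFi-GAN-style transposed-conv decoder
print(model.duration_predictor.__class__.__name__)  # stochastic duration predictor
print(model.posterior_encoder.__class__.__name__)   # posterior encoder (used during training)
```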

The model is trained end-to-end with a combination of losses derived from the variational lower bound and adversarial training.
To improve the expressiveness of the model, normalizing flows are applied to the conditional prior distribution. During
inference, the text encodings are up-sampled based on the duration prediction module, and then mapped into the
waveform using a cascade of the flow module and HiFi-GAN decoder. Due to the stochastic nature of the duration predictor,
the model is non-deterministic, and thus requires a fixed seed to generate the same speech waveform.
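
Because the duration predictor samples its durations, fixing the seed before each generation reproduces the exact same
waveform. A minimal sketch using `set_seed` from Transformers (the seed value 555 is arbitrary):

```python
import torch
from transformers import VitsModel, AutoTokenizer, set_seed

model = VitsModel.from_pretrained("facebook/mms-tts-eng")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-eng")
inputs = tokenizer("A deterministic example.", return_tensors="pt")

set_seed(555)  # fix the RNGs that drive the stochastic duration predictor
with torch.no_grad():
    first = model(**inputs).waveform

set_seed(555)  # same seed, same input: the same waveform
with torch.no_grad():
    second = model(**inputs).waveform

assert torch.allclose(first, second)
```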

For the MMS project, a separate VITS checkpoint is trained on each language.
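
Switching language therefore only means loading a different checkpoint. The repository id pattern
`facebook/mms-tts-<iso-639-3-code>` is an observation from the Hub listing linked above, illustrated here with French
(fra):

```python
from transformers import VitsModel, AutoTokenizer

# illustrative example: the French (fra) MMS-TTS checkpoint
model = VitsModel.from_pretrained("facebook/mms-tts-fra")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-fra")
```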

## Usage

Using this checkpoint with the Hugging Face Transformers library:

```python
import torch
from transformers import VitsModel, AutoTokenizer

# load the English MMS-TTS checkpoint and its matching tokenizer
model = VitsModel.from_pretrained("facebook/mms-tts-eng")
tokenizer = AutoTokenizer.from_pretrained("facebook/mms-tts-eng")

text = "Hey, it's Hugging Face on the phone!"
inputs = tokenizer(text, return_tensors="pt")

# run inference; `waveform` has shape (batch_size, num_samples)
with torch.no_grad():
    output = model(**inputs).waveform

# listen to the result in a Jupyter notebook (MMS-TTS generates 16 kHz audio)
from IPython.display import Audio
Audio(output[0].numpy(), rate=16000)
```
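
To save the generated audio to disk instead of playing it inline, one option is SciPy's WAV writer. This continues from
the snippet above (the output filename is arbitrary):

```python
from scipy.io import wavfile

# write the first (and only) waveform in the batch as a 16 kHz WAV file
wavfile.write("synthesized_speech.wav", rate=16000, data=output[0].numpy())
```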

## BibTeX Citation

This model was developed by Vineel Pratap et al. from Meta AI. If you use the model, consider citing the MMS paper:

```
@article{pratap2023mms,
  title={Scaling Speech Technology to 1,000+ Languages},
  author={Vineel Pratap and Andros Tjandra and Bowen Shi and Paden Tomasello and Arun Babu and Sayani Kundu and Ali Elkahky and Zhaoheng Ni and Apoorv Vyas and Maryam Fazel-Zarandi and Alexei Baevski and Yossi Adi and Xiaohui Zhang and Wei-Ning Hsu and Alexis Conneau and Michael Auli},
  journal={arXiv},
  year={2023}
}
```

## License

The model is licensed as **CC-BY-NC 4.0**.