## 📄 About
Natural and efficient TTS in Catalan: Matcha-TTS adapted to the Catalan language.
Here you will find all the information regarding our model, which was trained using deep learning. If you want specific information on how to train the model, you can find it [here](https://huggingface.co/BSC-LT/matcha-tts-cat-multispeaker). The code we used is also available on GitHub [here](https://github.com/langtech-bsc/Matcha-TTS/tree/dev-cat).
## Table of Contents
<details>
<summary>Click to expand</summary>
- [General Model Description](#general-model-description)
- [Adaptation to Catalan](#adaptation-to-catalan)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [Samples](#samples)
- [Citation](#citation)
- [Additional Information](#additional-information)
</details>
## General Model Description
**Matcha-TTS** is an encoder-decoder architecture designed for fast acoustic modelling in TTS.
On the one hand, the encoder consists of a text encoder and a phoneme duration predictor. Together, they predict averaged acoustic features.
On the other hand, the decoder is essentially a U-Net backbone inspired by [Grad-TTS](https://arxiv.org/pdf/2105.06337.pdf), which is based on the Transformer architecture.
In the latter, replacing 2D CNNs with 1D CNNs yields a large reduction in memory consumption and fast synthesis.
**Matcha-TTS** is a non-autoregressive model trained with optimal-transport conditional flow matching (OT-CFM).
This yields an ODE-based decoder capable of generating high output quality in fewer synthesis steps than models trained using score matching.
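For intuition, the OT-CFM objective can be written down in a few lines. The snippet below is a minimal PyTorch sketch under our own assumptions, not the project's actual training code: `vector_field` is a hypothetical stand-in for the Matcha-TTS decoder, and the tensor shapes are illustrative only.
```python
# Conceptual sketch of an OT-CFM training objective (illustrative, not the
# project's training code). `vector_field` is a hypothetical stand-in for the
# decoder, which predicts a vector field v(x_t, t | cond).
import torch

def ot_cfm_loss(vector_field, x1, cond, sigma_min: float = 1e-4):
    """x1: target mel-spectrogram batch, assumed shape (batch, n_mels, frames);
    cond: encoder output (averaged acoustic features)."""
    x0 = torch.randn_like(x1)                            # Gaussian noise sample
    t = torch.rand(x1.size(0), 1, 1, device=x1.device)   # uniform time in [0, 1]
    # Optimal-transport conditional flow: a straight path from noise to data
    xt = (1 - (1 - sigma_min) * t) * x0 + t * x1
    ut = x1 - (1 - sigma_min) * x0                       # target conditional vector field
    vt = vector_field(xt, t.squeeze(), cond)             # predicted vector field
    return torch.mean((vt - ut) ** 2)                    # MSE flow-matching loss
```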
## Adaptation to Catalan
The original Matcha-TTS model excels in English, but to bring its capabilities to Catalan, a multi-step process was undertaken. Firstly, we fine-tuned the model from English to Central Catalan, which laid the groundwork for capturing the language's nuances. This first fine-tuning was done using two datasets:
* [Our version of the openslr-slr69 dataset.](https://huggingface.co/datasets/projecte-aina/openslr-slr69-ca-trimmed-denoised)
* A studio-recorded dataset of Central Catalan, which will soon be published.
This soon-to-be-published dataset also includes recordings of three other dialects:
* Valencian
* Occidental
* Balear
Each dialect was recorded by one male and one female speaker.
Then, through fine-tuning for these specific Catalan dialects, the model adapted to regional variations in pronunciation and cadence. This meticulous approach ensures that the model reflects the linguistic richness and cultural diversity within the Catalan-speaking community, offering seamless communication in previously underserved dialects.
In addition to training the Matcha-TTS model for Catalan, integrating the eSpeak phonemizer played a crucial role in enhancing the naturalness and accuracy of generated speech. A TTS (Text-to-Speech) system comprises several components, each contributing to the overall quality of synthesized speech. The first component involves text preprocessing, where the input text undergoes normalization and linguistic analysis to identify words, punctuation, and linguistic features. Next, the text is converted into phonemes, the smallest units of sound in a language, through a process called phonemization. This step is where the eSpeak phonemizer shines, as it accurately converts Catalan text into phonetic representations, capturing the subtle nuances of pronunciation specific to Catalan. You can find the eSpeak NG version we used [here](https://github.com/projecte-aina/espeak-ng/tree/dev-ca).
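As an illustration of this phonemization step, the following sketch uses the `phonemizer` Python package with the eSpeak NG backend to turn Catalan text into phonemes. It assumes that `phonemizer` and an eSpeak NG build with the Catalan voice (ideally the projecte-aina fork linked above) are installed, and it is a usage sketch rather than the exact preprocessing code of this model.
```python
# Phonemization sketch: Catalan text -> phoneme string via eSpeak NG.
# Assumes the `phonemizer` package and an eSpeak NG build with the Catalan
# voice are installed (ideally the projecte-aina fork linked above).
from phonemizer import phonemize

text = "Bon dia, com va tot?"
phonemes = phonemize(
    text,
    language="ca",             # Catalan voice in eSpeak NG
    backend="espeak",
    with_stress=True,          # keep stress marks, useful for prosody
    preserve_punctuation=True, # punctuation carries phrasing information
)
print(phonemes)
```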
After phonemization, the phonemes are passed to the synthesis component, where they are transformed into audible speech. Here, the Matcha-TTS model takes center stage, generating high-quality speech output based on the phonetic input. The model's training, fine-tuning, and adaptation to Catalan ensure that the synthesized speech retains the natural rhythm, intonation, and pronunciation patterns of the language, thereby enhancing the overall user experience.
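Concretely, the acoustic model produces a mel-spectrogram by integrating the decoder's learned vector field from Gaussian noise over a small number of ODE steps; a separate vocoder then turns that spectrogram into a waveform. The following is a conceptual fixed-step Euler sketch of that integration, where `vector_field`, its signature, and the shapes are hypothetical placeholders rather than the Matcha-TTS API.
```python
# Few-step ODE sampling sketch (fixed-step Euler), illustrating how an
# OT-CFM-trained decoder turns noise into mel-spectrogram frames.
# `vector_field` is a hypothetical placeholder for the trained decoder.
import torch

@torch.no_grad()
def euler_sample(vector_field, cond, shape, n_steps: int = 10):
    """Integrate dx/dt = v(x, t | cond) from t=0 (noise) to t=1 (mel frames)."""
    x = torch.randn(shape)                      # start from Gaussian noise
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = torch.full((shape[0],), i * dt)     # current time for the whole batch
        x = x + dt * vector_field(x, t, cond)   # one explicit Euler step
    return x                                    # predicted mel-spectrogram
```
Because the optimal-transport paths learned with OT-CFM are close to straight lines, a handful of steps is typically enough, which is what makes synthesis fast.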
Finally, the synthesized speech undergoes post-processing, where prosodic features such as pitch, duration, and emphasis are applied to further refine the output and make it sound more natural and expressive. By integrating the eSpeak phonemizer into the TTS pipeline and adapting it for Catalan, alongside training the Matcha-TTS model for the language, we have created a comprehensive and effective system for generating high-quality Catalan speech. This combination of advanced techniques and meticulous attention to linguistic detail is instrumental in bridging language barriers and facilitating communication for Catalan speakers worldwide.
## Intended Uses and Limitations
This model is intended to serve as an acoustic feature generator for multispeaker text-to-speech systems for the Catalan language.
It has been fine-tuned using a Catalan phonemizer; therefore, if the model is used for other languages, it may not produce intelligible samples after its output is mapped
to a speech waveform.
The quality of the samples can vary depending on the speaker.
This may be due to the model's sensitivity to speaker-specific frequencies, as well as to the quality of the recordings available for each speaker.
## Samples
* Female samples:
<div class="table-wrapper">
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Valencian</th>
<th class="tg-0pky">Occidental</th>
<th class="tg-0pky">Balear</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<audio controls="" preload="none">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/valencia/spk1/0.wav" type="audio/wav">
</audio>
</td>
<td>
<audio controls="" preload="none">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/occidental/spk1/0.wav" type="audio/wav">
</audio>
</td>
<td>
<audio controls="" preload="none">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/balear/spk1/0.wav" type="audio/wav">
</audio>
</td>
</tr>
<tr>
<td>
<audio controls="" preload="none">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/valencia/spk1/1.wav" type="audio/wav">
</audio>
</td>
<td>
<audio controls="" preload="none">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/occidental/spk1/1.wav" type="audio/wav">
</audio>
</td>
<td>
<audio controls="" preload="none">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/balear/spk1/1.wav" type="audio/wav">
</audio>
</td>
</tr>
<tr>
<td>
<audio controls="" preload="none">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/valencia/spk1/2.wav" type="audio/wav">
</audio>
</td>
<td>
<audio controls="" preload="none">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/occidental/spk1/2.wav" type="audio/wav">
</audio>
</td>
<td>
<audio controls="" preload="none">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/balear/spk1/2.wav" type="audio/wav">
</audio>
</td>
</tr>
</tbody>
</table>
</div>
* Male samples:
<div class="table-wrapper">
<table class="tg">
<thead>
<tr>
<th class="tg-0pky">Valencian</th>
<th class="tg-0pky">Occidental</th>
<th class="tg-0pky">Balear</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<audio controls="" preload="none" style="width: 200px">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/valencia/spk0/0.wav" type="audio/wav">
</audio>
</td>
<td>
<audio controls="" preload="none" style="width: 200px">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/occidental/spk0/0.wav" type="audio/wav">
</audio>
</td>
<td>
<audio controls="" preload="none" style="width: 200px">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/balear/spk0/0.wav" type="audio/wav">
</audio>
</td>
</tr>
<tr>
<td>
<audio controls="" preload="none" style="width: 200px">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/valencia/spk0/1.wav" type="audio/wav">
</audio>
</td>
<td>
<audio controls="" preload="none" style="width: 200px">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/occidental/spk0/1.wav" type="audio/wav">
</audio>
</td>
<td>
<audio controls="" preload="none" style="width: 200px">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/balear/spk0/1.wav" type="audio/wav">
</audio>
</td>
</tr>
<tr>
<td>
<audio controls="" preload="none" style="width: 200px">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/valencia/spk0/2.wav" type="audio/wav">
</audio>
</td>
<td>
<audio controls="" preload="none" style="width: 200px">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/occidental/spk0/2.wav" type="audio/wav">
</audio>
</td>
<td>
<audio controls="" preload="none" style="width: 200px">
audio not supported
<source src="https://github.com/mllopartbsc/assets/raw/c6a393237e712851dd7cc7d10c70dde29d3412ac/matcha_tts_catalan/balear/spk0/2.wav" type="audio/wav">
</audio>
</td>
</tr>
</tbody>
</table>
</div>
## Citation
If this code contributes to your research, please cite the work:
```
@misc{mehta2024matchatts,
  title={Matcha-TTS: A fast TTS architecture with conditional flow matching},
  author={Shivam Mehta and Ruibo Tu and Jonas Beskow and Éva Székely and Gustav Eje Henter},
  year={2024},
  eprint={2309.03199},
  archivePrefix={arXiv},
  primaryClass={eess.AS}
}
```
## Additional Information
### Author
The Language Technologies Unit at the Barcelona Supercomputing Center.
### Contact
For further information, please send an email to <langtech@bsc.es>.
### Copyright
Copyright (c) 2023 by the Language Technologies Unit, Barcelona Supercomputing Center.
### License
[Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0)
### Funding
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/).