---
license: apache-2.0
datasets:
- projecte-aina/festcat_trimmed_denoised
- projecte-aina/openslr-slr69-ca-trimmed-denoised
- lj_speech
- blabble-io/libritts_r
tags:
- vocoder
- mel
- vocos
- hifigan
- tts
---

# Vocos-mel-22khz

## Model Details

### Model Description

**Vocos** is a fast neural vocoder designed to synthesize audio waveforms from acoustic features. Unlike typical GAN-based vocoders, Vocos does not model audio samples in the time domain. Instead, it generates spectral coefficients, enabling rapid audio reconstruction through the inverse Fourier transform.

This version of Vocos uses 80-bin mel spectrograms as acoustic features, which have been widespread in the TTS domain since the introduction of [hifi-gan](https://github.com/jik876/hifi-gan/blob/master/meldataset.py).

The goal of this model is to provide a faster alternative to hifi-gan that is compatible with the acoustic output of several TTS models. We are grateful to the authors for open-sourcing the code, which allowed us to modify and train this version.

## Intended Uses and Limitations

The model is intended to serve as a vocoder that synthesizes audio waveforms from mel spectrograms. It is trained on speech; if used on other audio domains, it may not produce high-quality samples.

## How to Get Started with the Model

Use the code below to get started with the model.

### Installation

To use Vocos in inference mode only, install it with:

```bash
pip install git+https://github.com/langtech-bsc/vocos.git@matcha
```

### Reconstruct audio from mel-spectrogram

```python
import torch
from vocos import Vocos

vocos = Vocos.from_pretrained("BSC-LT/vocos-mel-22khz")

mel = torch.randn(1, 80, 256)  # B, C, T
audio = vocos.decode(mel)
```

### Integrate with existing TTS models:

* Matcha-TTS: Open In Colab
* Fastpitch: Open In Colab

### Copy-synthesis from a file:

```python
import torchaudio

y, sr = torchaudio.load(YOUR_AUDIO_FILE)
if y.size(0) > 1:  # mix to mono
    y = y.mean(dim=0, keepdim=True)
y = torchaudio.functional.resample(y, orig_freq=sr, new_freq=22050)
y_hat = vocos(y)  # extracts mel features and reconstructs the waveform
```

### ONNX

We also release an ONNX version of the model; you can try it in Colab: Open In Colab

## Training Details

### Training Data

The model was trained on 4 speech datasets:

| Dataset    | Language | Hours |
|------------|----------|-------|
| LibriTTS-r | en       | 585   |
| LJSpeech   | en       | 24    |
| Festcat    | ca       | 22    |
| OpenSLR69  | ca       | 5     |

### Training Procedure

The model was trained for 1.8M steps (183 epochs) with a batch size of 16 for stability. We used a cosine scheduler with an initial learning rate of 5e-4. We also modified the mel spectrogram loss to use 128 bins and an fmax of 11025 Hz, instead of reusing the same configuration as the 80-bin input mel spectrogram.
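For illustration, here is a minimal sketch of what this finer-resolution mel loss could look like. The 128 bins and fmax of 11025 Hz come from the description above, and the sample rate matches the model (22050 Hz); `n_fft=1024` and `hop_length=256` are assumed hifi-gan-style defaults, not values confirmed by this card, so check the training configs for the real settings.

```python
import torch
import torchaudio

# Mel transform used only for the loss: 128 bins up to the Nyquist frequency
# (11025 Hz at a 22050 Hz sample rate), finer than the 80-bin model input.
# n_fft and hop_length are assumptions; see the repository configs for the real values.
mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=22050,
    n_fft=1024,
    hop_length=256,
    n_mels=128,
    f_min=0.0,
    f_max=11025.0,
)

def mel_loss(y_hat: torch.Tensor, y: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """L1 distance between log-mel spectrograms of generated and reference audio."""
    log_mel_hat = torch.log(mel_transform(y_hat).clamp(min=eps))
    log_mel_ref = torch.log(mel_transform(y).clamp(min=eps))
    return torch.nn.functional.l1_loss(log_mel_hat, log_mel_ref)
```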
#### Training Hyperparameters

* initial_learning_rate: 5e-4
* scheduler: cosine without warmup or restarts
* mel_loss_coeff: 45
* mrd_loss_coeff: 0.1
* batch_size: 16
* num_samples: 16384

## Evaluation

Evaluation was done using the metrics from the original repository. After 183 epochs we achieve:

* val_loss: 3.81
* f1_score: 0.94
* mel_loss: 0.25
* periodicity_loss: 0.132
* pesq_score: 3.16
* pitch_loss: 38.11
* utmos_score: 3.27

## Citation

If this code contributes to your research, please cite the work:

```
@article{siuzdak2023vocos,
  title={Vocos: Closing the gap between time-domain and Fourier-based neural vocoders for high-quality audio synthesis},
  author={Siuzdak, Hubert},
  journal={arXiv preprint arXiv:2306.00814},
  year={2023}
}
```

## Additional Information

### Author

The Language Technologies Unit from Barcelona Supercomputing Center.

### Contact

For further information, please send an email to .

### Copyright

Copyright (c) 2024 by Language Technologies Unit, Barcelona Supercomputing Center.

### License

[Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

### Funding

This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/).