|
# Lingala Text-to-Speech |
|
This model was trained on OpenSLR's 71.6-hour aligned Lingala Bible dataset.
|
## Model description |
|
The model is a Conditional Variational Autoencoder with Adversarial Learning (VITS), an end-to-end approach to the text-to-speech task. It was trained with the ESPnet2 toolkit.
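
As a quick sanity check that the config distributed with the model (`config.yaml`, see Usage below) actually specifies this architecture, you can inspect it directly. A minimal sketch, assuming the standard ESPnet2 config layout where the top-level `tts` key names the model type:

```python
# Minimal sketch: inspect the ESPnet2 training config.
# Assumes the standard layout where the "tts" key names the architecture.
import yaml

with open("config.yaml") as f:
    config = yaml.safe_load(f)

print(config["tts"])  # expected to print "vits" for this model
```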
|
## Usage |
|
First, install ESPnet:
|
```sh
pip install espnet
```
|
Download the model checkpoint and the config file from this repo.
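
If this repo is hosted on the Hugging Face Hub, the two files can also be fetched programmatically. A minimal sketch using `huggingface_hub` (the repo id below is a placeholder, not this repo's actual id):

```python
# Minimal sketch: download the config and checkpoint from the Hub.
# "your-username/lingala-tts" is a placeholder repo id.
from huggingface_hub import hf_hub_download

config_path = hf_hub_download("your-username/lingala-tts", "config.yaml")
model_path = hf_hub_download("your-username/lingala-tts", "train.total_count.best.pth")
```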
|
To generate a wav file with this model, run the following Python code:
|
```python
from espnet2.bin.tts_inference import Text2Speech
import soundfile as sf

# Load the model from the downloaded config and checkpoint
text2speech = Text2Speech(
    train_config="config.yaml",
    model_file="train.total_count.best.pth",
)

# Synthesize a waveform from Lingala text
wav = text2speech("oyo kati na Ye ozwi lisiko mpe bolimbisi ya masumu")["wav"]

# Write the waveform as 16-bit PCM at the model's sampling rate
sf.write("outfile.wav", wav.numpy(), text2speech.fs, "PCM_16")
```
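
Alternatively, if the model is published on the Hugging Face Hub, `Text2Speech.from_pretrained` can download and load it in one step. A minimal sketch (the model tag below is a placeholder, not the actual tag):

```python
# Minimal sketch: load the model directly from a hub tag
# instead of local files. "your-username/lingala-tts" is a placeholder.
from espnet2.bin.tts_inference import Text2Speech
import soundfile as sf

text2speech = Text2Speech.from_pretrained("your-username/lingala-tts")
wav = text2speech("oyo kati na Ye ozwi lisiko mpe bolimbisi ya masumu")["wav"]
sf.write("outfile.wav", wav.numpy(), text2speech.fs, "PCM_16")
```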
|