# Lingala Text-to-Speech

This model was trained on OpenSLR's aligned Lingala Bible dataset (71.6 hours of speech).

## Model description

The model is VITS (Conditional Variational Autoencoder with Adversarial Learning), an end-to-end approach to the text-to-speech task. It was trained with the ESPnet2 toolkit.


## Usage

First, install ESPnet:
``` sh
pip install espnet
```
Download the model and the config files from this repo.
To generate a WAV file with this model, run the following Python code:
``` python
from espnet2.bin.tts_inference import Text2Speech
import soundfile as sf

text2speech = Text2Speech(train_config="config.yaml", model_file="train.total_count.best.pth")
wav = text2speech("oyo kati na Ye ozwi lisiko mpe bolimbisi ya masumu")["wav"]
sf.write("outfile.wav", wav.numpy(), text2speech.fs, "PCM_16")
```