---
title: "Ukrainian TTS"
emoji: 🐌
colorFrom: blue
colorTo: yellow
sdk: gradio
sdk_version: 3.3
python_version: 3.9
app_file: app.py
pinned: false
---

# Ukrainian TTS 📢🤖
Ukrainian TTS (text-to-speech) using Coqui TTS.

Link to online demo -> [https://huggingface.co/spaces/robinhad/ukrainian-tts](https://huggingface.co/spaces/robinhad/ukrainian-tts)
Link to source code and models -> [https://github.com/robinhad/ukrainian-tts](https://github.com/robinhad/ukrainian-tts)

Code is licensed under the `MIT License`; models are licensed under the `GNU GPL v3 License`.
# Support
If you like my work, please support it -> [https://send.monobank.ua/jar/48iHq4xAXm](https://send.monobank.ua/jar/48iHq4xAXm)
# Example

`Mykyta (male)`:

https://user-images.githubusercontent.com/5759207/178158485-29a5d496-7eeb-4938-8ea7-c345bc9fed57.mp4

`Olena (female)`:

https://user-images.githubusercontent.com/5759207/178158492-8504080e-2f13-43f1-83f0-489b1f9cd66b.mp4

# How to use:
1. `pip install -r requirements.txt`.
2. Download the model from the "Releases" tab.
3. Launch as a one-time command:
```
tts --text "Text for TTS" \
    --model_path path/to/model.pth \
    --config_path path/to/config.json \
    --out_path folder/to/save/output.wav
```
or alternatively launch a web server using:
```
tts-server --model_path path/to/model.pth \
    --config_path path/to/config.json
```
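
Once the server is running, it serves a small demo page and an HTTP endpoint you can call directly. The sketch below assumes Coqui TTS's default port `5002` and its `/api/tts` GET endpoint; these are Coqui TTS defaults rather than anything specific to this repo, so adjust if your version differs:
```
# Query the running tts-server; assumes Coqui TTS's default port 5002
# and its /api/tts endpoint (verify against your installed version).
curl -G "http://localhost:5002/api/tts" \
    --data-urlencode "text=Текст для синтезу" \
    --output output.wav
```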

# How to train: 🏋️
1. Refer to ["Nervous beginner guide"](https://tts.readthedocs.io/en/latest/tutorial_for_nervous_beginners.html) in Coqui TTS docs.
2. Instead of the provided `config.json`, use the one from this repo (a typical launch command is sketched below).
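
For orientation only, a Coqui TTS training run is usually launched with something like the command below; the exact entry-point script varies between Coqui TTS versions, so treat `TTS/bin/train_tts.py` as an assumption and follow the guide above for your installed version:
```
# Hypothetical training launch; check the beginner guide for the exact
# script name in your Coqui TTS version. config.json is the one from this repo.
CUDA_VISIBLE_DEVICES=0 python TTS/bin/train_tts.py --config_path config.json
```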


# Attribution 🤝

- Model training - [Yurii Paniv @robinhad](https://github.com/robinhad)   
- Mykyta, Olena and Lada dataset - [Yehor Smoliakov @egorsmkv](https://github.com/egorsmkv)   
- Silence cutting using [HMM-GMM](https://github.com/proger/uk) - [Volodymyr Kyrylov @proger](https://github.com/proger)  
- Autostress (with dictionary) using [ukrainian-word-stress](https://github.com/lang-uk/ukrainian-word-stress) - [Oleksiy Syvokon @asivokon](https://github.com/asivokon)    
- Autostress (with model) using [ukrainian-accentor](https://github.com/egorsmkv/ukrainian-accentor) - [Bohdan Mykhailenko @NeonBohdan](https://github.com/NeonBohdan) + [Yehor Smoliakov @egorsmkv](https://github.com/egorsmkv)