  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-to-speech
language:
- en
---

# Dataset Card for Dataset Name

This dataset repository includes the filtered dataset `EmoV_DB_bea_sem`, the filelists with semantic embeddings, and the model checkpoints used in our work "Llama-VITS: Enhancing TTS Synthesis with Semantic Awareness".

## Dataset Details

- **Paper:** Llama-VITS: Enhancing TTS Synthesis with Semantic Awareness
- **Curated by:** Xincan Feng, Akifumi Yoshimoto
- **Funded by:** CyberAgent Inc.
- **Repository:** https://github.com/xincanfeng/vitsGPT
- **Demo:** https://xincanfeng.github.io/Llama-VITS_demo/

## Dataset Creation

We filtered the `EmoV_DB_bea_sem` dataset from EmoV_DB (Adigwe et al., 2018), a database of emotional speech containing recordings of male and female actors in English and French. EmoV_DB covers five emotion classes: amused, angry, disgusted, neutral, and sleepy. To factor out the effect of different speakers, we first restricted the original EmoV_DB dataset to the speech of a specific female English speaker, bea. We then used Llama2 to predict the emotion label of each transcript, chosen from the above five emotion classes, and selected the audio samples whose annotated emotion matches the prediction.

The filtered dataset contains 22.8 minutes of recordings for training. We named the filtered dataset `EmoV_DB_bea_sem` and investigated how the semantic embeddings from Llama2 affect naturalness and expressiveness on it. Please refer to our paper for more information.
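
The label-matching step described above can be sketched as follows. This is a minimal illustration only: `toy_predict` and the example records are hypothetical stand-ins for the actual Llama2 predictions and the real EmoV_DB transcripts.

```python
# Sketch of the filtering rule: keep a sample only when the emotion
# predicted from its transcript agrees with its annotated emotion.
EMOTIONS = {"amused", "angry", "disgusted", "neutral", "sleepy"}

def filter_matching_samples(samples, predict_emotion):
    """Keep samples whose annotated emotion equals the predicted label."""
    return [s for s in samples
            if predict_emotion(s["transcript"]) == s["emotion"]]

# Illustrative records (not taken from the real dataset).
samples = [
    {"transcript": "That tickles!", "emotion": "amused"},
    {"transcript": "Go away.", "emotion": "angry"},
    {"transcript": "I can barely keep my eyes open.", "emotion": "amused"},
]

def toy_predict(text):
    # Hypothetical stand-in for the Llama2 emotion classifier.
    return "amused" if "tickles" in text.lower() else "sleepy"

kept = filter_matching_samples(samples, toy_predict)
# Only the first record survives: predicted and annotated labels agree.
```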

## Citation

If our work is useful to you, please cite our paper: "Llama-VITS: Enhancing TTS Synthesis with Semantic Awareness".

```bibtex
@misc{feng2024llamavits,
    title={Llama-VITS: Enhancing TTS Synthesis with Semantic Awareness},
    author={Xincan Feng and Akifumi Yoshimoto},
    year={2024},
    eprint={2404.06714},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```