---
license: mit
dataset_info:
  features:
  - name: version
    dtype: string
  - name: data
    list:
    - name: a
      dtype: int64
    - name: b
      dtype: float64
    - name: c
      dtype: string
    - name: d
      dtype: bool
  splits:
  - name: train
    num_bytes: 58
    num_examples: 1
  download_size: 2749
  dataset_size: 58
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
task_categories:
- text-to-speech
language:
- en
---
# Dataset Card for EmoV_DB_bea_sem
This dataset repository includes the filtered dataset `EmoV_DB_bea_sem`, the filelists with semantic embeddings, and the model checkpoints used in our work "Llama-VITS: Enhancing TTS Synthesis with Semantic Awareness".
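As a quick sanity check, the `train` split can be loaded with the 🤗 `datasets` library. A minimal sketch follows; the repository id below is a placeholder, so substitute this dataset's actual Hub path:

```python
from datasets import load_dataset

# Placeholder repo id -- replace with this dataset's actual Hub path.
ds = load_dataset("user/EmoV_DB_bea_sem", split="train")
print(ds.features)  # `version` (string) plus the nested `data` list
print(ds[0])
```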
## Dataset Details
- **Paper:** Llama-VITS: Enhancing TTS Synthesis with Semantic Awareness
- **Curated by:** Xincan Feng, Akifumi Yoshimoto
- **Funded by:** CyberAgent Inc
- **Repository:** https://github.com/xincanfeng/vitsGPT
- **Demo:** https://xincanfeng.github.io/Llama-VITS_demo/
## Dataset Creation
We filtered the `EmoV_DB_bea_sem` dataset from EmoV_DB (Adigwe et al., 2018), a database of emotional speech containing recordings of male and female actors in English and French. EmoV_DB covers 5 emotion classes: amused, angry, disgusted, neutral, and sleepy. To factor out the effect of different speakers, we filtered the original EmoV_DB dataset down to the speech of a specific female English speaker, bea. We then used Llama2 to predict the emotion label of each transcript, chosen from the above 5 emotion classes, and kept only the audio samples whose recorded emotion matches the prediction; a sketch of this step follows below.
The filtered dataset contains 22.8 minutes of recordings for training. We named the filtered dataset `EmoV_DB_bea_sem` and investigated how the semantic embeddings from Llama2 affect naturalness and expressiveness on it. Please refer to our paper for more information.
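A minimal sketch of this filtering step, not the paper's exact code: the `speaker`, `transcript`, and `emotion` fields and the `predict_emotion` callable (e.g. a Llama2 prompt returning one of the 5 classes) are hypothetical names for illustration.

```python
# Hypothetical sketch of the filtering described above, not the authors' exact code.
EMOTIONS = {"amused", "angry", "disgusted", "neutral", "sleepy"}

def keep_sample(sample: dict, predict_emotion) -> bool:
    # Keep only the target female English speaker, "bea".
    if sample["speaker"] != "bea":
        return False
    # Ask the LLM (e.g. Llama2) to pick one of the 5 emotion classes for the
    # transcript, then keep the sample only if the prediction matches the
    # emotion label recorded in EmoV_DB.
    predicted = predict_emotion(sample["transcript"])
    return predicted in EMOTIONS and predicted == sample["emotion"]

# filtered = [s for s in emov_db_samples if keep_sample(s, predict_emotion)]
```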
## Citation
If our work is useful to you, please cite our paper: "Llama-VITS: Enhancing TTS Synthesis with Semantic Awareness".
```bibtex
@misc{feng2024llamavits,
      title={Llama-VITS: Enhancing TTS Synthesis with Semantic Awareness},
      author={Xincan Feng and Akifumi Yoshimoto},
      year={2024},
      eprint={2404.06714},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```