Dataset Card for Llama-VITS_data
This repository contains data related to our work "Llama-VITS: Enhancing TTS Synthesis with Semantic Awareness", including:
- Filtered dataset EmoV_DB_bea_sem
- Filelists with semantic embeddings
- Model checkpoints
- Human evaluation templates
Dataset Details
- Paper: Llama-VITS: Enhancing TTS Synthesis with Semantic Awareness
- Curated by: Xincan Feng, Akifumi Yoshimoto
- Funded by: CyberAgent Inc.
- Repository: https://github.com/xincanfeng/vitsGPT
- Demo: https://xincanfeng.github.io/Llama-VITS_demo/
Dataset Creation
We filtered the EmoV_DB_bea_sem dataset from EmoV_DB (Adigwe et al., 2018), a database of emotional speech containing data from male and female actors in English and French. EmoV_DB covers 5 emotion classes: amused, angry, disgusted, neutral, and sleepy. To factor out the effect of different speakers, we filtered the original EmoV_DB dataset down to the speech of one specific female English speaker, bea. We then used Llama2 to predict the emotion label of each transcript, chosen from the above 5 emotion classes, and selected the audio samples whose annotated emotion matches the prediction.
The filtered dataset contains 22.8 minutes of recordings for training. We named the filtered dataset EmoV_DB_bea_sem and investigated how the semantic embeddings from Llama2 affect naturalness and expressiveness on it. Please refer to our paper for more information.
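The two-stage filtering described above (restrict to speaker bea, then keep only samples whose annotated emotion agrees with the label predicted from the transcript) could be sketched as follows. This is a hypothetical illustration: the record fields and the `predict_emotion` stand-in are assumptions, and the real pipeline in the linked repository prompts Llama2 for the label rather than using a heuristic.

```python
# Hypothetical sketch of the EmoV_DB -> EmoV_DB_bea_sem filtering logic.
# In the actual pipeline, predict_emotion would query Llama2 with the
# transcript; a toy keyword heuristic is used here so the sketch runs.

EMOTIONS = {"amused", "angry", "disgusted", "neutral", "sleepy"}


def predict_emotion(transcript: str) -> str:
    """Stand-in for a Llama2 prompt returning one of the 5 emotion labels."""
    for label in EMOTIONS:
        if label in transcript.lower():
            return label
    return "neutral"


def filter_bea_sem(records):
    """Keep bea's recordings whose annotated emotion matches the prediction."""
    kept = []
    for rec in records:
        # Stage 1: restrict to the single female English speaker, bea.
        if rec["speaker"] != "bea":
            continue
        # Stage 2: keep the sample only if the predicted emotion agrees
        # with the emotion class the recording was annotated with.
        if predict_emotion(rec["transcript"]) == rec["emotion"]:
            kept.append(rec)
    return kept


# Illustrative records; the real metadata fields may differ.
records = [
    {"speaker": "bea", "transcript": "I feel so sleepy today.", "emotion": "sleepy"},
    {"speaker": "bea", "transcript": "That joke was hilarious.", "emotion": "angry"},
    {"speaker": "sam", "transcript": "I feel so sleepy today.", "emotion": "sleepy"},
]

print([r["transcript"] for r in filter_bea_sem(records)])
```

Only the first record survives: the second is bea's but its predicted label disagrees with the annotation, and the third belongs to another speaker.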
Citation
If our work is useful to you, please cite our paper: "Llama-VITS: Enhancing TTS Synthesis with Semantic Awareness".
@misc{feng2024llamavits,
title={Llama-VITS: Enhancing TTS Synthesis with Semantic Awareness},
author={Xincan Feng and Akifumi Yoshimoto},
year={2024},
eprint={2404.06714},
archivePrefix={arXiv},
primaryClass={cs.CL}
}