---
license: mit
dataset_info:
  features:
    - name: version
      dtype: string
    - name: data
      list:
        - name: a
          dtype: int64
        - name: b
          dtype: float64
        - name: c
          dtype: string
        - name: d
          dtype: bool
  splits:
    - name: train
      num_bytes: 58
      num_examples: 1
  download_size: 2749
  dataset_size: 58
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
task_categories:
  - text-to-speech
language:
  - en
---

# Dataset Card for Llama-VITS_data

This repository contains the data accompanying our work "Llama-VITS: Enhancing TTS Synthesis with Semantic Awareness", including:

- The filtered dataset EmoV_DB_bea_sem
- Filelists with semantic embeddings
- Model checkpoints
- Human evaluation templates
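
The metadata above advertises a default configuration with a single `train` split, so the records can be pulled with the `datasets` library. The snippet below is a minimal loading sketch; the Hub repository id is an assumption based on the maintainer's username and may need to be adjusted.

```python
# Minimal loading sketch, assuming the dataset lives at a Hub repo named
# "xincan/Llama-VITS_data" (an assumption; replace with the actual path).
from datasets import load_dataset

ds = load_dataset("xincan/Llama-VITS_data", split="train")

# Per the metadata above, each record has a string `version` field and a
# `data` list whose entries hold fields a (int64), b (float64), c (string),
# and d (bool).
record = ds[0]
print(record["version"])
print(record["data"])
```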

## Dataset Details

### Dataset Creation

We filtered the EmoV_DB_bea_sem dataset from EmoV_DB (Adigwe et al., 2018), a database of emotional speech containing recordings from male and female actors in English and French. EmoV_DB covers five emotion classes: amused, angry, disgusted, neutral, and sleepy. To factor out the effect of different speakers, we restricted the original EmoV_DB dataset to the speech of a single female English speaker, bea. We then used Llama2 to predict the emotion label of each transcript from the five classes above and kept only the audio samples whose annotated emotion matched the prediction. The filtered dataset contains 22.8 minutes of recordings for training. We named it EmoV_DB_bea_sem and used it to investigate how the semantic embeddings from Llama2 affect naturalness and expressiveness. Please refer to our paper for more information.
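
As a rough illustration of this filtering step, the sketch below asks a Llama 2 chat model to label a transcript with one of the five EmoV_DB emotion classes and keeps only the samples whose annotation agrees with the prediction. The model id, prompt wording, and record layout are illustrative assumptions, not the exact pipeline used in the paper.

```python
# Illustrative sketch of emotion-based filtering with a Llama 2 chat model.
# The checkpoint below is gated on the Hub and is an assumed choice.
from transformers import pipeline

EMOTIONS = ["amused", "angry", "disgusted", "neutral", "sleepy"]

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # assumed checkpoint
)

def predict_emotion(transcript: str) -> str:
    """Ask the model to pick one of the five EmoV_DB emotion classes."""
    prompt = (
        "Classify the emotion of the following sentence as one of "
        f"{', '.join(EMOTIONS)}.\nSentence: {transcript}\nEmotion:"
    )
    output = generator(prompt, max_new_tokens=5)[0]["generated_text"]
    completion = output[len(prompt):].lower()
    # Fall back to "neutral" if no known label appears in the completion.
    return next((e for e in EMOTIONS if e in completion), "neutral")

def keep_sample(record: dict) -> bool:
    # Keep only samples whose annotated label matches the predicted one,
    # assuming records shaped like {"transcript": ..., "emotion": ...}.
    return predict_emotion(record["transcript"]) == record["emotion"]
```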

## Citation

If our work is useful to you, please cite our paper: "Llama-VITS: Enhancing TTS Synthesis with Semantic Awareness".

```bibtex
@misc{feng2024llamavits,
      title={Llama-VITS: Enhancing TTS Synthesis with Semantic Awareness},
      author={Xincan Feng and Akifumi Yoshimoto},
      year={2024},
      eprint={2404.06714},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```