
MELD-ST: An Emotion-aware Speech Translation Dataset

Overview

MELD-ST is an emotion-aware, multi-language speech translation dataset extracted from the TV show "Friends." It includes English, Japanese, and German subtitles along with their corresponding timestamps, and it is designed for speech translation and related natural language processing tasks.

Contents

The dataset is partitioned into train, development, and test subsets to support model training, validation, and evaluation. It is accompanied by audio files encoded as PCM at 256 kb/s, with a 16,000 Hz sample rate and a single (mono) channel.

Structure of the CSV files:

id, dia_id, utt_id, emotion, sentiment, English, German or Japanese (tgt_language), season, episode, speaker, English_begin_time, English_end_time, German_begin_time or Japanese_begin_time, German_end_time or Japanese_end_time.

Using the season, episode, and the begin and end times of each subtitle, the corresponding audio and video segments can be located. The audio files are already cut and saved; if you need them, please contact us by e-mail: yahata@nlp.ist.i.kyoto-u.ac.jp.
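For reference, here is a minimal sketch of loading one of the CSV files with pandas and inspecting the columns listed above; the file path is illustrative and should be adjusted to your local copy.

import pandas as pd

# Illustrative path; point it at your local copy of the dataset.
df = pd.read_csv("MELD-ST/ENG_DEU/deu_train.csv")

print(df.columns.tolist())
# Key fields per utterance: the emotion/sentiment labels, the source and
# target subtitles, and the begin/end timestamps used to locate the clip.
print(df[["id", "speaker", "emotion", "sentiment", "English"]].head())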

Directory tree structure

Each audio file is named after its ID in the corresponding CSV file. The directory tree structure is illustrated below:

MELD-ST
│
├── ENG_DEU
│   ├── deu_test.csv
│   ├── deu_train.csv
│   └── deu_dev.csv
├── ENG_JPN
│   ├── jpn_test.csv
│   ├── jpn_train.csv
│   └── jpn_dev.csv
└── README.md

MELD-ST_audio
│
├── ENG_DEU
│   ├── DEU
│   │   ├── dev
│   │   │    ├── dev_0.wav
│   │   │    ├── dev_1.wav
│   │   ├── test
│   │   │    ├── test_0.wav
│   │   │    ├── test_1.wav
│   │   └── train
│   │        ├── train_0.wav
│   │        ├── train_1.wav
│   └── ENG
│       ├── dev
│       │    ├── dev_0.wav
│       │    ├── dev_1.wav
│       ├── test
│       │    ├── test_0.wav
│       │    ├── test_1.wav
│       └── train
│            ├── train_0.wav
│            ├── train_1.wav
└── ENG_JPN
    ├── ENG
    │   ├── dev
    │   │    ├── dev_0.wav
    │   │    ├── dev_1.wav
    │   ├── test
    │   │    ├── test_0.wav
    │   │    ├── test_1.wav
    │   └── train
    │        ├── train_0.wav
    │        ├── train_1.wav
    └── JPN
        ├── dev
        │    ├── dev_0.wav
        │    ├── dev_1.wav
        ├── test
        │    ├── test_0.wav
        │    ├── test_1.wav
        └── train
             ├── train_0.wav
             ├── train_1.wav
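The mapping from a CSV row to its audio clip is not fully specified here, but a plausible sketch, assuming each file name is simply the row's id with a .wav extension, is shown below; verify the layout against your copy of the data.

import soundfile as sf  # any WAV reader works; soundfile is just one option

# Assumed layout: MELD-ST_audio/<PAIR>/<LANG>/<split>/<id>.wav, where the id
# already contains the split prefix (e.g. "dev_0").
def audio_path(pair: str, lang: str, split: str, utt_id: str) -> str:
    return f"MELD-ST_audio/{pair}/{lang}/{split}/{utt_id}.wav"

wav, sr = sf.read(audio_path("ENG_DEU", "DEU", "dev", "dev_0"))
print(sr, wav.shape)  # expected: 16000 Hz, mono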

Audio file length

The number of utterances and the total duration of the audio in each split are shown below:

              utts    En speech (h)   Target speech (h)
En-Ja  Train  8,069   6.4             6.1
       Dev.   1,008   0.8             0.5
       Test   1,008   0.7             0.8
En-De  Train  9,314   6.9             7.1
       Dev.   1,164   0.8             0.9
       Test   1,164   0.8             1.0
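The figures above can in principle be re-derived from the timestamp columns. The sketch below assumes the *_begin_time / *_end_time columns are numeric offsets in seconds; verify this against your copy of the data before relying on it.

import pandas as pd

def speech_hours(csv_path, begin_col, end_col):
    # Sum per-utterance durations (end - begin) and convert seconds to hours.
    df = pd.read_csv(csv_path)
    return (df[end_col] - df[begin_col]).sum() / 3600.0

# Illustrative paths and column names, matching the CSV structure listed above.
print(speech_hours("MELD-ST/ENG_JPN/jpn_train.csv",
                   "English_begin_time", "English_end_time"))
print(speech_hours("MELD-ST/ENG_JPN/jpn_train.csv",
                   "Japanese_begin_time", "Japanese_end_time"))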

Emotion

Each subtitle line is annotated with an emotion label and a sentiment label, providing valuable additional information for fine-tuning and analysis. The distribution of the emotion labels is shown below.

              Anger   Disgust  Fear   Sadness  Joy     Surprise  Neutral
En-Ja  Train  12.18%  2.95%    2.59%  7.47%    15.91%  11.35%    47.54%
       Dev.   11.81%  2.18%    3.27%  8.23%    17.46%  9.50%     47.52%
       Test   8.43%   3.87%    2.48%  7.24%    18.45%  12.00%    47.52%
En-De  Train  11.76%  2.80%    2.49%  7.04%    16.87%  11.77%    47.26%
       Dev.   10.91%  2.15%    2.66%  8.51%    17.35%  11.17%    47.25%
       Test   8.76%   3.35%    2.75%  7.90%    24.14%  11.25%    47.25%
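A distribution like the one above can be recomputed for any split with pandas, assuming the emotion column holds one of the seven labels per row:

import pandas as pd

# Illustrative path; any of the split CSVs works the same way.
df = pd.read_csv("MELD-ST/ENG_DEU/deu_train.csv")
# Relative frequency of each emotion label, in percent.
print((df["emotion"].value_counts(normalize=True) * 100).round(2))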

Limitations

While the MELD-ST Dataset offers valuable resources for natural language processing tasks, it also comes with certain limitations that users should be aware of:

  • Imperfect alignments: Some subtitle alignments are imperfect, which can lead to inaccurate timestamp synchronization between the subtitles and the audio/video content. In addition, some Japanese sentences do not correspond to the audio.

Acknowledgment and Mitigation

Efforts have been made to address alignment issues where possible. However, users should exercise caution and consider these limitations when utilizing the dataset for their research or applications. Future updates may include improvements to alignment accuracy and additional quality assurance measures.

Notes on how to access the dataset

To gain access to this dataset, please send a request through Hugging Face and also send an e-mail to the authors (ge23zuh@mytum.de, {yahata,sshimizu}@nlp.ist.i.kyoto-u.ac.jp) from the same e-mail address as the one associated with your Hugging Face account. You need to agree to the following conditions:

  • Do not re-distribute the dataset without our permission.
  • The dataset contains copyrighted content, and we release it based on the concept of fair use of copyrighted materials.
  • The dataset can only be used for research purposes. Any other use is explicitly prohibited.

Once we confirm your e-mail identity and your agreement to these terms, we will grant access to the dataset.
