---
language:
- en
- de
- ja
configs:
- config_name: ENG_DEU
  data_files:
  - split: train
    path: ENG_DEU/deu_train.csv
  - split: test
    path: ENG_DEU/deu_test.csv
  - split: valid
    path: ENG_DEU/deu_valid.csv
- config_name: ENG_JPN
  data_files:
  - split: train
    path: ENG_JPN/jpn_train.csv
  - split: test
    path: ENG_JPN/jpn_test.csv
  - split: valid
    path: ENG_JPN/jpn_valid.csv
---
# MELD-ST: An Emotion-aware Speech Translation Dataset
## Overview
MELD-ST is an emotion-aware speech translation dataset built from the TV show *Friends*. It includes English, Japanese, and German subtitles along with their corresponding timestamps, and is designed for emotion-aware speech translation and related natural language processing tasks.
## Contents
The dataset is partitioned into train, test, and development subsets for model training, validation, and evaluation. It is accompanied by audio files encoded as PCM at a bitrate of 256 kb/s, with a sample rate of 16,000 Hz and a mono channel setup.
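The audio format described above can be verified with Python's standard `wave` module. This is a sketch; the helper name `check_audio_format` is an assumption, not part of the dataset's tooling. Note that 16-bit mono PCM at 16,000 Hz yields 16000 × 16 × 1 = 256 kb/s, matching the stated bitrate.

```python
import wave

def check_audio_format(path: str) -> bool:
    """Return True if the WAV file matches the format described above."""
    with wave.open(path, "rb") as wav:
        return (
            wav.getframerate() == 16000   # 16 kHz sample rate
            and wav.getnchannels() == 1   # mono
            and wav.getsampwidth() == 2   # 16-bit samples (PCM, 256 kb/s)
        )
```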
### Structure of the CSV files
`id, dia_id, utt_id, emotion, sentiment, English, German or Japanese (tgt_language), season, episode, speaker, English_begin_time, English_end_time, German_begin_time or Japanese_begin_time, German_end_time or Japanese_end_time`

Using the season, episode, begin time, and end time of each subtitle, the corresponding audio and video segments can be located. The audio files are already cut and saved. If you need them, please contact us by e-mail: yahata@nlp.ist.i.kyoto-u.ac.jp.
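As a sketch of how a row can be used, the example below parses one invented row in the column layout described above and computes the segment durations from the begin/end timestamps (all sample values are made up for illustration):

```python
import csv
import io

# One invented example row following the ENG_DEU column layout.
sample_csv = """id,dia_id,utt_id,emotion,sentiment,English,German,season,episode,speaker,English_begin_time,English_end_time,German_begin_time,German_end_time
train_0,0,0,joy,positive,Hi there!,Hallo!,1,1,Joey,12.5,13.9,11.8,13.1
"""

row = next(csv.DictReader(io.StringIO(sample_csv)))

# Segment duration in seconds for each language's audio clip.
en_duration = float(row["English_end_time"]) - float(row["English_begin_time"])
de_duration = float(row["German_end_time"]) - float(row["German_begin_time"])
print(f"{en_duration:.1f}s English, {de_duration:.1f}s German")
# → 1.4s English, 1.3s German
```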
## Directory tree structure
Each audio file is named using its ID from the CSV file. The directory tree structure is illustrated below:
```
MELD-ST
│
├── ENG_DEU
│   ├── deu_test.csv
│   ├── deu_train.csv
│   └── deu_dev.csv
├── ENG_JPN
│   ├── jpn_test.csv
│   ├── jpn_train.csv
│   └── jpn_dev.csv
└── README.md

MELD-ST_audio
│
├── ENG_DEU
│   ├── DEU
│   │   ├── dev
│   │   │   ├── dev_0.wav
│   │   │   └── dev_1.wav
│   │   ├── test
│   │   │   ├── test_0.wav
│   │   │   └── test_1.wav
│   │   └── train
│   │       ├── train_0.wav
│   │       └── train_1.wav
│   └── ENG
│       ├── dev
│       │   ├── dev_0.wav
│       │   └── dev_1.wav
│       ├── test
│       │   ├── test_0.wav
│       │   └── test_1.wav
│       └── train
│           ├── train_0.wav
│           └── train_1.wav
└── ENG_JPN
    ├── ENG
    │   ├── dev
    │   │   ├── dev_0.wav
    │   │   └── dev_1.wav
    │   ├── test
    │   │   ├── test_0.wav
    │   │   └── test_1.wav
    │   └── train
    │       ├── train_0.wav
    │       └── train_1.wav
    └── JPN
        ├── dev
        │   ├── dev_0.wav
        │   └── dev_1.wav
        ├── test
        │   ├── test_0.wav
        │   └── test_1.wav
        └── train
            ├── train_0.wav
            └── train_1.wav
```
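Since each file name is derived from the `id` column, the path to a clip can be built directly from the tree above. This is a sketch; the helper name `audio_path` and the assumption that the split directory is the prefix of the ID are ours, not part of the official tooling.

```python
from pathlib import Path

def audio_path(root: str, pair: str, side: str, utt_id: str) -> Path:
    """Map a CSV id (e.g. "train_0") to the audio path implied by the tree.

    pair: language pair directory, e.g. "ENG_DEU" or "ENG_JPN"
    side: source or target audio directory, e.g. "ENG", "DEU", "JPN"
    """
    # Assumption: the split directory ("train", "dev", "test") is the
    # prefix of the id before the underscore.
    split = utt_id.split("_")[0]
    return Path(root) / pair / side / split / f"{utt_id}.wav"

print(audio_path("MELD-ST_audio", "ENG_DEU", "DEU", "train_0"))
# → MELD-ST_audio/ENG_DEU/DEU/train/train_0.wav (on POSIX systems)
```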
## Audio file length
The duration of the audio files in each set is displayed as follows:

| | | utts | En speech (h) | Target speech (h) |
|---|---|---|---|---|
| En-Ja | Train | 8,069 | 6.4 | 6.1 |
| | Dev. | 1,008 | 0.8 | 0.5 |
| | Test | 1,008 | 0.7 | 0.8 |
| En-De | Train | 9,314 | 6.9 | 7.1 |
| | Dev. | 1,164 | 0.8 | 0.9 |
| | Test | 1,164 | 0.8 | 1.0 |
## Emotion
Each subtitle line is annotated with emotion and sentiment labels, providing valuable additional information for fine-tuning and analysis. The distribution of the emotion labels is provided below.
| | | Anger | Disgust | Fear | Sadness | Joy | Surprise | Neutral |
|---|---|---|---|---|---|---|---|---|
| En-Ja | Train | 12.18% | 2.95% | 2.59% | 7.47% | 15.91% | 11.35% | 47.54% |
| | Dev. | 11.81% | 2.18% | 3.27% | 8.23% | 17.46% | 9.50% | 47.52% |
| | Test | 8.43% | 3.87% | 2.48% | 7.24% | 18.45% | 12.00% | 47.52% |
| En-De | Train | 11.76% | 2.80% | 2.49% | 7.04% | 16.87% | 11.77% | 47.26% |
| | Dev. | 10.91% | 2.15% | 2.66% | 8.51% | 17.35% | 11.17% | 47.25% |
| | Test | 8.76% | 3.35% | 2.75% | 7.90% | 24.14% | 11.25% | 47.25% |
## Limitations
While the MELD-ST dataset offers valuable resources for natural language processing tasks, it also comes with certain limitations that users should be aware of:
- Imperfect alignments: Some subtitle alignments are imperfect, resulting in inaccuracies in timestamp synchronization between subtitles and audio/video content. In addition, some Japanese sentences do not correspond to the audio.
### Acknowledgment and mitigation
Efforts have been made to address alignment issues where possible. However, users should exercise caution and consider these limitations when utilizing the dataset for their research or applications. Future updates may include improvements to alignment accuracy and additional quality assurance measures.
## Notes on how to access the dataset
To gain access to this dataset, please send a request through Hugging Face and also send an e-mail to the authors (ge23zuh@mytum.de, {yahata,sshimizu}@nlp.ist.i.kyoto-u.ac.jp) from the same e-mail address as the one you use for your Hugging Face account.
You need to agree to the following conditions:
- Do not re-distribute the dataset without our permission.
- The dataset contains copyrighted content; we release it based on the concept of fair use of copyrighted materials.
- The dataset can only be used for research purposes. Any other use is explicitly prohibited.
Once we confirm your e-mail address and your agreement to these terms, we will grant access to the dataset.