---
license: cc
task_categories:
- audio-to-audio
- text-generation
- audio-classification
- video-classification
language:
- en
size_categories:
- 1K<n<10K
# configs:
#   - config_name: default
#     data_files:
#       - split: test_freq
#         path: test_freq/*, metadata.jsonl
---

## Dataset Description

- **Homepage:** https://multidialog.github.io
- **Repository:** https://github.com/MultiDialog/MultiDialog
- **Paper:** https://arxiv.org/abs/2106.06909
- **Point of Contact:** [jinny960812@kaist.ac.kr](mailto:jinny960812@kaist.ac.kr)
- **Point of Contact:** [chaewonkim@kaist.ac.kr](mailto:chaewonkim@kaist.ac.kr)

### Dataset Summary
This dataset includes manually annotated metadata linking audio files to transcriptions, emotions, and other attributes. 

### Example Usage
There are `train`, `test_freq`, `test_rare`, `valid_freq`, and `valid_rare` splits. Below is an example of loading the `valid_freq` split.
```python
from datasets import load_dataset

MultiD = load_dataset("IVLLab/MultiDialog", "valid_freq", use_auth_token=True)

# see structure
print(MultiD)

# load audio sample on the fly
audio_input = MultiD["valid_freq"][0]["audio"]  # first decoded audio sample
transcription = MultiD["valid_freq"][0]["value"]  # first transcription
```
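To avoid downloading and extracting the full archives up front, the dataset can also be read in the standard `datasets` streaming mode (the data instance below shows how audio paths change in that mode). A minimal sketch, assuming the same config/split naming as above:

```python
from datasets import load_dataset

# Stream samples instead of downloading and extracting everything first
MultiD_stream = load_dataset(
    "IVLLab/MultiDialog", "valid_freq", split="valid_freq",
    streaming=True, use_auth_token=True,
)

# Audio is decoded on the fly as you iterate
sample = next(iter(MultiD_stream))
print(sample["value"], sample["emotion"])
```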

### Supported Tasks
- `multimodal dialogue generation`: The primary task, generating a dialogue response from the conversation context across audio and text.
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). 
- `text-to-speech`: The dataset can also be used to train a model for Text-To-Speech (TTS).
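For ASR, each utterance already pairs a decoded waveform with its transcription, so samples can be passed straight to a speech recognition model. A minimal evaluation sketch using the `transformers` pipeline; the Whisper checkpoint is an arbitrary choice for illustration:

```python
from datasets import load_dataset
from transformers import pipeline

MultiD = load_dataset("IVLLab/MultiDialog", "valid_freq", use_auth_token=True)

# A hypothetical model choice for illustration; any ASR checkpoint works
asr = pipeline("automatic-speech-recognition", model="openai/whisper-small")

sample = MultiD["valid_freq"][0]
prediction = asr({
    "raw": sample["audio"]["array"],
    "sampling_rate": sample["audio"]["sampling_rate"],
})
print("reference: ", sample["value"])
print("hypothesis:", prediction["text"])
```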

### Languages
MultiDialog contains audio and transcription data in English.

## Dataset Structure
### Data Instances
```python
{
    'conv_id': 't_ffa55df6-114d-4b36-87a1-7af6b8b63d9b', 
    'utterance_id': 0,
    'from': 'gpt', 
    'audio': 
        {
            # in streaming mode 'path' will be 'xs_chunks_0000/YOU0000000315_S0000660.wav'
            'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/9d48cf31/xs_chunks_0000/YOU0000000315_S0000660.wav', 
            'array': array([0.0005188 , 0.00085449, 0.00012207, ..., 0.00125122, 0.00076294, 0.00036621], dtype=float32), 
            'sampling_rate': 16000
        },
    'value': 'Are you a football fan?', 
    'emotion': 'Neutral', 
    'original_full_path': 'audio/youtube/P0004/YOU0000000315.opus'
}
```

### Data Fields
* `conv_id` (string) - unique identifier for each conversation.
* `utterance_id` (float) - utterance index within the conversation.
* `from` (string) - the speaker of the utterance (`human` or `gpt`).
* `audio` (Audio feature) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio file. In streaming mode, the path is the relative path of an audio segment inside its archive (as files are not downloaded and extracted locally). A resampling sketch follows this list.
* `value` (string) - transcription of the utterance.
* `emotion` (string) - the emotion label of the utterance.
* `original_full_path` (string) - the relative path to the original full audio sample in the original data directory.
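
The audio appears to be decoded at 16 kHz in the data instance above. If a model expects a different sampling rate, the standard `datasets` cast can resample on access; a minimal sketch with an arbitrary 24 kHz target:

```python
from datasets import load_dataset, Audio

MultiD = load_dataset("IVLLab/MultiDialog", "valid_freq", use_auth_token=True)

# Resample audio to 24 kHz on access (an arbitrary target for illustration)
MultiD = MultiD.cast_column("audio", Audio(sampling_rate=24000))
print(MultiD["valid_freq"][0]["audio"]["sampling_rate"])  # 24000
```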

Emotion takes one of the following seven labels:
"Neutral", "Happy", "Fear", "Angry", "Disgusting", "Surprising", "Sad"