|
--- |
|
license: cc |
|
task_categories: |
|
- audio-to-audio |
|
- text-generation |
|
- audio-classification |
|
- video-classification |
|
language: |
|
- en |
|
size_categories: |
|
- 1K<n<10K |
|
--- |
|
|
|
## Dataset Description |
|
|
|
- **Homepage:** https://multidialog.github.io |
|
- **Repository:** https://github.com/MultiDialog/MultiDialog |
|
- **Paper:** https://arxiv.org/abs/2106.06909 |
|
- **Point of Contact:** [jinny960812@kaist.ac.kr](mailto:jinny960812@kaist.ac.kr) |
|
- **Point of Contact:** [chaewonkim@kaist.ac.kr](mailto:chaewonkim@kaist.ac.kr) |
|
|
|
### Dataset Summary
|
This dataset includes manually annotated metadata linking audio files to transcriptions, emotions, and other attributes. |
|
|
|
### Example Usage |
|
There are `train`, `test_freq`, `test_rare`, `valid_freq`, and `valid_rare` splits. Below is an example of loading and inspecting the `valid_freq` split.
|
```python |
|
from datasets import load_dataset |
|
|
|
# `token=True` authenticates with your Hugging Face credentials (run `huggingface-cli login` first)
MultiD = load_dataset("IVLLab/MultiDialog", "valid_freq", token=True)
|
|
|
# see structure |
|
print(MultiD) |
|
|
|
# load audio sample on the fly |
|
audio_input = MultiD["valid_freq"][0]["audio"] # first decoded audio sample |
|
transcription = MultiD["valid_freq"][0]["value"] # first transcription |
|
``` |
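
If you prefer not to download the audio archives up front, the dataset can also be read in streaming mode. This is a minimal sketch, not part of the official instructions; it assumes the same subset/split naming as the example above and a recent version of `datasets`.

```python
from datasets import load_dataset

# Stream the valid_freq subset without downloading it in full (sketch).
MultiD_stream = load_dataset("IVLLab/MultiDialog", "valid_freq", streaming=True, token=True)

# Iterate lazily over the first few utterances; audio is decoded on access.
for i, sample in enumerate(MultiD_stream["valid_freq"]):
    print(sample["value"], sample["emotion"])
    if i == 2:
        break
```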
|
|
|
### Supported Tasks |
|
- `multimodal dialogue generation`: The dataset can be used to train and evaluate models that generate dialogue responses in both audio and text.
|
- `automatic-speech-recognition`: The dataset can be used to train a model for Automatic Speech Recognition (ASR); see the sketch after this list.
|
- `text-to-speech`: The dataset can also be used to train a model for Text-To-Speech (TTS). |
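
As a hypothetical sketch of the ASR use case, the pairs of audio arrays and transcriptions can be pulled directly from the split loaded in the Example Usage section (the `MultiD` variable below comes from that example):

```python
# Sketch: build (audio array, transcription) pairs for ASR.
# Only a few rows are selected to keep the example light.
asr_pairs = []
for ex in MultiD["valid_freq"].select(range(8)):
    asr_pairs.append((ex["audio"]["array"], ex["value"]))

print(len(asr_pairs), asr_pairs[0][1])
```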
|
|
|
### Languages |
|
MultiDialog contains audio and transcription data in English.
|
|
|
## Dataset Structure |
|
### Data Instances |
|
```python |
|
{ |
|
'conv_id': 't_ffa55df6-114d-4b36-87a1-7af6b8b63d9b', |
|
'utterance_id': 0, |
|
'from': 'gpt', |
|
'audio': |
|
{ |
|
# in streaming mode 'path' will be 'xs_chunks_0000/YOU0000000315_S0000660.wav' |
|
'path': '/home/user/.cache/huggingface/datasets/downloads/extracted/9d48cf31/xs_chunks_0000/YOU0000000315_S0000660.wav', |
|
'array': array([0.0005188 , 0.00085449, 0.00012207, ..., 0.00125122, 0.00076294, 0.00036621], dtype=float32), |
|
'sampling_rate': 16000 |
|
}, |
|
'value': 'Are you a football fan?', |
|
'emotion': 'Neutral', |
|
'original_full_path': 'audio/youtube/P0004/YOU0000000315.opus' |
|
} |
|
``` |
|
|
|
### Data Fields |
|
* conv_id (string) - unique identifier for each conversation (used in the sketch after this list to group utterances back into conversations).
|
* utterance_id (float) - utterance index within the conversation.
|
* from (string) - who the message is from (human, gpt). |
|
* audio (Audio feature) - a dictionary containing the path to the audio, the decoded audio array, and the sampling rate. |
|
  In non-streaming mode (default), the path points to the locally extracted audio file. In streaming mode, the path is the relative path of an audio segment inside its archive (as files are not downloaded and extracted locally).
|
* value (string) - transcription of the utterance. |
|
* emotion (string) - the emotion of the utterance. |
|
* original_full_path (string) - the relative path to the original full audio sample in the original data directory. |
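
Because each row is a single utterance, full conversations can be reconstructed by grouping rows on `conv_id` and ordering them by `utterance_id`. The sketch below assumes `MultiD` was loaded as in the Example Usage section; the audio column is dropped first so that iterating does not decode every waveform.

```python
from collections import defaultdict

# Sketch: rebuild conversations from per-utterance rows (text fields only).
text_only = MultiD["valid_freq"].remove_columns(["audio"])

conversations = defaultdict(list)
for ex in text_only:
    conversations[ex["conv_id"]].append(ex)

# Print one conversation in utterance order as "speaker: text (emotion)" turns.
first_conv = sorted(next(iter(conversations.values())), key=lambda ex: ex["utterance_id"])
for turn in first_conv:
    print(f"{turn['from']}: {turn['value']} ({turn['emotion']})")
```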
|
|
|
Each utterance is assigned one of the following emotion labels:
|
"Neutral", "Happy", "Fear", "Angry", "Disgusting", "Surprising", "Sad" |
|
|
|
|
|
|
|
|
|
|