---
annotations_creators:
- expert-generated
- crowdsourced
- machine-generated
language_creators:
- crowdsourced
- expert-generated
languages:
- en
- en-GB
- en-US
- en-AU
- fr
- it
- es
- pt
- de
- nl
- ru
- pl
- cs
- ko
- zh
licenses:
- cc-by-4.0
multilinguality:
- multilingual
pretty_name: 'MInDS-14'
size_categories:
- 10K<n<100K
task_categories:
- automatic-speech-recognition
- speech-processing
task_ids:
- speech-recognition
---

# MInDS-14

## Dataset Description

- **Fine-Tuning script:** [pytorch/audio-classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification)
- **Paper:** [Multilingual and Cross-Lingual Intent Detection from Spoken Data](https://arxiv.org/abs/2104.08524)
- **Total amount of disk used:** ca. 500 MB

MInDS-14 is a training and evaluation resource for the intent detection task with spoken data. It covers 14
intents extracted from a commercial system in the e-banking domain, associated with spoken examples in 14 diverse language varieties.

## Example

MInDS-14 can be downloaded and used as follows:

```py
from datasets import load_dataset

minds_14 = load_dataset("PolyAI/minds14", "fr-FR")  # for French
# to download all data for multilingual fine-tuning, uncomment the following line
# minds_14 = load_dataset("PolyAI/minds14", "all")

# see structure
print(minds_14)

# load audio sample on the fly
audio_input = minds_14["train"][0]["audio"]  # first decoded audio sample
intent_class = minds_14["train"][0]["intent_class"]  # first intent class id
intent = minds_14["train"].features["intent_class"].names[intent_class]

# use audio_input and intent_class to fine-tune your model for audio classification
```
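
Note that the audio is stored at a sampling rate of 8 kHz, while many pretrained speech models expect 16 kHz input. A minimal resampling sketch, assuming the `datasets` audio dependencies (e.g. `soundfile`/`librosa`) are installed:

```py
from datasets import Audio, load_dataset

minds_14 = load_dataset("PolyAI/minds14", "fr-FR")

# cast the audio column so samples are decoded (and resampled) at 16 kHz on access
minds_14 = minds_14.cast_column("audio", Audio(sampling_rate=16_000))

sample = minds_14["train"][0]["audio"]
print(sample["sampling_rate"])  # 16000
```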

## Dataset Structure

We show detailed information for the example configuration `fr-FR` of the dataset.
All other configurations have the same structure.

### Data Instances

**fr-FR**

- Size of downloaded dataset files: 471 MB
- Size of the generated dataset: 300 KB
- Total amount of disk used: 471 MB

An example of a data instance of the config `fr-FR` looks as follows:

```
{
    "path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
    "audio": {
        "path": "/home/patrick/.cache/huggingface/datasets/downloads/extracted/3ebe2265b2f102203be5e64fa8e533e0c6742e72268772c8ac1834c5a1a921e3/fr-FR~ADDRESS/response_4.wav",
        "array": array([0.0, 0.0, 0.0, ..., 0.0, 0.00048828, -0.00024414], dtype=float32),
        "sampling_rate": 8000,
    },
    "transcription": "je souhaite changer mon adresse",
    "english_transcription": "I want to change my address",
    "intent_class": 1,
    "lang_id": 6,
}
```
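
Since each instance carries the decoded waveform and its sampling rate, derived quantities such as the clip duration follow directly; a small sketch:

```py
from datasets import load_dataset

minds_14 = load_dataset("PolyAI/minds14", "fr-FR")

sample = minds_14["train"][0]["audio"]
duration_s = len(sample["array"]) / sample["sampling_rate"]  # length in seconds
print(f"{duration_s:.2f} s")
```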

### Data Fields
The data fields are the same for all splits.

- **path** (str): Path to the audio file
- **audio** (dict): Audio object including the loaded audio array, the sampling rate, and the path to the audio file
- **transcription** (str): Transcription of the audio file
- **english_transcription** (str): English transcription of the audio file
- **intent_class** (int): Class id of the intent (see the decoding sketch below)
- **lang_id** (int): Id of the language
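
Both `intent_class` and `lang_id` are integer ids; a minimal sketch of mapping them back to human-readable names, assuming both fields are stored as `ClassLabel` features with an `int2str` method:

```py
from datasets import load_dataset

minds_14 = load_dataset("PolyAI/minds14", "fr-FR")
features = minds_14["train"].features

sample = minds_14["train"][0]
print(features["intent_class"].int2str(sample["intent_class"]))  # e.g. "address"
print(features["lang_id"].int2str(sample["lang_id"]))            # e.g. "fr-FR"
```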

### Data Splits
Every config only has the `"train"` split, containing *ca.* 600 examples. A held-out evaluation set can be carved out of it, as shown in the sketch below.
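
A minimal sketch using the `train_test_split` method from `datasets`; the split fraction and seed are illustrative:

```py
from datasets import load_dataset

minds_14 = load_dataset("PolyAI/minds14", "fr-FR")

# split the single "train" split into 90% train / 10% test
splits = minds_14["train"].train_test_split(test_size=0.1, seed=42)
print(splits)  # DatasetDict with "train" and "test" splits
```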

## Dataset Creation

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

The dataset is licensed under the [Creative Commons Attribution 4.0 (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.

### Citation Information

```
@article{DBLP:journals/corr/abs-2104-08524,
  author     = {Daniela Gerz and
                Pei{-}Hao Su and
                Razvan Kusztos and
                Avishek Mondal and
                Michal Lis and
                Eshan Singhal and
                Nikola Mrksic and
                Tsung{-}Hsien Wen and
                Ivan Vulic},
  title      = {Multilingual and Cross-Lingual Intent Detection from Spoken Data},
  journal    = {CoRR},
  volume     = {abs/2104.08524},
  year       = {2021},
  url        = {https://arxiv.org/abs/2104.08524},
  eprinttype = {arXiv},
  eprint     = {2104.08524},
  timestamp  = {Mon, 26 Apr 2021 17:25:10 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-2104-08524.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```

### Contributions

Thanks to [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.