system (HF staff) committed
Commit bd1467e
0 Parent(s)

Update files from the datasets library (from 1.6.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.6.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
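
The attribute lines above all share the same `pattern attr=value … -attr` shape used by Git to route large binary files through LFS. As an illustrative sketch (not part of this commit), a few lines of Python can split such a line into its pattern and attribute settings; the parsing rules here are simplified assumptions, not the full gitattributes grammar:

```python
# Minimal sketch: split .gitattributes-style lines into (pattern, attributes).
# Sample lines are copied from the file above.
lines = [
    "*.7z filter=lfs diff=lfs merge=lfs -text",
    "saved_model/**/* filter=lfs diff=lfs merge=lfs -text",
]

def parse_attr_line(line):
    pattern, *attrs = line.split()
    parsed = {}
    for attr in attrs:
        if "=" in attr:
            key, value = attr.split("=", 1)
            parsed[key] = value       # e.g. filter=lfs
        elif attr.startswith("-"):
            parsed[attr[1:]] = False  # e.g. -text unsets the attribute
        else:
            parsed[attr] = True       # bare attribute is set
    return pattern, parsed

for line in lines:
    pattern, attrs = parse_attr_line(line)
    print(pattern, attrs)
```

Every pattern in this file maps to the same four settings, which is why the lines differ only in their leading glob.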
README.md ADDED
@@ -0,0 +1,285 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+   dihana:
+   - es
+   ilisten:
+   - it
+   loria:
+   - fr
+   maptask:
+   - en
+   vm2:
+   - de
+ licenses:
+ - cc-by-sa-4-0
+ multilinguality:
+ - multilingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - sequence-modeling
+ - text-classification
+ task_ids:
+   dihana:
+   - dialogue-modeling
+   - language-modeling
+   - text-classification-other-dialogue-act-classification
+   ilisten:
+   - dialogue-modeling
+   - language-modeling
+   - text-classification-other-dialogue-act-classification
+   loria:
+   - dialogue-modeling
+   - language-modeling
+   - text-classification-other-dialogue-act-classification
+   maptask:
+   - dialogue-modeling
+   - language-modeling
+   - text-classification-other-dialogue-act-classification
+   vm2:
+   - dialogue-modeling
+   - language-modeling
+   - text-classification-other-dialogue-act-classification
+ ---
+
+ # Dataset Card for MIAM
+
+ ## Table of Contents
+ - [Dataset Card for MIAM](#dataset-card-for-miam)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+       - [Dihana Corpus](#dihana-corpus)
+       - [iLISTEN Corpus](#ilisten-corpus)
+       - [LORIA Corpus](#loria-corpus)
+       - [HCRC MapTask Corpus](#hcrc-maptask-corpus)
+       - [VERBMOBIL](#verbmobil)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Benchmark Curators](#benchmark-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** [N/A]
+ - **Repository:** [N/A]
+ - **Paper:** [N/A]
+ - **Leaderboard:** [N/A]
+ - **Point of Contact:** [N/A]
+
+ ### Dataset Summary
+
+ Multilingual dIalogAct benchMark (MIAM) is a collection of resources for training, evaluating, and
+ analyzing natural language understanding systems specifically designed for spoken language. Datasets
+ are in English, French, German, Italian, and Spanish. They cover a variety of domains, including
+ spontaneous speech, scripted scenarios, and joint task completion. All datasets contain dialogue act
+ labels.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ English, French, German, Italian, Spanish.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ #### Dihana Corpus
+ For the `dihana` configuration, one example from the dataset is:
+ ```
+ {
+   'Speaker': 'U',
+   'Utterance': 'Hola , quería obtener el horario para ir a Valencia',
+   'Dialogue_Act': 9, # 'Pregunta' ('Request')
+   'Dialogue_ID': '0',
+   'File_ID': 'B209_BA5c3',
+ }
+ ```
+
+ #### iLISTEN Corpus
+ For the `ilisten` configuration, one example from the dataset is:
+ ```
+ {
+   'Speaker': 'T_11_U11',
+   'Utterance': 'ok, grazie per le informazioni',
+   'Dialogue_Act': 6, # 'KIND-ATTITUDE_SMALL-TALK'
+   'Dialogue_ID': '0',
+ }
+ ```
+
+ #### LORIA Corpus
+ For the `loria` configuration, one example from the dataset is:
+ ```
+ {
+   'Speaker': 'Samir',
+   'Utterance': 'Merci de votre visite, bonne chance, et à la prochaine !',
+   'Dialogue_Act': 21, # 'quit'
+   'Dialogue_ID': '5',
+   'File_ID': 'Dial_20111128_113927',
+ }
+ ```
+
+ #### HCRC MapTask Corpus
+ For the `maptask` configuration, one example from the dataset is:
+ ```
+ {
+   'Speaker': 'f',
+   'Utterance': 'is it underneath the rope bridge or to the left',
+   'Dialogue_Act': 6, # 'query_w'
+   'Dialogue_ID': '0',
+   'File_ID': 'q4ec1',
+ }
+ ```
+
+ #### VERBMOBIL
+ For the `vm2` configuration, one example from the dataset is:
+ ```
+ {
+   'Utterance': 'ja was sind viereinhalb Stunden Bahngerüttel gegen siebzig Minuten Turbulenzen im Flugzeug',
+   'Dialogue_Act': 17, # 'INFORM'
+   'Speaker': 'A',
+   'Dialogue_ID': '66',
+ }
+ ```
+
+ ### Data Fields
+
+ For the `dihana` configuration, the different fields are:
+ - `Speaker`: identifier of the speaker as a string.
+ - `Utterance`: the utterance as a string.
+ - `Dialogue_Act`: dialogue act label of the utterance. It can be one of 'Afirmacion' (0) [Feedback_positive], 'Apertura' (1) [Opening], 'Cierre' (2) [Closing], 'Confirmacion' (3) [Acknowledge], 'Espera' (4) [Hold], 'Indefinida' (5) [Undefined], 'Negacion' (6) [Feedback_negative], 'No_entendido' (7) [Request_clarify], 'Nueva_consulta' (8) [New_request], 'Pregunta' (9) [Request] or 'Respuesta' (10) [Reply].
+ - `Dialogue_ID`: identifier of the dialogue as a string.
+ - `File_ID`: identifier of the source file as a string.
+
+ For the `ilisten` configuration, the different fields are:
+ - `Speaker`: identifier of the speaker as a string.
+ - `Utterance`: the utterance as a string.
+ - `Dialogue_Act`: dialogue act label of the utterance. It can be one of 'AGREE' (0), 'ANSWER' (1), 'CLOSING' (2), 'ENCOURAGE-SORRY' (3), 'GENERIC-ANSWER' (4), 'INFO-REQUEST' (5), 'KIND-ATTITUDE_SMALL-TALK' (6), 'OFFER-GIVE-INFO' (7), 'OPENING' (8), 'PERSUASION-SUGGEST' (9), 'QUESTION' (10), 'REJECT' (11), 'SOLICITATION-REQ_CLARIFICATION' (12), 'STATEMENT' (13) or 'TALK-ABOUT-SELF' (14).
+ - `Dialogue_ID`: identifier of the dialogue as a string.
+
+ For the `loria` configuration, the different fields are:
+ - `Speaker`: identifier of the speaker as a string.
+ - `Utterance`: the utterance as a string.
+ - `Dialogue_Act`: dialogue act label of the utterance. It can be one of 'ack' (0), 'ask' (1), 'find_mold' (2), 'find_plans' (3), 'first_step' (4), 'greet' (5), 'help' (6), 'inform' (7), 'inform_engine' (8), 'inform_job' (9), 'inform_material_space' (10), 'informer_conditioner' (11), 'informer_decoration' (12), 'informer_elcomps' (13), 'informer_end_manufacturing' (14), 'kindAtt' (15), 'manufacturing_reqs' (16), 'next_step' (17), 'no' (18), 'other' (19), 'quality_control' (20), 'quit' (21), 'reqRep' (22), 'security_policies' (23), 'staff_enterprise' (24), 'staff_job' (25), 'studies_enterprise' (26), 'studies_job' (27), 'todo_failure' (28), 'todo_irreparable' (29) or 'yes' (30).
+ - `Dialogue_ID`: identifier of the dialogue as a string.
+ - `File_ID`: identifier of the source file as a string.
+
+ For the `maptask` configuration, the different fields are:
+ - `Speaker`: identifier of the speaker as a string.
+ - `Utterance`: the utterance as a string.
+ - `Dialogue_Act`: dialogue act label of the utterance. It can be one of 'acknowledge' (0), 'align' (1), 'check' (2), 'clarify' (3), 'explain' (4), 'instruct' (5), 'query_w' (6), 'query_yn' (7), 'ready' (8), 'reply_n' (9), 'reply_w' (10) or 'reply_y' (11).
+ - `Dialogue_ID`: identifier of the dialogue as a string.
+ - `File_ID`: identifier of the source file as a string.
+
+ For the `vm2` configuration, the different fields are:
+ - `Utterance`: the utterance as a string.
+ - `Dialogue_Act`: dialogue act label of the utterance. It can be one of 'ACCEPT' (0), 'BACKCHANNEL' (1), 'BYE' (2), 'CLARIFY' (3), 'CLOSE' (4), 'COMMIT' (5), 'CONFIRM' (6), 'DEFER' (7), 'DELIBERATE' (8), 'DEVIATE_SCENARIO' (9), 'EXCLUDE' (10), 'EXPLAINED_REJECT' (11), 'FEEDBACK' (12), 'FEEDBACK_NEGATIVE' (13), 'FEEDBACK_POSITIVE' (14), 'GIVE_REASON' (15), 'GREET' (16), 'INFORM' (17), 'INIT' (18), 'INTRODUCE' (19), 'NOT_CLASSIFIABLE' (20), 'OFFER' (21), 'POLITENESS_FORMULA' (22), 'REJECT' (23), 'REQUEST' (24), 'REQUEST_CLARIFY' (25), 'REQUEST_COMMENT' (26), 'REQUEST_COMMIT' (27), 'REQUEST_SUGGEST' (28), 'SUGGEST' (29) or 'THANK' (30).
+ - `Speaker`: identifier of the speaker as a string.
+ - `Dialogue_ID`: identifier of the dialogue as a string.
+
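
The integer `Dialogue_Act` values shown in the data instances above index into these per-configuration label lists. As a small illustrative sketch (the label names are copied from the `maptask` list above, in order; the helper names are our own), the id-to-name mapping is just positional:

```python
# Sketch: map between integer dialogue act ids and label names for the
# `maptask` configuration (names copied in order from the list above).
MAPTASK_LABELS = [
    "acknowledge", "align", "check", "clarify", "explain", "instruct",
    "query_w", "query_yn", "ready", "reply_n", "reply_w", "reply_y",
]

def act_name(act_id):
    """Return the label name for an integer dialogue act id."""
    return MAPTASK_LABELS[act_id]

def act_id(name):
    """Return the integer id for a label name."""
    return MAPTASK_LABELS.index(name)

print(act_name(6))  # the MapTask example above has Dialogue_Act 6 → 'query_w'
```

The same pattern applies to the other configurations with their respective label lists.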
+ ### Data Splits
+
+ | Dataset name | Train | Valid | Test |
+ | ------------ | ----- | ----- | ---- |
+ | dihana       | 19063 |  2123 | 2361 |
+ | ilisten      |  1986 |   230 |  971 |
+ | loria        |  8465 |   942 | 1047 |
+ | maptask      | 25382 |  5221 | 5335 |
+ | vm2          | 25060 |  2860 | 2855 |
+
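
For a quick sense of scale, the split sizes above can be tallied per configuration; this is a small sketch using the numbers from the table, nothing more:

```python
# Sketch: (train, valid, test) example counts copied from the splits table
# above, and the per-configuration totals they imply.
splits = {
    "dihana":  (19063, 2123, 2361),
    "ilisten": (1986,   230,  971),
    "loria":   (8465,   942, 1047),
    "maptask": (25382, 5221, 5335),
    "vm2":     (25060, 2860, 2855),
}

totals = {name: sum(counts) for name, counts in splits.items()}
print(totals["dihana"])  # → 23547
```

Every configuration lands in the 10K–100K range declared in the card's `size_categories` metadata.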
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Benchmark Curators
+
+ Anonymous
+
+ ### Licensing Information
+
+ This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/).
+
+ ### Citation Information
+
+ ```
+ @unpublished{anonymous2021cross-lingual,
+   title={Cross-Lingual Pretraining Methods for Spoken Dialog},
+   author={Anonymous},
+   journal={OpenReview Preprint},
+   year={2021},
+   url={https://openreview.net/forum?id=c1oDhu_hagR},
+   note={anonymous preprint under review}
+ }
+ ```
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"dihana": {"description": "Multilingual dIalogAct benchMark is a collection of resources for training, evaluating, and\nanalyzing natural language understanding systems specifically designed for spoken language. Datasets\nare in English, French, German, Italian and Spanish. They cover a variety of domains including\nspontaneous speech, scripted scenarios, and joint task completion. Some datasets additionally include\nemotion and/or sentimant labels.\n", "citation": "@inproceedings{benedi2006design,\ntitle={Design and acquisition of a telephone spontaneous speech dialogue corpus in Spanish: DIHANA},\nauthor={Bened{\\i}, Jos{'e}-Miguel and Lleida, Eduardo and Varona, Amparo and Castro, Mar{\\i}a-Jos{'e} and Galiano, Isabel and Justo, Raquel and L{'o}pez, I and Miguel, Antonio},\nbooktitle={Fifth International Conference on Language Resources and Evaluation (LREC)},\npages={1636--1639},\nyear={2006}\n}\n@inproceedings{post2013improved,\ntitle={Improved speech-to-text translation with the Fisher and Callhome Spanish--English speech translation corpus},\nauthor={Post, Matt and Kumar, Gaurav and Lopez, Adam and Karakos, Damianos and Callison-Burch, Chris and Khudanpur, Sanjeev},\nbooktitle={Proc. 
IWSLT},\nyear={2013}\n}\n@article{coria2005predicting,\ntitle={Predicting obligation dialogue acts from prosodic and speaker infomation},\nauthor={Coria, S and Pineda, L},\njournal={Research on Computing Science (ISSN 1665-9899), Centro de Investigacion en Computacion, Instituto Politecnico Nacional, Mexico City},\nyear={2005}\n}\n@inproceedings{anonymous,\n title = \"Cross-Lingual Pretraining Methods for Spoken Dialog\",\n author = \"Anonymous\",\n booktitle = \"Transactions of the Association for Computational Linguistics\",\n month = ,\n year = \"\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"\",\n doi = \"\",\n pages = \"\",\n abstract = \"There has been an increasing interest among NLP researchers towards learning generic\n representations. However, in the field of multilingual spoken dialogue systems, this problem\n remains overlooked. Indeed most of the pre-training methods focus on learning representations\n for written and non-conversational data or are restricted to the monolingual setting. In this\n work we (1) generalise existing losses to the multilingual setting, (2) develop a new set of\n losses to leverage parallel conversations when available. These losses improve the learning of\n representations by fostering the deep encoder to better learn contextual dependencies. The\n pre-training relies on OpenSubtitles, a huge multilingual corpus that is composed of 24.3G tokens;\n a by-product of the pre-processing includes multilingual aligned conversations. We also introduce\n two new multilingual tasks and a new benchmark on multilingual dialogue act labels called MIAM.\n We validate our pre-training on the three aforementioned tasks and show that our model using our\n newly designed losses achieves better performances than existing models. 
Our implementation will\n be available on github.com and pre-processed data will be available in Datasets (Wolf et al., 2020).\",\n}\n", "homepage": "", "license": "", "features": {"Speaker": {"dtype": "string", "id": null, "_type": "Value"}, "Utterance": {"dtype": "string", "id": null, "_type": "Value"}, "Dialogue_Act": {"dtype": "string", "id": null, "_type": "Value"}, "Dialogue_ID": {"dtype": "string", "id": null, "_type": "Value"}, "File_ID": {"dtype": "string", "id": null, "_type": "Value"}, "Label": {"num_classes": 11, "names": ["Afirmacion", "Apertura", "Cierre", "Confirmacion", "Espera", "Indefinida", "Negacion", "No_entendido", "Nueva_consulta", "Pregunta", "Respuesta"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "miam", "config_name": "dihana", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1946735, "num_examples": 19063, "dataset_name": "miam"}, "validation": {"name": "validation", "num_bytes": 216498, "num_examples": 2123, "dataset_name": "miam"}, "test": {"name": "test", "num_bytes": 238446, "num_examples": 2361, "dataset_name": "miam"}}, "download_checksums": {"https://raw.githubusercontent.com/eusip/MIAM/main/dihana/train.csv": {"num_bytes": 1441183, "checksum": "4702276f3494926fa1c751492b5385530d49ab4b8d4e583f89d5c1ecc9b69311"}, "https://raw.githubusercontent.com/eusip/MIAM/main/dihana/dev.csv": {"num_bytes": 160244, "checksum": "fa267399dbb66f8b096134a2a3f51d71dc48bfa5332dbc0c7b96b2eb0bd91097"}, "https://raw.githubusercontent.com/eusip/MIAM/main/dihana/test.csv": {"num_bytes": 175840, "checksum": "27e45d5f8f0655ed310589777fa9f9eda6a0727dbae277586c086e937c9aca28"}}, "download_size": 1777267, "post_processing_size": null, "dataset_size": 2401679, "size_in_bytes": 4178946}, "ilisten": {"description": "Multilingual dIalogAct 
benchMark is a collection of resources for training, evaluating, and\nanalyzing natural language understanding systems specifically designed for spoken language. Datasets\nare in English, French, German, Italian and Spanish. They cover a variety of domains including\nspontaneous speech, scripted scenarios, and joint task completion. Some datasets additionally include\nemotion and/or sentimant labels.\n", "citation": "@article{basile2018overview,\ntitle={Overview of the Evalita 2018itaLIan Speech acT labEliNg (iLISTEN) Task},\nauthor={Basile, Pierpaolo and Novielli, Nicole},\njournal={EVALITA Evaluation of NLP and Speech Tools for Italian},\nvolume={12},\npages={44},\nyear={2018}\n}\n@inproceedings{anonymous,\n title = \"Cross-Lingual Pretraining Methods for Spoken Dialog\",\n author = \"Anonymous\",\n booktitle = \"Transactions of the Association for Computational Linguistics\",\n month = ,\n year = \"\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"\",\n doi = \"\",\n pages = \"\",\n abstract = \"There has been an increasing interest among NLP researchers towards learning generic\n representations. However, in the field of multilingual spoken dialogue systems, this problem\n remains overlooked. Indeed most of the pre-training methods focus on learning representations\n for written and non-conversational data or are restricted to the monolingual setting. In this\n work we (1) generalise existing losses to the multilingual setting, (2) develop a new set of\n losses to leverage parallel conversations when available. These losses improve the learning of\n representations by fostering the deep encoder to better learn contextual dependencies. The\n pre-training relies on OpenSubtitles, a huge multilingual corpus that is composed of 24.3G tokens;\n a by-product of the pre-processing includes multilingual aligned conversations. 
We also introduce\n two new multilingual tasks and a new benchmark on multilingual dialogue act labels called MIAM.\n We validate our pre-training on the three aforementioned tasks and show that our model using our\n newly designed losses achieves better performances than existing models. Our implementation will\n be available on github.com and pre-processed data will be available in Datasets (Wolf et al., 2020).\",\n}\n", "homepage": "", "license": "", "features": {"Speaker": {"dtype": "string", "id": null, "_type": "Value"}, "Utterance": {"dtype": "string", "id": null, "_type": "Value"}, "Dialogue_Act": {"dtype": "string", "id": null, "_type": "Value"}, "Dialogue_ID": {"dtype": "string", "id": null, "_type": "Value"}, "Label": {"num_classes": 15, "names": ["AGREE", "ANSWER", "CLOSING", "ENCOURAGE-SORRY", "GENERIC-ANSWER", "INFO-REQUEST", "KIND-ATTITUDE_SMALL-TALK", "OFFER-GIVE-INFO", "OPENING", "PERSUASION-SUGGEST", "QUESTION", "REJECT", "SOLICITATION-REQ_CLARIFICATION", "STATEMENT", "TALK-ABOUT-SELF"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "miam", "config_name": "ilisten", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 244336, "num_examples": 1986, "dataset_name": "miam"}, "validation": {"name": "validation", "num_bytes": 33988, "num_examples": 230, "dataset_name": "miam"}, "test": {"name": "test", "num_bytes": 145376, "num_examples": 971, "dataset_name": "miam"}}, "download_checksums": {"https://raw.githubusercontent.com/eusip/MIAM/main/ilisten/train.csv": {"num_bytes": 198219, "checksum": "75ea972e2ac1cac8b2e77f55574b18d218e355eb4585b9b95d6873ebd9bdcd04"}, "https://raw.githubusercontent.com/eusip/MIAM/main/ilisten/dev.csv": {"num_bytes": 28741, "checksum": "745faed68a4471a81ad65300e5476fba94d4740ec113df529a2bcd5dd2439971"}, 
"https://raw.githubusercontent.com/eusip/MIAM/main/ilisten/test.csv": {"num_bytes": 123033, "checksum": "6d2abc758426747b1d271766783d4126756c7372fd92ad3baa0bda67d1de0c77"}}, "download_size": 349993, "post_processing_size": null, "dataset_size": 423700, "size_in_bytes": 773693}, "loria": {"description": "Multilingual dIalogAct benchMark is a collection of resources for training, evaluating, and\nanalyzing natural language understanding systems specifically designed for spoken language. Datasets\nare in English, French, German, Italian and Spanish. They cover a variety of domains including\nspontaneous speech, scripted scenarios, and joint task completion. Some datasets additionally include\nemotion and/or sentimant labels.\n", "citation": "@inproceedings{barahona2012building,\ntitle={Building and exploiting a corpus of dialog interactions between french speaking virtual and human agents},\nauthor={Barahona, Lina Maria Rojas and Lorenzo, Alejandra and Gardent, Claire},\nbooktitle={The eighth international conference on Language Resources and Evaluation (LREC)},\npages={1428--1435},\nyear={2012}\n}\n@inproceedings{anonymous,\n title = \"Cross-Lingual Pretraining Methods for Spoken Dialog\",\n author = \"Anonymous\",\n booktitle = \"Transactions of the Association for Computational Linguistics\",\n month = ,\n year = \"\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"\",\n doi = \"\",\n pages = \"\",\n abstract = \"There has been an increasing interest among NLP researchers towards learning generic\n representations. However, in the field of multilingual spoken dialogue systems, this problem\n remains overlooked. Indeed most of the pre-training methods focus on learning representations\n for written and non-conversational data or are restricted to the monolingual setting. 
In this\n work we (1) generalise existing losses to the multilingual setting, (2) develop a new set of\n losses to leverage parallel conversations when available. These losses improve the learning of\n representations by fostering the deep encoder to better learn contextual dependencies. The\n pre-training relies on OpenSubtitles, a huge multilingual corpus that is composed of 24.3G tokens;\n a by-product of the pre-processing includes multilingual aligned conversations. We also introduce\n two new multilingual tasks and a new benchmark on multilingual dialogue act labels called MIAM.\n We validate our pre-training on the three aforementioned tasks and show that our model using our\n newly designed losses achieves better performances than existing models. Our implementation will\n be available on github.com and pre-processed data will be available in Datasets (Wolf et al., 2020).\",\n}\n", "homepage": "", "license": "", "features": {"Speaker": {"dtype": "string", "id": null, "_type": "Value"}, "Utterance": {"dtype": "string", "id": null, "_type": "Value"}, "Dialogue_Act": {"dtype": "string", "id": null, "_type": "Value"}, "Dialogue_ID": {"dtype": "string", "id": null, "_type": "Value"}, "File_ID": {"dtype": "string", "id": null, "_type": "Value"}, "Label": {"num_classes": 31, "names": ["ack", "ask", "find_mold", "find_plans", "first_step", "greet", "help", "inform", "inform_engine", "inform_job", "inform_material_space", "informer_conditioner", "informer_decoration", "informer_elcomps", "informer_end_manufacturing", "kindAtt", "manufacturing_reqs", "next_step", "no", "other", "quality_control", "quit", "reqRep", "security_policies", "staff_enterprise", "staff_job", "studies_enterprise", "studies_job", "todo_failure", "todo_irreparable", "yes"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "miam", "config_name": "loria", "version": 
{"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1208730, "num_examples": 8465, "dataset_name": "miam"}, "validation": {"name": "validation", "num_bytes": 133829, "num_examples": 942, "dataset_name": "miam"}, "test": {"name": "test", "num_bytes": 149855, "num_examples": 1047, "dataset_name": "miam"}}, "download_checksums": {"https://raw.githubusercontent.com/eusip/MIAM/main/loria/train.csv": {"num_bytes": 989066, "checksum": "0cda3440cdf0157f1a70617374842f04a3b7cf8c5175cca9f3fe9a33a5105ddf"}, "https://raw.githubusercontent.com/eusip/MIAM/main/loria/dev.csv": {"num_bytes": 109364, "checksum": "02f99287f69dc869926aaf077d6c3d0a81cb2576c80721c9272e39a2b226d989"}, "https://raw.githubusercontent.com/eusip/MIAM/main/loria/test.csv": {"num_bytes": 122702, "checksum": "8e2f5e513761970aad138332fb83488ca52c943492153c08199ddc4ae8fe4209"}}, "download_size": 1221132, "post_processing_size": null, "dataset_size": 1492414, "size_in_bytes": 2713546}, "maptask": {"description": "Multilingual dIalogAct benchMark is a collection of resources for training, evaluating, and\nanalyzing natural language understanding systems specifically designed for spoken language. Datasets\nare in English, French, German, Italian and Spanish. They cover a variety of domains including\nspontaneous speech, scripted scenarios, and joint task completion. 
Some datasets additionally include\nemotion and/or sentimant labels.\n", "citation": "@inproceedings{thompson1993hcrc,\ntitle={The HCRC map task corpus: natural dialogue for speech recognition},\nauthor={Thompson, Henry S and Anderson, Anne H and Bard, Ellen Gurman and Doherty-Sneddon,\nGwyneth and Newlands, Alison and Sotillo, Cathy},\nbooktitle={HUMAN LANGUAGE TECHNOLOGY: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993},\nyear={1993}\n}\n@inproceedings{anonymous,\n title = \"Cross-Lingual Pretraining Methods for Spoken Dialog\",\n author = \"Anonymous\",\n booktitle = \"Transactions of the Association for Computational Linguistics\",\n month = ,\n year = \"\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"\",\n doi = \"\",\n pages = \"\",\n abstract = \"There has been an increasing interest among NLP researchers towards learning generic\n representations. However, in the field of multilingual spoken dialogue systems, this problem\n remains overlooked. Indeed most of the pre-training methods focus on learning representations\n for written and non-conversational data or are restricted to the monolingual setting. In this\n work we (1) generalise existing losses to the multilingual setting, (2) develop a new set of\n losses to leverage parallel conversations when available. These losses improve the learning of\n representations by fostering the deep encoder to better learn contextual dependencies. The\n pre-training relies on OpenSubtitles, a huge multilingual corpus that is composed of 24.3G tokens;\n a by-product of the pre-processing includes multilingual aligned conversations. We also introduce\n two new multilingual tasks and a new benchmark on multilingual dialogue act labels called MIAM.\n We validate our pre-training on the three aforementioned tasks and show that our model using our\n newly designed losses achieves better performances than existing models. 
Our implementation will\n be available on github.com and pre-processed data will be available in Datasets (Wolf et al., 2020).\",\n}\n", "homepage": "http://groups.inf.ed.ac.uk/maptask/", "license": "", "features": {"Speaker": {"dtype": "string", "id": null, "_type": "Value"}, "Utterance": {"dtype": "string", "id": null, "_type": "Value"}, "Dialogue_Act": {"dtype": "string", "id": null, "_type": "Value"}, "Dialogue_ID": {"dtype": "string", "id": null, "_type": "Value"}, "File_ID": {"dtype": "string", "id": null, "_type": "Value"}, "Label": {"num_classes": 12, "names": ["acknowledge", "align", "check", "clarify", "explain", "instruct", "query_w", "query_yn", "ready", "reply_n", "reply_w", "reply_y"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "miam", "config_name": "maptask", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1910120, "num_examples": 25382, "dataset_name": "miam"}, "validation": {"name": "validation", "num_bytes": 389879, "num_examples": 5221, "dataset_name": "miam"}, "test": {"name": "test", "num_bytes": 396947, "num_examples": 5335, "dataset_name": "miam"}}, "download_checksums": {"https://raw.githubusercontent.com/eusip/MIAM/main/maptask/train.csv": {"num_bytes": 1226569, "checksum": "76ce790d9c5100d2f2b1f535edd8e8b5a6d88d6bbaa7f3a948dd95bff1e2e798"}, "https://raw.githubusercontent.com/eusip/MIAM/main/maptask/dev.csv": {"num_bytes": 249215, "checksum": "895ca2761e8224b02f963df6836912c0a913362f5d44dfc4813391b51919f147"}, "https://raw.githubusercontent.com/eusip/MIAM/main/maptask/test.csv": {"num_bytes": 253237, "checksum": "e11a1fbaa4ffc74c0b438b2d3e6f17f991adee50e368c2c9c57f98ef6a2dd0c3"}}, "download_size": 1729021, "post_processing_size": null, "dataset_size": 2696946, "size_in_bytes": 4425967}, "vm2": {"description": 
"Multilingual dIalogAct benchMark is a collection of resources for training, evaluating, and\nanalyzing natural language understanding systems specifically designed for spoken language. Datasets\nare in English, French, German, Italian and Spanish. They cover a variety of domains including\nspontaneous speech, scripted scenarios, and joint task completion. Some datasets additionally include\nemotion and/or sentimant labels.\n", "citation": "@book{kay1992verbmobil,\ntitle={Verbmobil: A translation system for face-to-face dialog},\nauthor={Kay, Martin},\nyear={1992},\npublisher={University of Chicago Press}\n}\n@inproceedings{anonymous,\n title = \"Cross-Lingual Pretraining Methods for Spoken Dialog\",\n author = \"Anonymous\",\n booktitle = \"Transactions of the Association for Computational Linguistics\",\n month = ,\n year = \"\",\n address = \"Online\",\n publisher = \"Association for Computational Linguistics\",\n url = \"\",\n doi = \"\",\n pages = \"\",\n abstract = \"There has been an increasing interest among NLP researchers towards learning generic\n representations. However, in the field of multilingual spoken dialogue systems, this problem\n remains overlooked. Indeed most of the pre-training methods focus on learning representations\n for written and non-conversational data or are restricted to the monolingual setting. In this\n work we (1) generalise existing losses to the multilingual setting, (2) develop a new set of\n losses to leverage parallel conversations when available. These losses improve the learning of\n representations by fostering the deep encoder to better learn contextual dependencies. The\n pre-training relies on OpenSubtitles, a huge multilingual corpus that is composed of 24.3G tokens;\n a by-product of the pre-processing includes multilingual aligned conversations. 
We also introduce\n two new multilingual tasks and a new benchmark on multilingual dialogue act labels called MIAM.\n We validate our pre-training on the three aforementioned tasks and show that our model using our\n newly designed losses achieves better performances than existing models. Our implementation will\n be available on github.com and pre-processed data will be available in Datasets (Wolf et al., 2020).\",\n}\n", "homepage": "", "license": "", "features": {"Utterance": {"dtype": "string", "id": null, "_type": "Value"}, "Dialogue_Act": {"dtype": "string", "id": null, "_type": "Value"}, "Speaker": {"dtype": "string", "id": null, "_type": "Value"}, "Dialogue_ID": {"dtype": "string", "id": null, "_type": "Value"}, "Label": {"num_classes": 31, "names": ["ACCEPT", "BACKCHANNEL", "BYE", "CLARIFY", "CLOSE", "COMMIT", "CONFIRM", "DEFER", "DELIBERATE", "DEVIATE_SCENARIO", "EXCLUDE", "EXPLAINED_REJECT", "FEEDBACK", "FEEDBACK_NEGATIVE", "FEEDBACK_POSITIVE", "GIVE_REASON", "GREET", "INFORM", "INIT", "INTRODUCE", "NOT_CLASSIFIABLE", "OFFER", "POLITENESS_FORMULA", "REJECT", "REQUEST", "REQUEST_CLARIFY", "REQUEST_COMMENT", "REQUEST_COMMIT", "REQUEST_SUGGEST", "SUGGEST", "THANK"], "names_file": null, "id": null, "_type": "ClassLabel"}, "Idx": {"dtype": "int32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "miam", "config_name": "vm2", "version": {"version_str": "1.0.0", "description": "", "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1869254, "num_examples": 25060, "dataset_name": "miam"}, "validation": {"name": "validation", "num_bytes": 209390, "num_examples": 2860, "dataset_name": "miam"}, "test": {"name": "test", "num_bytes": 209032, "num_examples": 2855, "dataset_name": "miam"}}, "download_checksums": {"https://raw.githubusercontent.com/eusip/MIAM/main/vm2/train.csv": {"num_bytes": 1342990, "checksum": "5b5ddbd333a57f033a01e9b4135c7675d56a3a819258f85cee40e6adef53f7f7"}, 
"https://raw.githubusercontent.com/eusip/MIAM/main/vm2/dev.csv": {"num_bytes": 149358, "checksum": "30c5ba7219d113cdb4260488b0f7e515347f8c06ca5e0345b701ccfc229c41be"}, "https://raw.githubusercontent.com/eusip/MIAM/main/vm2/test.csv": {"num_bytes": 149105, "checksum": "a94db1a8ece862084624ffb0774b9b22d180a30a778a19613b963e4c244c9da3"}}, "download_size": 1641453, "post_processing_size": null, "dataset_size": 2287676, "size_in_bytes": 3929129}}
dummy/dihana/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8809b1becaafced1402ccedf528ab36a8c4bae3de01cad9b992250a418590069
+ size 1101
dummy/ilisten/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:95ed08efe28bd23b4af687b0af96c48b817fe93d0928938fb39fd8752bbe875e
+ size 1367
dummy/loria/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd15bcbe33011fc79d8ac8434978e3375069a9e3b28b3a3cdf8343d2741d5eaf
+ size 1266
dummy/maptask/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1bd0b4d17d0e03ca14e1b4e6fd95fa7c951c47e46663ecaf17dcf35870f4321d
+ size 900
dummy/vm2/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:902af73b121bb8c2896412035d4e8d2e2669fbac3d060b40ff76cf12688beff8
+ size 1114
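Each `dummy_data.zip` above is stored as a Git LFS pointer rather than the archive bytes themselves: three `key value` lines giving the spec version, the SHA-256 object id, and the size in bytes. As a quick sketch (pointer text copied from the dihana entry above), the pointer parses like this:

```python
# Parse a Git LFS pointer file into its key/value fields.
# Pointer text is the dihana dummy_data.zip entry from this commit.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:8809b1becaafced1402ccedf528ab36a8c4bae3de01cad9b992250a418590069\n"
    "size 1101\n"
)

# Each non-empty line is "<key> <value>"; split on the first space only.
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())

# The oid value is "<algorithm>:<hex digest>".
oid_algo, oid_hex = fields["oid"].split(":", 1)
```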
miam.py ADDED
@@ -0,0 +1,436 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """The Multilingual dIalogAct benchMark."""
+
+
+ import textwrap
+
+ import pandas as pd
+
+ import datasets
+
+
+ _MIAM_CITATION = """\
+ @unpublished{
+ anonymous2021cross-lingual,
+ title={Cross-Lingual Pretraining Methods for Spoken Dialog},
+ author={Anonymous},
+ journal={OpenReview Preprint},
+ year={2021},
+ url={https://openreview.net/forum?id=c1oDhu_hagR},
+ note={anonymous preprint under review}
+ }
+ """
+
+ _MIAM_DESCRIPTION = """\
+ Multilingual dIalogAct benchMark is a collection of resources for training, evaluating, and
+ analyzing natural language understanding systems specifically designed for spoken language. Datasets
+ are in English, French, German, Italian and Spanish. They cover a variety of domains including
+ spontaneous speech, scripted scenarios, and joint task completion. Some datasets additionally include
+ emotion and/or sentiment labels.
+ """
+
+ _URL = "https://raw.githubusercontent.com/eusip/MIAM/main"
+
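Every config below builds its split URLs from `_URL` plus a corpus subdirectory and a split name; the pattern is uniform enough to sketch in one helper (`make_data_url` is a hypothetical name used for illustration, with the dihana subdirectory as the example):

```python
_URL = "https://raw.githubusercontent.com/eusip/MIAM/main"

# Build the train/dev/test CSV URLs for one corpus, mirroring the
# data_url dicts written out literally in the configs below.
def make_data_url(corpus):
    return {split: f"{_URL}/{corpus}/{split}.csv" for split in ("train", "dev", "test")}

data_url = make_data_url("dihana")
```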
+ DIHANA_DA_DESCRIPTION = {
+     "Afirmacion": "Feedback_positive",
+     "Apertura": "Opening",
+     "Cierre": "Closing",
+     "Confirmacion": "Acknowledge",
+     "Espera": "Hold",
+     "Indefinida": "Undefined",
+     "Negacion": "Feedback_negative",
+     "No_entendido": "Request_clarify",
+     "Nueva_consulta": "New_request",
+     "Pregunta": "Request",
+     "Respuesta": "Reply",
+ }
+
+
+ class MiamConfig(datasets.BuilderConfig):
+     """BuilderConfig for MIAM."""
+
+     def __init__(
+         self,
+         text_features,
+         label_column,
+         data_url,
+         citation,
+         url,
+         label_classes=None,
+         **kwargs,
+     ):
+         """BuilderConfig for MIAM.
+         Args:
+             text_features: `dict[string, string]`, map from the name of the feature
+                 dict for each text field to the name of the column in the csv file
+             label_column: `string`, name of the column in the csv file corresponding
+                 to the label
+             data_url: `dict[string, string]`, urls to download the csv files from
+             citation: `string`, citation for the data set
+             url: `string`, url for information about the data set
+             label_classes: `list[string]`, the list of classes if the label is
+                 categorical. If not provided, then the label will be of type
+                 `datasets.Value('float32')`.
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(MiamConfig, self).__init__(version=datasets.Version("1.0.0", ""), **kwargs)
+         self.text_features = text_features
+         self.label_column = label_column
+         self.label_classes = label_classes
+         self.data_url = data_url
+         self.citation = citation
+         self.url = url
+
+
+ class Miam(datasets.GeneratorBasedBuilder):
+     """The Multilingual dIalogAct benchMark."""
+
+     BUILDER_CONFIGS = [
+         MiamConfig(
+             name="dihana",
+             description=textwrap.dedent(
+                 """\
+                 The Dihana corpus primarily consists of spontaneous speech. The corpus is annotated
+                 at three different levels: the first level holds the generic, task-independent
+                 dialogue acts, while the two additional levels carry task-specific information. We
+                 focus on the 11 first-level tags."""
+             ),
+             text_features={
+                 "Speaker": "Speaker",
+                 "Utterance": "Utterance",
+                 "Dialogue_Act": "Dialogue_Act",
+                 "Dialogue_ID": "Dialogue_ID",
+                 "File_ID": "File_ID",
+             },
+             label_classes=list(DIHANA_DA_DESCRIPTION.keys()),
+             label_column="Dialogue_Act",
+             data_url={
+                 "train": _URL + "/dihana/train.csv",
+                 "dev": _URL + "/dihana/dev.csv",
+                 "test": _URL + "/dihana/test.csv",
+             },
+             citation=textwrap.dedent(
+                 """\
+                 @inproceedings{benedi2006design,
+                 title={Design and acquisition of a telephone spontaneous speech dialogue corpus in Spanish: DIHANA},
+                 author={Bened{\'i}, Jos{\'e}-Miguel and Lleida, Eduardo and Varona, Amparo and Castro, Mar{\'i}a-Jos{\'e} and Galiano, Isabel and Justo, Raquel and L{\'o}pez, I and Miguel, Antonio},
+                 booktitle={Fifth International Conference on Language Resources and Evaluation (LREC)},
+                 pages={1636--1639},
+                 year={2006}
+                 }
+                 @inproceedings{post2013improved,
+                 title={Improved speech-to-text translation with the Fisher and Callhome Spanish--English speech translation corpus},
+                 author={Post, Matt and Kumar, Gaurav and Lopez, Adam and Karakos, Damianos and Callison-Burch, Chris and Khudanpur, Sanjeev},
+                 booktitle={Proc. IWSLT},
+                 year={2013}
+                 }
+                 @article{coria2005predicting,
+                 title={Predicting obligation dialogue acts from prosodic and speaker information},
+                 author={Coria, S and Pineda, L},
+                 journal={Research on Computing Science (ISSN 1665-9899), Centro de Investigacion en Computacion, Instituto Politecnico Nacional, Mexico City},
+                 year={2005}
+                 }"""
+             ),
+             url="",
+         ),
+         MiamConfig(
+             name="ilisten",
+             description=textwrap.dedent(
+                 """\
+                 "itaLIan Speech acT labEliNg" (iLISTEN) is a corpus of dialogue turns annotated
+                 with speech act labels."""
+             ),
+             text_features={
+                 "Speaker": "Speaker",
+                 "Utterance": "Utterance",
+                 "Dialogue_Act": "Dialogue_Act",
+                 "Dialogue_ID": "Dialogue_ID",
+             },
+             label_classes=[
+                 "AGREE",
+                 "ANSWER",
+                 "CLOSING",
+                 "ENCOURAGE-SORRY",
+                 "GENERIC-ANSWER",
+                 "INFO-REQUEST",
+                 "KIND-ATTITUDE_SMALL-TALK",
+                 "OFFER-GIVE-INFO",
+                 "OPENING",
+                 "PERSUASION-SUGGEST",
+                 "QUESTION",
+                 "REJECT",
+                 "SOLICITATION-REQ_CLARIFICATION",
+                 "STATEMENT",
+                 "TALK-ABOUT-SELF",
+             ],
+             label_column="Dialogue_Act",
+             data_url={
+                 "train": _URL + "/ilisten/train.csv",
+                 "dev": _URL + "/ilisten/dev.csv",
+                 "test": _URL + "/ilisten/test.csv",
+             },
+             citation=textwrap.dedent(
+                 """\
+                 @article{basile2018overview,
+                 title={Overview of the EVALITA 2018 itaLIan Speech acT labEliNg (iLISTEN) Task},
+                 author={Basile, Pierpaolo and Novielli, Nicole},
+                 journal={EVALITA Evaluation of NLP and Speech Tools for Italian},
+                 volume={12},
+                 pages={44},
+                 year={2018}
+                 }"""
+             ),
+             url="",
+         ),
+         MiamConfig(
+             name="loria",
+             description=textwrap.dedent(
+                 """\
+                 The LORIA Nancy dialog corpus is derived from human-machine interactions in a serious
+                 game setting."""
+             ),
+             text_features={
+                 "Speaker": "Speaker",
+                 "Utterance": "Utterance",
+                 "Dialogue_Act": "Dialogue_Act",
+                 "Dialogue_ID": "Dialogue_ID",
+                 "File_ID": "File_ID",
+             },
+             label_classes=[
+                 "ack",
+                 "ask",
+                 "find_mold",
+                 "find_plans",
+                 "first_step",
+                 "greet",
+                 "help",
+                 "inform",
+                 "inform_engine",
+                 "inform_job",
+                 "inform_material_space",
+                 "informer_conditioner",
+                 "informer_decoration",
+                 "informer_elcomps",
+                 "informer_end_manufacturing",
+                 "kindAtt",
+                 "manufacturing_reqs",
+                 "next_step",
+                 "no",
+                 "other",
+                 "quality_control",
+                 "quit",
+                 "reqRep",
+                 "security_policies",
+                 "staff_enterprise",
+                 "staff_job",
+                 "studies_enterprise",
+                 "studies_job",
+                 "todo_failure",
+                 "todo_irreparable",
+                 "yes",
+             ],
+             label_column="Dialogue_Act",
+             data_url={
+                 "train": _URL + "/loria/train.csv",
+                 "dev": _URL + "/loria/dev.csv",
+                 "test": _URL + "/loria/test.csv",
+             },
+             citation=textwrap.dedent(
+                 """\
+                 @inproceedings{barahona2012building,
+                 title={Building and exploiting a corpus of dialog interactions between french speaking virtual and human agents},
+                 author={Barahona, Lina Maria Rojas and Lorenzo, Alejandra and Gardent, Claire},
+                 booktitle={The eighth international conference on Language Resources and Evaluation (LREC)},
+                 pages={1428--1435},
+                 year={2012}
+                 }"""
+             ),
+             url="",
+         ),
+         MiamConfig(
+             name="maptask",
+             description=textwrap.dedent(
+                 """\
+                 The HCRC MapTask corpus was collected from pairs of participants collaborating
+                 verbally to reproduce a route on a map. The corpus is small (27k utterances), and
+                 since there is no standard train/dev/test split, performance depends on the split."""
+             ),
+             text_features={
+                 "Speaker": "Speaker",
+                 "Utterance": "Utterance",
+                 "Dialogue_Act": "Dialogue_Act",
+                 "Dialogue_ID": "Dialogue_ID",
+                 "File_ID": "File_ID",
+             },
+             label_classes=[
+                 "acknowledge",
+                 "align",
+                 "check",
+                 "clarify",
+                 "explain",
+                 "instruct",
+                 "query_w",
+                 "query_yn",
+                 "ready",
+                 "reply_n",
+                 "reply_w",
+                 "reply_y",
+             ],
+             label_column="Dialogue_Act",
+             data_url={
+                 "train": _URL + "/maptask/train.csv",
+                 "dev": _URL + "/maptask/dev.csv",
+                 "test": _URL + "/maptask/test.csv",
+             },
+             citation=textwrap.dedent(
+                 """\
+                 @inproceedings{thompson1993hcrc,
+                 title={The HCRC map task corpus: natural dialogue for speech recognition},
+                 author={Thompson, Henry S and Anderson, Anne H and Bard, Ellen Gurman and Doherty-Sneddon,
+                 Gwyneth and Newlands, Alison and Sotillo, Cathy},
+                 booktitle={HUMAN LANGUAGE TECHNOLOGY: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993},
+                 year={1993}
+                 }"""
+             ),
+             url="http://groups.inf.ed.ac.uk/maptask/",
+         ),
+         MiamConfig(
+             name="vm2",
+             description=textwrap.dedent(
+                 """\
+                 The VERBMOBIL corpus consists of transcripts of multi-party meetings hand-annotated
+                 with dialog acts. It is the second largest dataset, with around 110k utterances."""
+             ),
+             text_features={
+                 "Utterance": "Utterance",
+                 "Dialogue_Act": "Dialogue_Act",
+                 "Speaker": "Speaker",
+                 "Dialogue_ID": "Dialogue_ID",
+             },
+             label_classes=[
+                 "ACCEPT",
+                 "BACKCHANNEL",
+                 "BYE",
+                 "CLARIFY",
+                 "CLOSE",
+                 "COMMIT",
+                 "CONFIRM",
+                 "DEFER",
+                 "DELIBERATE",
+                 "DEVIATE_SCENARIO",
+                 "EXCLUDE",
+                 "EXPLAINED_REJECT",
+                 "FEEDBACK",
+                 "FEEDBACK_NEGATIVE",
+                 "FEEDBACK_POSITIVE",
+                 "GIVE_REASON",
+                 "GREET",
+                 "INFORM",
+                 "INIT",
+                 "INTRODUCE",
+                 "NOT_CLASSIFIABLE",
+                 "OFFER",
+                 "POLITENESS_FORMULA",
+                 "REJECT",
+                 "REQUEST",
+                 "REQUEST_CLARIFY",
+                 "REQUEST_COMMENT",
+                 "REQUEST_COMMIT",
+                 "REQUEST_SUGGEST",
+                 "SUGGEST",
+                 "THANK",
+             ],
+             label_column="Dialogue_Act",
+             data_url={
+                 "train": _URL + "/vm2/train.csv",
+                 "dev": _URL + "/vm2/dev.csv",
+                 "test": _URL + "/vm2/test.csv",
+             },
+             citation=textwrap.dedent(
+                 """\
+                 @book{kay1992verbmobil,
+                 title={Verbmobil: A translation system for face-to-face dialog},
+                 author={Kay, Martin},
+                 year={1992},
+                 publisher={University of Chicago Press}
+                 }"""
+             ),
+             url="",
+         ),
+     ]
+
+     def _info(self):
+         features = {text_feature: datasets.Value("string") for text_feature in self.config.text_features.keys()}
+         if self.config.label_classes:
+             features["Label"] = datasets.features.ClassLabel(names=self.config.label_classes)
+         features["Idx"] = datasets.Value("int32")
+         return datasets.DatasetInfo(
+             description=_MIAM_DESCRIPTION,
+             features=datasets.Features(features),
+             homepage=self.config.url,
+             citation=self.config.citation + "\n" + _MIAM_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_files = dl_manager.download(self.config.data_url)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"data_file": data_files["train"], "split": "train"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"data_file": data_files["dev"], "split": "dev"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"data_file": data_files["test"], "split": "test"},
+             ),
+         ]
+
+     def _generate_examples(self, data_file, split):
+         # Keep only the configured text columns; pass an explicit list of
+         # column names rather than a `dict_keys` view.
+         df = pd.read_csv(data_file, delimiter=",", header=0, quotechar='"', dtype=str)[
+             list(self.config.text_features.keys())
+         ]
+
+         rows = df.to_dict(orient="records")
+
+         for n, row in enumerate(rows):
+             example = row
+             example["Idx"] = n
+
+             if self.config.label_column in example:
+                 label = example[self.config.label_column]
+                 example["Label"] = label
+
+             yield example["Idx"], example
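The generator above boils down to: read the split CSV with pandas, keep the configured text columns, number the rows, and copy the dialogue-act column into `Label` (which `ClassLabel` later encodes to an integer). The same logic on a two-row in-memory CSV, using the dihana column layout with invented utterances:

```python
import io

import pandas as pd

# Hypothetical two-row sample in the MIAM CSV layout (column names taken
# from the dihana config; the utterances themselves are invented).
sample_csv = io.StringIO(
    "Speaker,Utterance,Dialogue_Act,Dialogue_ID,File_ID\n"
    'A,"buenos dias",Apertura,1,B001\n'
    'B,"quiero un billete",Pregunta,1,B001\n'
)

text_features = ["Speaker", "Utterance", "Dialogue_Act", "Dialogue_ID", "File_ID"]
label_column = "Dialogue_Act"

# Same read/select as _generate_examples.
df = pd.read_csv(sample_csv, delimiter=",", header=0, quotechar='"', dtype=str)[text_features]

examples = []
for n, row in enumerate(df.to_dict(orient="records")):
    row["Idx"] = n
    row["Label"] = row[label_column]  # still a string; ClassLabel encodes it later
    examples.append((row["Idx"], row))
```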