afaji committed
Commit
89922d6
1 Parent(s): 4b8ca92

format adjustment + add readme

Files changed (2):
  1. README.md +223 -0
  2. mintaka.py +8 -9
README.md ADDED
@@ -0,0 +1,223 @@
---
annotations_creators:
- expert-generated
language_creators:
- found
license:
- cc-by-4.0
language:
- ar
- de
- ja
- hi
- pt
- en
- es
- it
- fr
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: mintaka
pretty_name: Mintaka
language_bcp47:
- ar-SA
- de-DE
- ja-JP
- hi-IN
- pt-PT
- en-US
- es-ES
- it-IT
- fr-FR
---

# Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description
- **Homepage:** https://github.com/amazon-science/mintaka
- **Repository:** https://github.com/amazon-science/mintaka
- **Paper:** https://aclanthology.org/2022.coling-1.138/
- **Point of Contact:** [GitHub](https://github.com/amazon-science/mintaka/issues)

### Dataset Summary

Mintaka is a complex, natural, and multilingual question answering (QA) dataset composed of 20,000 question-answer pairs elicited from MTurk workers and annotated with Wikidata question and answer entities. Full details on the Mintaka dataset can be found in our paper: https://aclanthology.org/2022.coling-1.138/

To build Mintaka, we explicitly collected questions in 8 complexity types, as well as generic questions:

- Count (e.g., Q: How many astronauts have been elected to Congress? A: 4)
- Comparative (e.g., Q: Is Mont Blanc taller than Mount Rainier? A: Yes)
- Superlative (e.g., Q: Who was the youngest tribute in the Hunger Games? A: Rue)
- Ordinal (e.g., Q: Who was the last Ptolemaic ruler of Egypt? A: Cleopatra)
- Multi-hop (e.g., Q: Who was the quarterback of the team that won Super Bowl 50? A: Peyton Manning)
- Intersection (e.g., Q: Which movie was directed by Denis Villeneuve and stars Timothee Chalamet? A: Dune)
- Difference (e.g., Q: Which Mario Kart game did Yoshi not appear in? A: Mario Kart Live: Home Circuit)
- Yes/No (e.g., Q: Has Lady Gaga ever made a song with Ariana Grande? A: Yes)
- Generic (e.g., Q: Where was Michael Phelps born? A: Baltimore, Maryland)

We collected questions about 8 categories: Movies, Music, Sports, Books, Geography, Politics, Video Games, and History.

Mintaka is one of the first large-scale complex, natural, and multilingual datasets that can be used for end-to-end question-answering models.

### Supported Tasks and Leaderboards

The dataset can be used to train a model for question answering. To ensure comparability, please refer to our evaluation script: https://github.com/amazon-science/mintaka#evaluation
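
For a quick experiment, the data can be pulled through the `datasets` library. A minimal sketch, assuming the loader is published under the `AmazonScience/mintaka` hub ID with per-language configurations (both are assumptions; adjust to the actual repository):

```python
from datasets import load_dataset

# Assumed hub ID and config name; adjust to the actual repository.
dataset = load_dataset("AmazonScience/mintaka", "en")

# Each sample pairs a question with its answer text and entity annotations.
sample = dataset["train"][0]
print(sample["question"], "->", sample["answerText"])
```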

### Languages

All questions were written in English and translated into 8 additional languages: Arabic, French, German, Hindi, Italian, Japanese, Portuguese, and Spanish.

## Dataset Structure

### Data Instances

An example of 'train' looks as follows.

```json
{
  "id": "a9011ddf",
  "lang": "en",
  "question": "What is the seventh tallest mountain in North America?",
  "answerText": "Mount Lucania",
  "category": "geography",
  "complexityType": "ordinal",
  "questionEntity": [
    {
      "name": "Q49",
      "entityType": "entity",
      "label": "North America",
      "mention": "North America",
      "span": [40, 53]
    },
    {
      "name": "7",
      "entityType": "ordinal",
      "label": "",
      "mention": "seventh",
      "span": [12, 19]
    }
  ],
  "answerEntity": [
    {
      "name": "Q1153188",
      "label": "Mount Lucania"
    }
  ]
}
```

### Data Fields

The data fields are the same among all splits.

`id`: a unique ID for the given sample.

`lang`: the language of the question.

`question`: the original question elicited in the corresponding language.

`answerText`: the original answer text elicited in English.

`category`: the category of the question. Options are: geography, movies, history, books, politics, music, videogames, or sports.

`complexityType`: the complexity type of the question. Options are: ordinal, intersection, count, superlative, yesno, comparative, multihop, difference, or generic.

`questionEntity`: a list of annotated question entities identified by crowd workers.
```
{
    "name": The Wikidata Q-code or numerical value of the entity
    "entityType": The type of the entity. Options are:
                  entity, cardinal, ordinal, date, time, percent, quantity, or money
    "label": The label of the Wikidata Q-code
    "mention": The entity as it appears in the English question text. Will be None for non-English samples.
    "span": The start and end characters of the mention in the English question text. Will be empty for non-English samples.
}
```

`answerEntity`: a list of annotated answer entities identified by crowd workers.
```
{
    "name": The Wikidata Q-code or numerical value of the entity
    "label": The label of the Wikidata Q-code
}
```
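
To show how these fields fit together, here is a small sketch that recovers each entity mention from its character span, using the example instance from above; `extract_mentions` is an illustrative helper, not part of the dataset tooling:

```python
def extract_mentions(sample):
    """Recover (name, mention) pairs from character spans (English samples only)."""
    mentions = []
    for entity in sample["questionEntity"]:
        start, end = entity["span"]
        # The span indexes into the question text: question[start:end] == mention.
        mentions.append((entity["name"], sample["question"][start:end]))
    return mentions

sample = {
    "question": "What is the seventh tallest mountain in North America?",
    "questionEntity": [
        {"name": "Q49", "span": [40, 53]},
        {"name": "7", "span": [12, 19]},
    ],
}
print(extract_mentions(sample))  # [('Q49', 'North America'), ('7', 'seventh')]
```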

### Data Splits

For each language, we split into train (14,000 samples), dev (2,000 samples), and test (4,000 samples) sets.
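
A quick sanity check of the split sizes for one language (again assuming the hypothetical `AmazonScience/mintaka` hub ID; the dev split may be registered as `validation` depending on the loader):

```python
from datasets import load_dataset

dataset = load_dataset("AmazonScience/mintaka", "en")
for split_name, split in dataset.items():
    print(split_name, len(split))
# Expected per language: train 14000, dev/validation 2000, test 4000
```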

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

The corpus is free of personal or sensitive information.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

Amazon Alexa AI.

### Licensing Information

This project is licensed under the CC-BY-4.0 License.

### Citation Information

Please cite the following paper when using this dataset.

```bibtex
@inproceedings{sen-etal-2022-mintaka,
    title = "Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering",
    author = "Sen, Priyanka and
      Aji, Alham Fikri and
      Saffari, Amir",
    booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
    month = oct,
    year = "2022",
    address = "Gyeongju, Republic of Korea",
    publisher = "International Committee on Computational Linguistics",
    url = "https://aclanthology.org/2022.coling-1.138",
    pages = "1604--1619"
}
```

### Contributions

Thanks to [@afaji](https://github.com/afaji) for adding this dataset.
mintaka.py CHANGED
@@ -19,9 +19,7 @@ _DESCRIPTION = """\
 _CITATION = """\
 @inproceedings{sen-etal-2022-mintaka,
     title = "Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering",
-    author = "Sen, Priyanka and
-    Aji, Alham Fikri and
-    Saffari, Amir",
+    author = "Sen, Priyanka and Aji, Alham Fikri and Saffari, Amir",
     booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
     month = oct,
     year = "2022",
@@ -131,15 +129,16 @@ class Mintaka(datasets.GeneratorBasedBuilder):
 
         with open(file, encoding='utf-8') as json_file:
             data = json.load(json_file)
-            for sample in data:
-                for lang in langs:
+            for lang in langs:
+                for sample in data:
                     questionEntity = [
                         {
                             "name": str(qe["name"]),
                             "entityType": qe["entityType"],
                             "label": qe["label"] if "label" in qe else "",
-                            "mention": qe["mention"],
-                            "span": qe["span"],
+                            # mention and span only apply to the English question
+                            "mention": qe["mention"] if lang == "en" else None,
+                            "span": qe["span"] if lang == "en" else [],
                         } for qe in sample["questionEntity"]
                     ]
 
@@ -149,12 +148,13 @@ class Mintaka(datasets.GeneratorBasedBuilder):
                     elif sample['answer']["answerType"] == "numerical" and "supportingEnt" in sample["answer"]:
                         answers = sample['answer']['supportingEnt']
 
+                    # helper to get the label for the corresponding language
                     def get_label(labels, lang):
                         if lang in labels:
                             return labels[lang]
                         if 'en' in labels:
                             return labels['en']
-                        return ""
+                        return None
 
                     answerEntity = [
                         {
@@ -172,7 +172,6 @@ class Mintaka(datasets.GeneratorBasedBuilder):
                         "complexityType": sample["complexityType"],
                         "questionEntity": questionEntity,
                         "answerEntity": answerEntity,
-
                     }
 
                     key_ += 1
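
The loop swap in the second hunk changes only the order in which examples are yielded: all samples for one language now come out before the next language, instead of cycling through languages per sample. A toy sketch of the two orderings (stand-in data, not the actual loader):

```python
# Stand-ins for the JSON samples and the configured languages.
data = ["q1", "q2"]
langs = ["en", "de"]

# Before: outer loop over samples, cycling through languages per sample.
before = [(lang, s) for s in data for lang in langs]
# -> [('en', 'q1'), ('de', 'q1'), ('en', 'q2'), ('de', 'q2')]

# After: outer loop over languages, so examples are grouped by language.
after = [(lang, s) for lang in langs for s in data]
# -> [('en', 'q1'), ('en', 'q2'), ('de', 'q1'), ('de', 'q2')]
```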