Commit e3ae111 (0 parents), committed by system (HF staff)

Update files from the datasets library (from 1.2.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,218 @@
+ ---
+ annotations_creators:
+ - crowdsourced
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - cc-by-sa-3-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - extended|natural_questions
+ - original
+ task_categories:
+ - question-answering
+ task_ids:
+ - open-domain-qa
+ ---
+
+ # Dataset Card for AmbigQA: Answering Ambiguous Open-domain Questions
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://nlp.cs.washington.edu/ambigqa/
+ - **Repository:** https://github.com/shmsw25/AmbigQA
+ - **Paper:** https://arxiv.org/pdf/2004.10645.pdf
+
+ ### Dataset Summary
+
+ AmbigNQ is a dataset covering 14,042 questions from NQ-open, an existing open-domain QA benchmark, annotated for diverse types of ambiguity. We find that over half of the questions in NQ-open are ambiguous; the types of ambiguity are diverse and sometimes subtle, and many of them are only apparent after examining evidence provided by a very large text corpus.
+ We provide two distributions of AmbigNQ: a `full` version with all annotation metadata and a `light` version with only inputs and outputs.
+
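+ As a minimal usage sketch (assuming the dataset is registered under the name `ambig_qa`, the builder name used in this repository), both configurations can be loaded with the `datasets` library:
+
+ ```python
+ from datasets import load_dataset
+
+ # "light" keeps only inputs and outputs; "full" adds all annotation metadata
+ ambig_light = load_dataset("ambig_qa", "light")
+ ambig_full = load_dataset("ambig_qa", "full")
+
+ print(ambig_light)  # DatasetDict with "train" (10,036 examples) and "validation" (2,002 examples)
+ print(ambig_full["train"][0]["question"])
+ ```
+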
+ ### Supported Tasks and Leaderboards
+
+ `question-answering`
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ An example from the dataset looks as follows:
+ ```
+ {'annotations': {'answer': [[]],
+ 'qaPairs': [{'answer': [['April 19, 1987'], ['December 17, 1989']],
+ 'question': ['When did the Simpsons first air on television as an animated short on the Tracey Ullman Show?',
+ 'When did the Simpsons first air as a half-hour prime time show?']}],
+ 'type': ['multipleQAs']},
+ 'id': '-4469503464110108318',
+ 'nq_answer': ['December 17 , 1989'],
+ 'nq_doc_title': 'The Simpsons',
+ 'question': 'When did the simpsons first air on television?',
+ 'used_queries': {'query': ['When did the simpsons first air on television?'],
+ 'results': [{'snippet': ['The <b>Simpsons</b> is an American animated <b>television</b> sitcom starring the animated \nSimpson family, ... Since its <b>debut</b> on December 17, 1989, the show <b>has</b> \nbroadcast 673 episodes and its 30th season started ... The <b>Simpsons first</b> season \n<b>was</b> the Fox network&#39;s <b>first TV</b> series to rank among a season&#39;s top 30 highest-\nrated shows.',
+ 'The <b>Simpsons</b> is an American animated sitcom created by Matt Groening for the \nFox ... Since its <b>debut</b> on December 17, 1989, 674 episodes of The <b>Simpsons</b> \nhave been broadcast. ... When producer James L. Brooks <b>was</b> working on the \n<b>television</b> variety show The Tracey Ullman Show, he decided to include small \nanimated&nbsp;...',
+ '... in shorts from The Tracey Ullman Show as their <b>television debut</b> in 1987. The \n<b>Simpsons</b> shorts are a series of animated shorts that <b>aired</b> as a recurring \nsegment on Fox variety <b>television</b> series The Tracey ... The final short to <b>air was</b> &quot;\n<b>TV Simpsons</b>&quot;, originally airing on May 14, 1989. The <b>Simpsons</b> later debuted on\n&nbsp;...',
+ 'The <b>first</b> season of the American animated <b>television</b> series The <b>Simpsons</b> \noriginally <b>aired</b> on the Fox network between December 17, 1989, and May 13, \n1990, beginning with the Christmas special &quot;<b>Simpsons</b> Roasting on an Open Fire\n&quot;. The executive producers for the <b>first</b> production season <b>were</b> Matt Groening,&nbsp;...',
+ 'The <b>Simpsons</b> is an American animated <b>television</b> sitcom created by Matt \nGroening for the Fox ... Since its <b>debut</b> on December 17, 1989, The <b>Simpsons</b> \n<b>has</b> broadcast 674 episodes. The show holds several American <b>television</b> \nlongevity&nbsp;...',
+ 'The opening sequence of the American animated <b>television</b> series The <b>Simpsons</b> \nis among the most popular opening sequences in <b>television</b> and is accompanied \nby one of <b>television&#39;s</b> most recognizable theme songs. The <b>first</b> episode to use \nthis intro <b>was</b> the series&#39; second episode &quot;Bart the ... <b>was</b> the <b>first</b> episode of The \n<b>Simpsons</b> to <b>air</b> in 720p high-definition <b>television</b>,&nbsp;...',
+ '&quot;<b>Simpsons</b> Roasting on an Open Fire&quot;, titled onscreen as &quot;The <b>Simpsons</b> \nChristmas Special&quot;, is the premiere episode of the American animated <b>TV</b> series \nThe <b>Simpsons</b>, ... The show <b>was</b> originally intended to <b>debut</b> earlier in 1989 with &quot;\nSome Enchanted Evening&quot;, but due to animation problems with that episode, the \nshow&nbsp;...',
+ '&quot;Stark Raving Dad&quot; is the <b>first</b> episode of the third season of the American \nanimated <b>television</b> series The <b>Simpsons</b>. It <b>first aired</b> on the Fox network in the \nUnited States on September 19, 1991. ... The <b>Simpsons was</b> the second highest \nrated show on Fox the week it <b>aired</b>, behind Married... with Children. &quot;Stark \nRaving Dad,&quot;&nbsp;...',
+ 'The <b>Simpsons</b>&#39; twentieth season <b>aired</b> on Fox from September 28, 2008 to May \n17, 2009. With this season, the show tied Gunsmoke as the longest-running \nAmerican primetime <b>television</b> series in terms of total number ... It <b>was</b> the <b>first</b>-\never episode of the show to <b>air</b> in Europe before being seen in the United States.',
+ 'The animated <b>TV</b> show The <b>Simpsons</b> is an American English language \nanimated sitcom which ... The <b>Simpsons was</b> dubbed for the <b>first</b> time in Punjabi \nand <b>aired</b> on Geo <b>TV</b> in Pakistan. The name of the localised Punjabi version is \nTedi Sim&nbsp;...'],
+ 'title': ['History of The Simpsons',
+ 'The Simpsons',
+ 'The Simpsons shorts',
+ 'The Simpsons (season 1)',
+ 'List of The Simpsons episodes',
+ 'The Simpsons opening sequence',
+ 'Simpsons Roasting on an Open Fire',
+ 'Stark Raving Dad',
+ 'The Simpsons (season 20)',
+ 'Non-English versions of The Simpsons']}]},
+ 'viewed_doc_titles': ['The Simpsons']}
+ ```
+
+ ### Data Fields
+
+ The `full` version has the following fields:
+ ```
+ {'id': Value(dtype='string', id=None),
+ 'question': Value(dtype='string', id=None),
+ 'annotations': Sequence(feature={'type': Value(dtype='string', id=None), 'answer': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'qaPairs': Sequence(feature={'question': Value(dtype='string', id=None), 'answer': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, length=-1, id=None)}, length=-1, id=None),
+ 'viewed_doc_titles': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
+ 'used_queries': Sequence(feature={'query': Value(dtype='string', id=None), 'results': Sequence(feature={'title': Value(dtype='string', id=None), 'snippet': Value(dtype='string', id=None)}, length=-1, id=None)}, length=-1, id=None),
+ 'nq_answer': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None),
+ 'nq_doc_title': Value(dtype='string', id=None)}
+ ```
+ In the original data format, `annotations` has different keys depending on whether `type` is `singleAnswer` or `multipleQAs`; this implementation instead uses an empty list `[]` for the key that is not present, as in the snippet below.
+
+ Please refer to [Dataset Contents](https://github.com/shmsw25/AmbigQA#dataset-contents) for more details.
+
+ ```python
+ from datasets import load_dataset
+
+ train_light_dataset = load_dataset("ambig_qa", "light", split="train")
+
+ for example in train_light_dataset:
+     for i, t in enumerate(example['annotations']['type']):
+         if t == 'singleAnswer':
+             # use example['annotations']['answer'][i]
+             # example['annotations']['qaPairs'][i] is []
+             print(example['annotations']['answer'][i])
+         else:
+             # use example['annotations']['qaPairs'][i]
+             # example['annotations']['answer'][i] is []
+             print(example['annotations']['qaPairs'][i])
+ ```
+
+ The `light` version only has the `id`, `question` and `annotations` fields.
+
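+ As an illustrative check (again assuming the `ambig_qa` dataset name), the extra fields of the `full` configuration can be listed by comparing column names:
+
+ ```python
+ from datasets import load_dataset
+
+ full = load_dataset("ambig_qa", "full", split="validation")
+ light = load_dataset("ambig_qa", "light", split="validation")
+
+ # fields present only in the full configuration
+ print(sorted(set(full.column_names) - set(light.column_names)))
+ # expected: ['nq_answer', 'nq_doc_title', 'used_queries', 'viewed_doc_titles']
+ ```
+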
+ ### Data Splits
+
+ - train: 10036
+ - validation: 2002
+
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ - Wikipedia
+ - NQ-open:
+ ```
+ @article{ kwiatkowski2019natural,
+   title={ Natural questions: a benchmark for question answering research},
+   author={ Kwiatkowski, Tom and Palomaki, Jennimaria and Redfield, Olivia and Collins, Michael and Parikh, Ankur and Alberti, Chris and Epstein, Danielle and Polosukhin, Illia and Devlin, Jacob and Lee, Kenton and others },
+   journal={ Transactions of the Association for Computational Linguistics },
+   year={ 2019 }
+ }
+ ```
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [CC BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/)
+
+ ### Citation Information
+ ```
+ @inproceedings{ min2020ambigqa,
+   title={ {A}mbig{QA}: Answering Ambiguous Open-domain Questions },
+   author={ Min, Sewon and Michael, Julian and Hajishirzi, Hannaneh and Zettlemoyer, Luke },
+   booktitle={ EMNLP },
+   year={2020}
+ }
+ ```
ambig_qa.py ADDED
@@ -0,0 +1,151 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """AmbigQA: Answering Ambiguous Open-domain Questions"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """\
+ @inproceedings{ min2020ambigqa,
+     title={ {A}mbig{QA}: Answering Ambiguous Open-domain Questions },
+     author={ Min, Sewon and Michael, Julian and Hajishirzi, Hannaneh and Zettlemoyer, Luke },
+     booktitle={ EMNLP },
+     year={2020}
+ }
+ """
+
+ _DESCRIPTION = """\
+ AmbigNQ, a dataset covering 14,042 questions from NQ-open, an existing open-domain QA benchmark. We find that over half of the questions in NQ-open are ambiguous. The types of ambiguity are diverse and sometimes subtle, many of which are only apparent after examining evidence provided by a very large text corpus. AMBIGNQ, a dataset with
+ 14,042 annotations on NQ-OPEN questions containing diverse types of ambiguity.
+ We provide two distributions of our new dataset AmbigNQ: a full version with all annotation metadata and a light version with only inputs and outputs.
+ """
+ _HOMEPAGE = "https://nlp.cs.washington.edu/ambigqa/"
+ _LICENSE = "CC BY-SA 3.0"
+
+ _URL = "https://nlp.cs.washington.edu/ambigqa/data/"
+ _URLS = {
+     "light": _URL + "ambignq_light.zip",
+     "full": _URL + "ambignq.zip",
+ }
+
+
+ class AmbigQa(datasets.GeneratorBasedBuilder):
+     """AmbigQA dataset"""
+
+     VERSION = datasets.Version("1.0.0")
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="light",
+             version=VERSION,
+             description="AmbigNQ light version with only inputs and outputs",
+         ),
+         datasets.BuilderConfig(
+             name="full",
+             version=VERSION,
+             description="AmbigNQ full version with all annotation metadata",
+         ),
+     ]
+     DEFAULT_CONFIG_NAME = "full"
+
+     def _info(self):
+         features_dict = {
+             "id": datasets.Value("string"),
+             "question": datasets.Value("string"),
+             "annotations": datasets.features.Sequence(
+                 {
+                     "type": datasets.Value("string"),  # datasets.ClassLabel(names = ["singleAnswer","multipleQAs"])
+                     "answer": datasets.features.Sequence(datasets.Value("string")),
+                     "qaPairs": datasets.features.Sequence(
+                         {
+                             "question": datasets.Value("string"),
+                             "answer": datasets.features.Sequence(datasets.Value("string")),
+                         }
+                     ),
+                 }
+             ),
+         }
+         if self.config.name == "full":
+
+             detail_features = {
+                 "viewed_doc_titles": datasets.features.Sequence(datasets.Value("string")),
+                 "used_queries": datasets.features.Sequence(
+                     {
+                         "query": datasets.Value("string"),
+                         "results": datasets.features.Sequence(
+                             {
+                                 "title": datasets.Value("string"),
+                                 "snippet": datasets.Value("string"),
+                             }
+                         ),
+                     }
+                 ),
+                 "nq_answer": datasets.features.Sequence(datasets.Value("string")),
+                 "nq_doc_title": datasets.Value("string"),
+             }
+             features_dict.update(detail_features)
+
+         features = datasets.Features(features_dict)
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # download and extract URLs
+         urls_to_download = _URLS
+         downloaded_files = dl_manager.download_and_extract(urls_to_download)
+
+         train_file_name = "train.json" if self.config.name == "full" else "train_light.json"
+         dev_file_name = "dev.json" if self.config.name == "full" else "dev_light.json"
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepath": os.path.join(downloaded_files[self.config.name], train_file_name)},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"filepath": os.path.join(downloaded_files[self.config.name], dev_file_name)},
+             ),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yields examples."""
+
+         with open(filepath, encoding="utf-8") as f:
+             data = json.load(f)
+             for example in data:
+                 id_ = example["id"]
+                 annotations = example["annotations"]
+                 # Add this because we cannot have None values (all keys in the schema should be present)
+                 for an in annotations:
+                     if "qaPairs" not in an:
+                         an["qaPairs"] = []
+                     if "answer" not in an:
+                         an["answer"] = []
+
+                 yield id_, example
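
As a quick local check (assuming the script above is saved as `ambig_qa.py` in the working directory), the builder can be exercised directly by pointing `load_dataset` at the script file, which mirrors what `load_dataset("ambig_qa", ...)` does once the script is on the Hub:

```python
from datasets import load_dataset

# Load directly from the local builder script; the second argument selects the configuration.
dataset = load_dataset("./ambig_qa.py", "light")
print(dataset["train"].features)  # id, question, annotations
print(dataset["validation"][0])
```
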
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"light": {"description": "AmbigNQ, a dataset covering 14,042 questions from NQ-open, an existing open-domain QA benchmark. We find that over half of the questions in NQ-open are ambiguous. The types of ambiguity are diverse and sometimes subtle, many of which are only apparent after examining evidence provided by a very large text corpus. AMBIGNQ, a dataset with\n14,042 annotations on NQ-OPEN questions containing diverse types of ambiguity.\nWe provide two distributions of our new dataset AmbigNQ: a full version with all annotation metadata and a light version with only inputs and outputs.\n", "citation": "@inproceedings{ min2020ambigqa,\n title={ {A}mbig{QA}: Answering Ambiguous Open-domain Questions },\n author={ Min, Sewon and Michael, Julian and Hajishirzi, Hannaneh and Zettlemoyer, Luke },\n booktitle={ EMNLP },\n year={2020}\n}\n", "homepage": "https://nlp.cs.washington.edu/ambigqa/", "license": "CC BY-SA 3.0", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "annotations": {"feature": {"type": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "qaPairs": {"feature": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "ambig_qa", "config_name": "light", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 2739732, "num_examples": 10036, "dataset_name": "ambig_qa"}, "validation": {"name": "validation", "num_bytes": 805808, "num_examples": 2002, "dataset_name": "ambig_qa"}}, "download_checksums": {"https://nlp.cs.washington.edu/ambigqa/data/ambignq_light.zip": {"num_bytes": 1061383, "checksum": "3f5dada69dec05cef1533a64945cd7bafde1aa94b0cdd6fa9a22f881206220db"}, "https://nlp.cs.washington.edu/ambigqa/data/ambignq.zip": {"num_bytes": 18639517, "checksum": "e85cec5909f076c6f584322c7f05cae44dcacaec93758c110a26fcceaa8da0ce"}}, "download_size": 19700900, "post_processing_size": null, "dataset_size": 3545540, "size_in_bytes": 23246440}, "full": {"description": "AmbigNQ, a dataset covering 14,042 questions from NQ-open, an existing open-domain QA benchmark. We find that over half of the questions in NQ-open are ambiguous. The types of ambiguity are diverse and sometimes subtle, many of which are only apparent after examining evidence provided by a very large text corpus. 
AMBIGNQ, a dataset with\n14,042 annotations on NQ-OPEN questions containing diverse types of ambiguity.\nWe provide two distributions of our new dataset AmbigNQ: a full version with all annotation metadata and a light version with only inputs and outputs.\n", "citation": "@inproceedings{ min2020ambigqa,\n title={ {A}mbig{QA}: Answering Ambiguous Open-domain Questions },\n author={ Min, Sewon and Michael, Julian and Hajishirzi, Hannaneh and Zettlemoyer, Luke },\n booktitle={ EMNLP },\n year={2020}\n}\n", "homepage": "https://nlp.cs.washington.edu/ambigqa/", "license": "CC BY-SA 3.0", "features": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "annotations": {"feature": {"type": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "qaPairs": {"feature": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "viewed_doc_titles": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "used_queries": {"feature": {"query": {"dtype": "string", "id": null, "_type": "Value"}, "results": {"feature": {"title": {"dtype": "string", "id": null, "_type": "Value"}, "snippet": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "nq_answer": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "nq_doc_title": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "ambig_qa", "config_name": "full", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 43538733, "num_examples": 10036, "dataset_name": "ambig_qa"}, "validation": {"name": "validation", "num_bytes": 15383368, "num_examples": 2002, "dataset_name": "ambig_qa"}}, "download_checksums": {"https://nlp.cs.washington.edu/ambigqa/data/ambignq_light.zip": {"num_bytes": 1061383, "checksum": "3f5dada69dec05cef1533a64945cd7bafde1aa94b0cdd6fa9a22f881206220db"}, "https://nlp.cs.washington.edu/ambigqa/data/ambignq.zip": {"num_bytes": 18639517, "checksum": "e85cec5909f076c6f584322c7f05cae44dcacaec93758c110a26fcceaa8da0ce"}}, "download_size": 19700900, "post_processing_size": null, "dataset_size": 58922101, "size_in_bytes": 78623001}}
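
As a small sketch of how this metadata can be consumed (assuming the file is available locally as `dataset_infos.json`), the recorded split sizes and download checksums can be read back with the standard `json` module:

```python
import json

with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

for config_name, info in infos.items():
    # number of examples per split, e.g. light -> {'train': 10036, 'validation': 2002}
    sizes = {split: s["num_examples"] for split, s in info["splits"].items()}
    print(config_name, sizes)
    for url, meta in info["download_checksums"].items():
        print("  ", url, meta["num_bytes"], "bytes")
```
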
dummy/full/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5037dd672daadfabbf41c64b5fe953ef9f551c9ee2efff9130a55b95c26b9225
+ size 19021
dummy/light/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7a5c8a6b2bbc881fd03963cf2152cf314c58b374979b84be516a4dd5fc225763
+ size 19021