system (HF staff) committed on
Commit f6d6f92
0 Parent(s):

Update files from the datasets library (from 1.7.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.7.0
Files changed (5)
  1. .gitattributes +27 -0
  2. README.md +193 -0
  3. dataset_infos.json +1 -0
  4. dummy/1.1.0/dummy_data.zip +3 -0
  5. gooaq.py +121 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,193 @@
---
annotations_creators:
- expert-generated
language_creators:
- machine-generated
languages:
- en
licenses:
- apache-2.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- open-domain-qa
paperswithcode_id: gooaq
---

# Dataset Card for GooAQ

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [GooAQ 🥑: Google Answers to Google Questions!](https://github.com/allenai/gooaq)
- **Repository:** [GooAQ 🥑: Google Answers to Google Questions!](https://github.com/allenai/gooaq)
- **Paper:** [GOOAQ: Open Question Answering with Diverse Answer Types](https://arxiv.org/abs/2104.08727)
- **Point of Contact:** [Daniel Khashabi](mailto:danielk@allenai.org)

### Dataset Summary

GooAQ is a large-scale dataset with a variety of answer types. This dataset contains over
5 million questions and 3 million answers collected from Google. GooAQ questions are collected
semi-automatically from the Google search engine using its autocomplete feature. This results in
naturalistic questions of practical interest that are nonetheless short and expressed using simple
language. GooAQ answers are mined from Google's responses to our collected questions, specifically from
the answer boxes in the search results. This yields a rich space of answer types, containing both
textual answers (short and long) as well as more structured ones such as collections.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The dataset contains samples in English only.

## Dataset Structure

### Data Instances

Each row of the data file looks like this:
```
{
    "id": 3339543,
    "question": "what is the difference between collagen and whey protein?",
    "short_answer": None,
    "answer": "The main differences between the amino acid profiles of whey and collagen are that whey contains all 9 essential amino acids, while collagen only has 8. ... Collagen is a fibrous protein found in the skin, cartilage, and bones of animals whereas whey comes from milk.",
    "answer_type": "feat_snip"
}
```
The questions (`question`) are collected via Google auto-complete.
The answer responses (`short_answer` and `answer`) were collected from Google's answer boxes.
The answer types (`answer_type`) are inferred from the HTML content of Google's response.
The dominant types in the current dataset are:
- `feat_snip`: explanatory responses; the majority of the question/response pairs are of this type.
- `collection`: list responses (e.g., steps to accomplish something).
- `knowledge`: typically short responses for knowledge-seeking questions.
- `unit_conv`: questions about converting units.
- `time_conv`: questions about converting times.
- `curr_conv`: questions about converting currencies.

Dataset instances whose answer type is not among these dominant types are marked with the label -1.

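For orientation, here is a minimal loading sketch using the `datasets` library; the guard for the -1 sentinel reflects the loader script shown later in this commit.

```python
from datasets import load_dataset

# Load the single train split defined by gooaq.py (downloads ~1.5 GB).
gooaq = load_dataset("gooaq", split="train")

example = gooaq[0]
print(example["question"])

# `answer_type` is stored as a ClassLabel id; -1 marks instances outside
# the six dominant classes, so guard before mapping the id to its name.
label = example["answer_type"]
names = gooaq.features["answer_type"].names
print(names[label] if label >= 0 else "other")
```
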
### Data Fields

- `id`: an `int` feature.
- `question`: a `string` feature.
- `short_answer`: a `string` feature (may be `None` in some cases).
- `answer`: a `string` feature (may be `None` in some cases).
- `answer_type`: a classification label, with possible values `feat_snip`, `collection`, `knowledge`, `unit_conv`, `time_conv`, and `curr_conv`; instances outside these classes carry the value -1.

### Data Splits

This dataset has a single train split. The number of samples is given below:

|       | Train     |
| ----- | --------- |
| GooAQ | 5,030,530 |
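
The split metadata can be checked without downloading the data itself, assuming a `datasets` release that provides `load_dataset_builder` (newer than the 1.7.0 release this commit tracks):

```python
from datasets import load_dataset_builder

# Inspect split sizes from the dataset metadata only.
builder = load_dataset_builder("gooaq")
print(builder.info.splits["train"].num_examples)  # 5030530
```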

## Dataset Creation

### Curation Rationale

While day-to-day questions come with a variety of answer types, the current question-answering (QA)
literature has failed to adequately address this answer diversity. Many of the everyday questions
that humans deal with and pose to search engines have a more diverse set of responses. Their answer can be a multi-sentence description (a snippet) (e.g., ‘what is’ or ‘can you’ questions), a collection of items such as ingredients (‘what are’, ‘things to’) or of steps towards a goal such as unlocking a phone (‘how to’), etc. Even when the answer is short, it can have richer types, e.g., unit conversion, time zone conversion, or various kinds of knowledge look-up (‘how much’, ‘when is’, etc.).
Such answer type diversity is not represented in any existing dataset.

### Source Data

#### Initial Data Collection and Normalization

Constructing this dataset involved two main steps: extracting questions from search auto-complete, and extracting answers from answer boxes.

1) Query Extraction: To extract a rich yet natural set of questions, the authors used Google auto-completion. They start with a seed set of question terms (e.g., “who”, “where”, etc.) and bootstrap from this set by repeatedly querying prefixes of previously extracted questions, in order to discover longer and richer sets of questions, as sketched below. Questions extracted from the autocomplete algorithm are highly reflective of popular questions posed by users of Google. They filter out any questions shorter than 5 tokens, as these are often incomplete questions. This process yielded roughly 5M questions, collected over a span of 6 months. The average length of the questions is about 8 tokens.

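A rough illustration of this bootstrapping loop (a sketch, not the authors' code: `autocomplete(prefix)` is a hypothetical stand-in for whatever client was used to query Google's suggestion endpoint):

```python
from collections import deque

SEED_TERMS = ["who", "what", "where", "when", "why", "how"]
MIN_TOKENS = 5  # questions shorter than 5 tokens are dropped as incomplete


def autocomplete(prefix):
    """Hypothetical stand-in: return Google's suggestions for `prefix`."""
    raise NotImplementedError


def bootstrap_questions(max_questions=5_000_000):
    questions = set()
    frontier = deque(SEED_TERMS)
    while frontier and len(questions) < max_questions:
        prefix = frontier.popleft()
        for suggestion in autocomplete(prefix):
            if suggestion in questions:
                continue
            if len(suggestion.split()) >= MIN_TOKENS:
                questions.add(suggestion)
            # Feed the suggestion back in as a new prefix to discover
            # longer and richer questions.
            frontier.append(suggestion)
    return questions
```
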
2) Answer Extraction: They rely on the Google answer boxes shown at the top of the search results when the questions are issued to Google. There are a variety of answer boxes. The most common kind involves highlighted sentences (extracted from various websites) that contain the answer to a given question. These form the snippet and collection answers in GOOAQ. In some cases, the answer box shows the answer directly, possibly in addition to the textual snippet. These form the short answers in GOOAQ.

They first scrape the search results for all questions. This is the main extraction bottleneck, which was done over a span of 2 months. Subsequently, they extract answer strings from the HTML content of the search results. Answer types are also inferred at this stage, based on the HTML tags around the answer, roughly as sketched below.

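A sketch of what this type-inference step might look like (the markers below are illustrative placeholders, not the authors' actual selectors; the real extraction keyed on the specific HTML Google served at scrape time):

```python
from bs4 import BeautifulSoup

# Illustrative placeholder markers, not the authors' actual selectors.
TYPE_MARKERS = {
    "unit_conv": "unit-converter",
    "curr_conv": "currency-converter",
    "knowledge": "knowledge-panel",
}


def infer_answer_type(answer_box_html):
    """Map an answer box's HTML to one of the GooAQ answer types."""
    soup = BeautifulSoup(answer_box_html, "html.parser")
    for answer_type, marker in TYPE_MARKERS.items():
        if soup.find(attrs={"data-widget": marker}) is not None:
            return answer_type
    # List-like markup suggests a collection (steps, ingredients, ...).
    if soup.find(["ol", "ul"]) is not None:
        return "collection"
    # Default: a highlighted explanatory snippet.
    return "feat_snip"
```
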
#### Who are the source language producers?

See the Initial Data Collection and Normalization section above.

### Annotations

#### Annotation process

See the sections above.

#### Who are the annotators?

Since the task is focused on English, the authors required workers to be based in a country with a population predominantly of native English speakers (e.g., USA, Canada, UK, and Australia) and to have completed at least 5000 HITs with a ≥ 99% assignment approval rate. Additionally, annotators had to pass a qualification test of half a dozen questions, all of which needed to be answered correctly.

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

To prevent biased judgements, the annotators were also asked to avoid using Google search (which is what was used to mine GOOAQ) when annotating the quality of the shown instances.

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License.

### Citation Information

```
@article{gooaq2021,
  title={GooAQ: Open Question Answering with Diverse Answer Types},
  author={Khashabi, Daniel and Ng, Amos and Khot, Tushar and Sabharwal, Ashish and Hajishirzi, Hannaneh and Callison-Burch, Chris},
  journal={arXiv preprint},
  year={2021}
}
```

### Contributions

Thanks to [@bhavitvyamalik](https://github.com/bhavitvyamalik) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
{"default": {"description": "GooAQ is a large-scale dataset with a variety of answer types. This dataset contains over\n5 million questions and 3 million answers collected from Google. GooAQ questions are collected\nsemi-automatically from the Google search engine using its autocomplete feature. This results in\nnaturalistic questions of practical interest that are nonetheless short and expressed using simple\nlanguage. GooAQ answers are mined from Google's responses to our collected questions, specifically from\nthe answer boxes in the search results. This yields a rich space of answer types, containing both\ntextual answers (short and long) as well as more structured ones such as collections.\n", "citation": "@article{gooaq2021,\n  title={GooAQ: Open Question Answering with Diverse Answer Types},\n  author={Khashabi, Daniel and Ng, Amos and Khot, Tushar and Sabharwal, Ashish and Hajishirzi, Hannaneh and Callison-Burch, Chris},\n  journal={arXiv preprint},\n  year={2021}\n}\n", "homepage": "https://github.com/allenai/gooaq", "license": "Licensed under the Apache License, Version 2.0", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "short_answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_type": {"num_classes": 6, "names": ["feat_snip", "collection", "knowledge", "unit_conv", "time_conv", "curr_conv"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "gooaq", "config_name": "default", "version": {"version_str": "1.1.0", "description": null, "major": 1, "minor": 1, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1102827066, "num_examples": 5030530, "dataset_name": "gooaq"}}, "download_checksums": {"https://github.com/allenai/gooaq/raw/main/data/qoogle.jsonl": {"num_bytes": 1467162788, "checksum": "7c57029dbac90db21c7abcb3dcdbf9cd9f83f9a1d24815a2d8c0663fe13e4a17"}}, "download_size": 1467162788, "post_processing_size": null, "dataset_size": 1102827066, "size_in_bytes": 2569989854}}
dummy/1.1.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:996f63a67e25d9b12a5f5c441d638071a9d198e6915f658cb1550ab4361a1e0b
size 428
gooaq.py ADDED
@@ -0,0 +1,121 @@
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""GooAQ - Question-answers, collected from Google"""


import json

import datasets


_CITATION = """\
@article{gooaq2021,
  title={GooAQ: Open Question Answering with Diverse Answer Types},
  author={Khashabi, Daniel and Ng, Amos and Khot, Tushar and Sabharwal, Ashish and Hajishirzi, Hannaneh and Callison-Burch, Chris},
  journal={arXiv preprint},
  year={2021}
}
"""

_DESCRIPTION = """\
GooAQ is a large-scale dataset with a variety of answer types. This dataset contains over
5 million questions and 3 million answers collected from Google. GooAQ questions are collected
semi-automatically from the Google search engine using its autocomplete feature. This results in
naturalistic questions of practical interest that are nonetheless short and expressed using simple
language. GooAQ answers are mined from Google's responses to our collected questions, specifically from
the answer boxes in the search results. This yields a rich space of answer types, containing both
textual answers (short and long) as well as more structured ones such as collections.
"""

_HOMEPAGE = "https://github.com/allenai/gooaq"

_LICENSE = "Licensed under the Apache License, Version 2.0"

_URL = "https://github.com/allenai/gooaq/raw/main/data/qoogle.jsonl"


class Gooaq(datasets.GeneratorBasedBuilder):
    """GooAQ - Question-answers, collected from Google"""

    VERSION = datasets.Version("1.1.0")

    def _info(self):
        features = datasets.Features(
            {
                "id": datasets.Value("int32"),
                "question": datasets.Value("string"),
                "short_answer": datasets.Value("string"),
                "answer": datasets.Value("string"),
                "answer_type": datasets.features.ClassLabel(
                    names=["feat_snip", "collection", "knowledge", "unit_conv", "time_conv", "curr_conv"]
                ),
            }
        )
        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=_DESCRIPTION,
            # This defines the different columns of the dataset and their types
            features=features,
            # If there's a common (input, target) tuple from the features,
            # specify them here. They'll be used if as_supervised=True in
            # builder.as_dataset.
            supervised_keys=None,
            # Homepage of the dataset for documentation
            homepage=_HOMEPAGE,
            # License for the dataset if available
            license=_LICENSE,
            # Citation for the dataset
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""

        data_dir = dl_manager.download(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepath": data_dir,
                    "split": "train",
                },
            ),
        ]

    def _generate_examples(
        self, filepath, split  # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
    ):
        dominant_classes = ["feat_snip", "collection", "knowledge", "unit_conv", "time_conv", "curr_conv"]

        with open(filepath, encoding="utf-8") as f:
            for id_, row in enumerate(f):
                data = json.loads(row)

                # Instances outside the six dominant classes get the sentinel label -1.
                if data["answer_type"] not in dominant_classes:
                    yield id_, {
                        "id": data["id"],
                        "question": data["question"],
                        "short_answer": data["short_answer"],
                        "answer": data["answer"],
                        "answer_type": -1,
                    }
                else:
                    yield id_, {
                        "id": data["id"],
                        "question": data["question"],
                        "short_answer": data["short_answer"],
                        "answer": data["answer"],
                        "answer_type": data["answer_type"],
                    }
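
Given the ~1.5 GB source file recorded in `dataset_infos.json`, a hedged usage sketch: recent `datasets` releases can usually stream script-based loaders like this one, avoiding the full download (streaming support is an assumption here; older releases such as the 1.7.0 this commit tracks download the whole file).

```python
from datasets import load_dataset

# Stream examples instead of downloading the full ~1.5 GB JSONL file.
# Requires a datasets release with streaming support.
gooaq_stream = load_dataset("gooaq", split="train", streaming=True)

for example in gooaq_stream:
    print(example["question"])
    print(example["answer_type"])  # integer ClassLabel id, or -1 for "other"
    break
```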