Datasets · Modalities: Text · Formats: parquet · Languages: English · Libraries: Datasets, Dask

parquet-converter committed
Commit 007bc7f
1 Parent(s): ebd2dc1

Update parquet files
README.md DELETED
@@ -1,224 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- language:
- - en
- language_creators:
- - found
- license:
- - cc-by-nc-sa-3.0
- multilinguality:
- - monolingual
- pretty_name: Answer Sentence Natural Questions (ASNQ)
- size_categories:
- - 10M<n<100M
- source_datasets:
- - extended|natural_questions
- task_categories:
- - multiple-choice
- task_ids:
- - multiple-choice-qa
- paperswithcode_id: asnq
- dataset_info:
-   features:
-   - name: question
-     dtype: string
-   - name: sentence
-     dtype: string
-   - name: label
-     dtype:
-       class_label:
-         names:
-           0: neg
-           1: pos
-   - name: sentence_in_long_answer
-     dtype: bool
-   - name: short_answer_in_sentence
-     dtype: bool
-   splits:
-   - name: train
-     num_bytes: 3656881376
-     num_examples: 20377568
-   - name: validation
-     num_bytes: 168005155
-     num_examples: 930062
-   download_size: 3563857920
-   dataset_size: 3824886531
- ---
-
- # Dataset Card for "asnq"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [https://github.com/alexa/wqa_tanda#answer-sentence-natural-questions-asnq](https://github.com/alexa/wqa_tanda#answer-sentence-natural-questions-asnq)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection](https://arxiv.org/abs/1911.04118)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 3398.76 MB
- - **Size of the generated dataset:** 3647.70 MB
- - **Total amount of disk used:** 7046.46 MB
-
- ### Dataset Summary
-
- ASNQ is a dataset for answer sentence selection derived from
- Google's Natural Questions (NQ) dataset (Kwiatkowski et al. 2019).
-
- Each example contains a question, a candidate sentence, a label indicating whether or not
- the sentence answers the question, and two additional features --
- sentence_in_long_answer and short_answer_in_sentence -- indicating whether or not the
- candidate sentence is contained in the long_answer and whether the short_answer is in the candidate sentence.
-
- For more details please see
- https://arxiv.org/abs/1911.04118
-
- and
-
- https://research.google/pubs/pub47761/
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### default
-
- - **Size of downloaded dataset files:** 3398.76 MB
- - **Size of the generated dataset:** 3647.70 MB
- - **Total amount of disk used:** 7046.46 MB
-
- An example of 'validation' looks as follows.
- ```
- {
-     "label": 0,
-     "question": "when did somewhere over the rainbow come out",
-     "sentence": "In films and TV shows ( edit ) In the film Third Finger , Left Hand ( 1940 ) with Myrna Loy , Melvyn Douglas , and Raymond Walburn , the tune played throughout the film in short sequences .",
-     "sentence_in_long_answer": false,
-     "short_answer_in_sentence": false
- }
- ```
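The validation example above can be checked against the card's schema with plain Python. A minimal, illustrative sketch (the checker itself is not part of the dataset; field names and types come from the card, with `label` stored as a `ClassLabel` integer index):

```python
# Illustrative schema check for an ASNQ record, based on the fields in the card.
EXPECTED_TYPES = {
    "question": str,
    "sentence": str,
    "label": int,  # ClassLabel index: 0 = neg, 1 = pos
    "sentence_in_long_answer": bool,
    "short_answer_in_sentence": bool,
}

def is_valid_record(record: dict) -> bool:
    """Return True if the record has exactly the expected fields and types."""
    if set(record) != set(EXPECTED_TYPES):
        return False
    return all(isinstance(record[name], t) for name, t in EXPECTED_TYPES.items())

example = {
    "label": 0,
    "question": "when did somewhere over the rainbow come out",
    "sentence": "In films and TV shows ( edit ) In the film Third Finger , "
                "Left Hand ( 1940 ) ... the tune played throughout the film .",
    "sentence_in_long_answer": False,
    "short_answer_in_sentence": False,
}
print(is_valid_record(example))  # → True
```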
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### default
- - `question`: a `string` feature.
- - `sentence`: a `string` feature.
- - `label`: a classification label, with possible values including `neg` (0), `pos` (1).
- - `sentence_in_long_answer`: a `bool` feature.
- - `short_answer_in_sentence`: a `bool` feature.
-
- ### Data Splits
-
- | name  |   train|validation|
- |-------|-------:|---------:|
- |default|20377568|    930062|
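The split counts in the table can be cross-checked against the byte counts in the YAML metadata at the top of the card; a quick sanity check (all numbers copied from the metadata above):

```python
# Split metadata copied from the dataset card's YAML header.
splits = {
    "train": {"num_bytes": 3_656_881_376, "num_examples": 20_377_568},
    "validation": {"num_bytes": 168_005_155, "num_examples": 930_062},
}

# The per-split byte counts should sum to the declared dataset_size.
dataset_size = sum(s["num_bytes"] for s in splits.values())
total_examples = sum(s["num_examples"] for s in splits.values())
print(dataset_size)    # → 3824886531, matching dataset_size in the metadata
print(total_examples)  # → 21307630
```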
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- The data is made available under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License:
- https://github.com/alexa/wqa_tanda/blob/master/LICENSE
-
- ### Citation Information
-
- ```
- @article{Garg_2020,
-    title={TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection},
-    volume={34},
-    ISSN={2159-5399},
-    url={http://dx.doi.org/10.1609/AAAI.V34I05.6282},
-    DOI={10.1609/aaai.v34i05.6282},
-    number={05},
-    journal={Proceedings of the AAAI Conference on Artificial Intelligence},
-    publisher={Association for the Advancement of Artificial Intelligence (AAAI)},
-    author={Garg, Siddhant and Vu, Thuy and Moschitti, Alessandro},
-    year={2020},
-    month={Apr},
-    pages={7780–7788}
- }
- ```
-
- ### Contributions
-
- Thanks to [@mkserge](https://github.com/mkserge) for adding this dataset.
asnq.py DELETED
@@ -1,150 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """Answer-Sentence Natural Questions (ASNQ)
-
- ASNQ is a dataset for answer sentence selection derived from Google's
- Natural Questions (NQ) dataset (Kwiatkowski et al. 2019). It converts
- NQ's dataset into an AS2 (answer-sentence-selection) format.
-
- The dataset details can be found in the paper at
- https://arxiv.org/abs/1911.04118
-
- The dataset can be downloaded at
- https://wqa-public.s3.amazonaws.com/tanda-aaai-2020/data/asnq.tar
- """
-
-
- import csv
- import os
-
- import datasets
-
-
- _CITATION = """\
- @article{garg2019tanda,
-     title={TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection},
-     author={Siddhant Garg and Thuy Vu and Alessandro Moschitti},
-     year={2019},
-     eprint={1911.04118},
- }
- """
-
- _DESCRIPTION = """\
- ASNQ is a dataset for answer sentence selection derived from
- Google's Natural Questions (NQ) dataset (Kwiatkowski et al. 2019).
-
- Each example contains a question, candidate sentence, label indicating whether or not
- the sentence answers the question, and two additional features --
- sentence_in_long_answer and short_answer_in_sentence -- indicating whether or not the
- candidate sentence is contained in the long_answer and whether the short_answer is in the candidate sentence.
-
- For more details please see
- https://arxiv.org/pdf/1911.04118.pdf
-
- and
-
- https://research.google/pubs/pub47761/
- """
-
- _URL = "https://wqa-public.s3.amazonaws.com/tanda-aaai-2020/data/asnq.tar"
-
-
- class ASNQ(datasets.GeneratorBasedBuilder):
-     """ASNQ is a dataset for answer sentence selection derived from
-     Google's Natural Questions (NQ) dataset (Kwiatkowski et al. 2019).
-
-     The dataset details can be found in the paper:
-     https://arxiv.org/abs/1911.04118
-     """
-
-     VERSION = datasets.Version("1.0.0")
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             # This is the description that will appear on the datasets page.
-             description=_DESCRIPTION,
-             # This defines the different columns of the dataset and their types
-             features=datasets.Features(
-                 {
-                     "question": datasets.Value("string"),
-                     "sentence": datasets.Value("string"),
-                     "label": datasets.ClassLabel(names=["neg", "pos"]),
-                     "sentence_in_long_answer": datasets.Value("bool"),
-                     "short_answer_in_sentence": datasets.Value("bool"),
-                 }
-             ),
-             # No default supervised_keys
-             supervised_keys=None,
-             # Homepage of the dataset for documentation
-             homepage="https://github.com/alexa/wqa_tanda#answer-sentence-natural-questions-asnq",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # dl_manager is a datasets.download.DownloadManager that can be used to
-         # download and extract URLs
-         dl_dir = dl_manager.download_and_extract(_URL)
-         data_dir = os.path.join(dl_dir, "data", "asnq")
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": os.path.join(data_dir, "train.tsv"),
-                     "split": "train",
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 # These kwargs will be passed to _generate_examples
-                 gen_kwargs={
-                     "filepath": os.path.join(data_dir, "dev.tsv"),
-                     "split": "dev",
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, filepath, split):
-         """Yields examples.
-
-         The original dataset contains labels '1', '2', '3' and '4', with labels
-         '1', '2' and '3' considered negative (sentence does not answer the question),
-         and label '4' considered positive (sentence does answer the question).
-         We map these labels to two classes, returning the other properties as additional
-         features."""
-         # Mapping of the dataset's original labels to a tuple of
-         # (label, sentence_in_long_answer, short_answer_in_sentence)
-         label_map = {
-             "1": ("neg", False, False),
-             "2": ("neg", False, True),
-             "3": ("neg", True, False),
-             "4": ("pos", True, True),
-         }
-         with open(filepath, encoding="utf-8") as tsvfile:
-             tsvreader = csv.reader(tsvfile, delimiter="\t")
-             for id_, row in enumerate(tsvreader):
-                 question, sentence, orig_label = row
-                 label, sentence_in_long_answer, short_answer_in_sentence = label_map[orig_label]
-                 yield id_, {
-                     "question": question,
-                     "sentence": sentence,
-                     "label": label,
-                     "sentence_in_long_answer": sentence_in_long_answer,
-                     "short_answer_in_sentence": short_answer_in_sentence,
-                 }
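The four-way label mapping in `_generate_examples` can be exercised without downloading anything, by feeding the same TSV-row logic an in-memory string. A self-contained sketch (the sample rows are made up for illustration; the mapping itself matches the script above):

```python
import csv
import io

# Same mapping as in the loading script: original label ->
# (label, sentence_in_long_answer, short_answer_in_sentence)
LABEL_MAP = {
    "1": ("neg", False, False),
    "2": ("neg", False, True),
    "3": ("neg", True, False),
    "4": ("pos", True, True),
}

def generate_examples(tsv_text):
    """Yield (id, example) pairs from TSV text shaped like ASNQ's train/dev files."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    for id_, (question, sentence, orig_label) in enumerate(reader):
        label, in_long, short_in = LABEL_MAP[orig_label]
        yield id_, {
            "question": question,
            "sentence": sentence,
            "label": label,
            "sentence_in_long_answer": in_long,
            "short_answer_in_sentence": short_in,
        }

# Hypothetical rows: one positive (label 4), one negative (label 1).
sample = (
    "who wrote hamlet\tHamlet was written by Shakespeare .\t4\n"
    "who wrote hamlet\tMacbeth is another tragedy .\t1\n"
)
examples = dict(generate_examples(sample))
print(examples[0]["label"])  # → pos
```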
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "ASNQ is a dataset for answer sentence selection derived from\nGoogle's Natural Questions (NQ) dataset (Kwiatkowski et al. 2019).\n\nEach example contains a question, candidate sentence, label indicating whether or not\nthe sentence answers the question, and two additional features -- \nsentence_in_long_answer and short_answer_in_sentence indicating whether or not the \ncandidate sentence is contained in the long_answer and if the short_answer is in the candidate sentence.\n\nFor more details please see \nhttps://arxiv.org/pdf/1911.04118.pdf\n\nand \n\nhttps://research.google/pubs/pub47761/\n", "citation": "@article{garg2019tanda,\n title={TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection},\n author={Siddhant Garg and Thuy Vu and Alessandro Moschitti},\n year={2019},\n eprint={1911.04118},\n}\n", "homepage": "https://github.com/alexa/wqa_tanda#answer-sentence-natural-questions-asnq", "license": "", "features": {"question": {"dtype": "string", "id": null, "_type": "Value"}, "sentence": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 2, "names": ["neg", "pos"], "names_file": null, "id": null, "_type": "ClassLabel"}, "sentence_in_long_answer": {"dtype": "bool", "id": null, "_type": "Value"}, "short_answer_in_sentence": {"dtype": "bool", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "asnq", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 3656881376, "num_examples": 20377568, "dataset_name": "asnq"}, "validation": {"name": "validation", "num_bytes": 168005155, "num_examples": 930062, "dataset_name": "asnq"}}, "download_checksums": {"https://wqa-public.s3.amazonaws.com/tanda-aaai-2020/data/asnq.tar": {"num_bytes": 3563857920, "checksum": "4211d3e507e7cfa345a9eea3c5222b7d79fd963cf27407555c5558c37344ddf1"}}, "download_size": 3563857920, "post_processing_size": null, "dataset_size": 3824886531, "size_in_bytes": 7388744451}}
default/asnq-train-00000-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:946a680c82fa92f0efd56e0aae379b32c4e190bb8521e72f90b28d5a4a84d667
+ size 327334748
default/asnq-train-00001-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:11be1258d9ac6a1b33070be81a1c305791e6f255ea34886f21fd1e798ecb7ec6
+ size 327389864
default/asnq-train-00002-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b59fc6d4b0cffa5c855a08cb4ba2278c16b61ab523fbf035a56879cb6549e03
+ size 327187584
default/asnq-train-00003-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa7c9b67da7e0e8d88a89e91a263c97b69664aaf8c63f40908aa4dce6d0096e6
+ size 327236746
default/asnq-train-00004-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ac3f17f7a6a226aa8c4081a397e4ff5a9b649a270b498911cc5f09ebcddef7b0
+ size 327318754
default/asnq-train-00005-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6a00241fb25815b9449937c2fe7bfc3d33e8a4596b69a8317846495eee33ffc4
+ size 327287637
default/asnq-train-00006-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:84251e7a5d8303bcf57ede22d16fd1cd5d02f1eafa956e16510c5396a0e865ba
+ size 327390138
default/asnq-train-00007-of-00008.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ff842712471beb47a4b80cd96b2069c121d64512c51df2f94d5fb93995b6be11
+ size 102243979
default/asnq-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:08ced751e17f0a2a37db435273e8ff0b598f5641d14b97bd7a31fcc9856c008a
+ size 103511033
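The parquet shards in this commit are stored as Git LFS pointer files: three `key value` lines giving the spec version, a `sha256` object id, and the payload size in bytes. A minimal parser sketch (the pointer text is copied from the validation shard above):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    fields["size"] = int(fields["size"])  # size is the payload byte count
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:08ced751e17f0a2a37db435273e8ff0b598f5641d14b97bd7a31fcc9856c008a
size 103511033
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # → 103511033
```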