Sagnik Ray Choudhury committed
Commit 1b645bf
0 Parent(s):

feat: first commit

Files changed (3)
  1. README.md +144 -0
  2. dataset_infos.json +1 -0
  3. quasar.py +356 -0
README.md ADDED
@@ -0,0 +1,144 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - found
+ languages:
+ - en-US
+ licenses:
+ - bsd-3-clause
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ -
+ task_categories:
+ - question-answering
+ task_ids:
+ - open-domain-qa
+ - extractive-qa
+ paperswithcode_id: quasar
+ ---
+
+ # Dataset Card for Quasar
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [GitHub](https://github.com/bdhingra/quasar)
+ - **Repository:** [GitHub](https://github.com/bdhingra/quasar)
+ - **Paper:** [Quasar: Datasets for Question Answering by Search and Reading](https://arxiv.org/abs/1707.03904)
+ - **Leaderboard:** N/A
+ - **Point of Contact:** -
+
+ ### Dataset Summary
+
+ [More Information Needed]
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ [More Information Needed]
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ [More Information Needed]
+
+ ### Data Splits
+
+ [More Information Needed]
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ [More Information Needed]
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ [More Information Needed]
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [More Information Needed]
+
+ ### Citation Information
+
+ [More Information Needed]
+
+ ### Contributions
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"quasar-s": {"description": "We present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. The Quasar-S dataset consists of 37000 cloze-style (fill-in-the-gap) queries constructed from definitions of software entity tags on the popular website Stack Overflow. The posts and comments on the website serve as the background corpus for answering the cloze questions. The Quasar-T dataset consists of 43000 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 serves as the background corpus for extracting these answers. We pose these datasets as a challenge for two related subtasks of factoid Question Answering: (1) searching for relevant pieces of text that include the correct answer to a query, and (2) reading the retrieved text to answer the query. \n", "citation": "@article{dhingra2017quasar,\n title={Quasar: Datasets for Question Answering by Search and Reading},\n author={Dhingra, Bhuwan and Mazaitis, Kathryn and Cohen, William W},\n journal={arXiv preprint arXiv:1707.03904},\n year={2017}\n}\n", "homepage": "https://github.com/bdhingra/quasar", "license": "", "features": {"uid": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "context_short": {"feature": {"confidence": {"dtype": "float32", "id": null, "_type": "Value"}, "content": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "context_long": {"feature": {"confidence": {"dtype": "float32", "id": null, "_type": "Value"}, "content": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "tags": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "relation": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "quasar", "config_name": "quasar-s", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1187409339, "num_examples": 31049, "dataset_name": "quasar"}, "validation": {"name": "validation", "num_bytes": 120067134, "num_examples": 3139, "dataset_name": "quasar"}, "test": {"name": "test", "num_bytes": 120290406, "num_examples": 3174, "dataset_name": "quasar"}}, "download_checksums": {"http://curtis.ml.cmu.edu/datasets/quasar/quasar-s/questions/train_questions.json.gz": {"num_bytes": 1957049, "checksum": "bc1540ca81df8bceb89c78c39e4f734bf19d08fc0e6f0893ae8b69fd7816a202"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-s/contexts/long/train_contexts.json.gz": {"num_bytes": 244642137, "checksum": "3a0bb6294ab54bc96ee3097cb98fbbd1b3e0f990c9a7812bd6d48e27416677b7"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-s/contexts/short/train_contexts.json.gz": {"num_bytes": 122615621, "checksum": "ebb1df435d899d560866daac6f9c91715414d2e6db1ea41f780d6b95780b23a9"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-s/questions/dev_questions.json.gz": {"num_bytes": 195290, "checksum": "ecb287ac7d862af7ced0a9e27320ff12b3a731a2642c18d7d209f0c5cb2d7958"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-s/contexts/long/dev_contexts.json.gz": {"num_bytes": 24782055, "checksum": "6ee2185911add3dbb6b2a3c81bb6d1ddec39d0d5b84607c599ccaacdda427eba"}, 
"http://curtis.ml.cmu.edu/datasets/quasar/quasar-s/contexts/short/dev_contexts.json.gz": {"num_bytes": 12425372, "checksum": "17e726d5ec62847d42eb2bf9b28d370f39ba2212d1b40305f549ffa4308052de"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-s/questions/test_questions.json.gz": {"num_bytes": 192764, "checksum": "eca89e7f63e728d6bdd861f5dc68b5066a648f5a2d84ece730150cd8ceb8a0ca"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-s/contexts/long/test_contexts.json.gz": {"num_bytes": 24707451, "checksum": "2a68d90e137f46d67454fbdc9730b4da70e07c36531a385c37fafbc3753176f2"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-s/contexts/short/test_contexts.json.gz": {"num_bytes": 12382791, "checksum": "a62e667efebb6e168277e93c26668c75d8b39c787b21f6d1fe1768a40ee274c5"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-s/relation_annotations.json": {"num_bytes": 2833, "checksum": "ee8e90131357e0137425fc10c764476d0c12a4245ef9a8c59f4a9836a7be02aa"}}, "download_size": 443903363, "post_processing_size": null, "dataset_size": 1427766879, "size_in_bytes": 1871670242}, "quasar-t": {"description": "We present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. The Quasar-S dataset consists of 37000 cloze-style (fill-in-the-gap) queries constructed from definitions of software entity tags on the popular website Stack Overflow. The posts and comments on the website serve as the background corpus for answering the cloze questions. The Quasar-T dataset consists of 43000 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 serves as the background corpus for extracting these answers. We pose these datasets as a challenge for two related subtasks of factoid Question Answering: (1) searching for relevant pieces of text that include the correct answer to a query, and (2) reading the retrieved text to answer the query. 
\n", "citation": "@article{dhingra2017quasar,\n title={Quasar: Datasets for Question Answering by Search and Reading},\n author={Dhingra, Bhuwan and Mazaitis, Kathryn and Cohen, William W},\n journal={arXiv preprint arXiv:1707.03904},\n year={2017}\n}\n", "homepage": "https://github.com/bdhingra/quasar", "license": "", "features": {"uid": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "context_short": {"feature": {"confidence": {"dtype": "float32", "id": null, "_type": "Value"}, "content": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "context_long": {"feature": {"confidence": {"dtype": "float32", "id": null, "_type": "Value"}, "content": {"dtype": "string", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}, "tags": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_type": {"dtype": "string", "id": null, "_type": "Value"}, "genre": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "quasar", "config_name": "quasar-t", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1973207987, "num_examples": 37012, "dataset_name": "quasar"}, "validation": {"name": "validation", "num_bytes": 159766129, "num_examples": 3000, "dataset_name": "quasar"}, "test": {"name": "test", "num_bytes": 160121123, "num_examples": 3000, "dataset_name": "quasar"}}, "download_checksums": {"http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/questions/train_questions.json.gz": {"num_bytes": 1466304, "checksum": "ab3b68e842793dc3ed31839438986d63b7ef20a94bb347b9cd8644a6527d7840"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/long/train_contexts.json.gz": {"num_bytes": 445078274, "checksum": "aa2b5722f1003736919dca4de64b33f9066154fb357a7010d3a0fb4e11a4d2f2"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/short/train_contexts.json.gz": {"num_bytes": 172850853, "checksum": "76e9bd6c806c36136c55603e2439834863f0ffcb748ae6fe5aaac75d63c47f1b"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/questions/dev_questions.json.gz": {"num_bytes": 121433, "checksum": "d917cdcfef65b700225c41b863cd96b76f6c569f7a356aed60a54dfb7f515bc0"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/long/dev_contexts.json.gz": {"num_bytes": 36267682, "checksum": "56923cf7738e5b12e859eefbf07a29a1bb4cd6ddd0361c34b635f6f3550a825c"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/short/dev_contexts.json.gz": {"num_bytes": 13976824, "checksum": "22bf4221715c9aea49506cb578121ac26eff5c896d6199dfa2173be615241486"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/questions/test_questions.json.gz": {"num_bytes": 121488, "checksum": "0bb161f31aaac93058f0079f87d3be5ec928a74c741bd9d81e1005a3cdf9bd5a"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/long/test_contexts.json.gz": {"num_bytes": 35996879, "checksum": "039d8f57867820b4659b75d03cfd182f2d8179acee5a0b273d8a8e03dcaeadd3"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/short/test_contexts.json.gz": {"num_bytes": 14023655, "checksum": "cb967682e2ab06cad41f7a66c0e737a2e3f825b3684e30137848d7020737635a"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/answer_annotations.json": 
{"num_bytes": 1502, "checksum": "d410026ffe62557d289d7bd2f230c3af7695d6081cee9329db3de9143ad4ac26"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/genre_annotations.json": {"num_bytes": 2635, "checksum": "cc02cb14c8c9ccf8d07c94e2e845612a700277f4b0da578fee8fbc50642ff4f2"}}, "download_size": 719907529, "post_processing_size": null, "dataset_size": 2293095239, "size_in_bytes": 3013002768}, "quasar-t-nps": {"description": "We present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. The Quasar-S dataset consists of 37000 cloze-style (fill-in-the-gap) queries constructed from definitions of software entity tags on the popular website Stack Overflow. The posts and comments on the website serve as the background corpus for answering the cloze questions. The Quasar-T dataset consists of 43000 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 serves as the background corpus for extracting these answers. We pose these datasets as a challenge for two related subtasks of factoid Question Answering: (1) searching for relevant pieces of text that include the correct answer to a query, and (2) reading the retrieved text to answer the query. \n", "citation": "@article{dhingra2017quasar,\n title={Quasar: Datasets for Question Answering by Search and Reading},\n author={Dhingra, Bhuwan and Mazaitis, Kathryn and Cohen, William W},\n journal={arXiv preprint arXiv:1707.03904},\n year={2017}\n}\n", "homepage": "https://github.com/bdhingra/quasar", "license": "", "features": {"uid": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "context_short": {"feature": {"confidence": {"dtype": "float32", "id": null, "_type": "Value"}, "content": {"dtype": "string", "id": null, "_type": "Value"}, "content_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "nps": {"feature": {"content": {"dtype": "string", "id": null, "_type": "Value"}, "start_token_id": {"dtype": "int32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "context_long": {"feature": {"confidence": {"dtype": "float32", "id": null, "_type": "Value"}, "content": {"dtype": "string", "id": null, "_type": "Value"}, "content_tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "nps": {"feature": {"content": {"dtype": "string", "id": null, "_type": "Value"}, "start_token_id": {"dtype": "int32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "tags": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "answer": {"dtype": "string", "id": null, "_type": "Value"}, "answer_type": {"dtype": "string", "id": null, "_type": "Value"}, "genre": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "quasar", "config_name": "quasar-t-nps", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 6428881377, "num_examples": 37012, "dataset_name": "quasar"}, "validation": {"name": "validation", "num_bytes": 520694542, "num_examples": 3000, 
"dataset_name": "quasar"}, "test": {"name": "test", "num_bytes": 521524682, "num_examples": 3000, "dataset_name": "quasar"}}, "download_checksums": {"http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/questions/train_questions.json.gz": {"num_bytes": 1466304, "checksum": "ab3b68e842793dc3ed31839438986d63b7ef20a94bb347b9cd8644a6527d7840"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/long/train_contexts.json.gz": {"num_bytes": 445078274, "checksum": "aa2b5722f1003736919dca4de64b33f9066154fb357a7010d3a0fb4e11a4d2f2"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/short/train_contexts.json.gz": {"num_bytes": 172850853, "checksum": "76e9bd6c806c36136c55603e2439834863f0ffcb748ae6fe5aaac75d63c47f1b"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/questions/dev_questions.json.gz": {"num_bytes": 121433, "checksum": "d917cdcfef65b700225c41b863cd96b76f6c569f7a356aed60a54dfb7f515bc0"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/long/dev_contexts.json.gz": {"num_bytes": 36267682, "checksum": "56923cf7738e5b12e859eefbf07a29a1bb4cd6ddd0361c34b635f6f3550a825c"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/short/dev_contexts.json.gz": {"num_bytes": 13976824, "checksum": "22bf4221715c9aea49506cb578121ac26eff5c896d6199dfa2173be615241486"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/questions/test_questions.json.gz": {"num_bytes": 121488, "checksum": "0bb161f31aaac93058f0079f87d3be5ec928a74c741bd9d81e1005a3cdf9bd5a"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/long/test_contexts.json.gz": {"num_bytes": 35996879, "checksum": "039d8f57867820b4659b75d03cfd182f2d8179acee5a0b273d8a8e03dcaeadd3"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/short/test_contexts.json.gz": {"num_bytes": 14023655, "checksum": "cb967682e2ab06cad41f7a66c0e737a2e3f825b3684e30137848d7020737635a"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/answer_annotations.json": {"num_bytes": 1502, "checksum": "d410026ffe62557d289d7bd2f230c3af7695d6081cee9329db3de9143ad4ac26"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/genre_annotations.json": {"num_bytes": 2635, "checksum": "cc02cb14c8c9ccf8d07c94e2e845612a700277f4b0da578fee8fbc50642ff4f2"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/long/train_nps.json.gz": {"num_bytes": 377526504, "checksum": "99464c6edab03208dbb3b482ac499e9a49df9a69421eb9411cd73dbbae98ec53"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/short/train_nps.json.gz": {"num_bytes": 110182511, "checksum": "a69dbc82d1da05e7a9c0991623b3d3cc211421b8b85260ef9055d4b52adf1feb"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/long/dev_nps.json.gz": {"num_bytes": 30690232, "checksum": "50896eea80865d554f15c9b6b44d5a93903e06b060510dbedb9229567bae5477"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/short/dev_nps.json.gz": {"num_bytes": 8951977, "checksum": "67b99fd89cb6fe7aed9c9666a08b4e84538fd4a5a563c99e1db6d980fdcd1424"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/long/test_nps.json.gz": {"num_bytes": 30594385, "checksum": "0dc158ec67ff5e4ee32961eee6235b6bd90e399e28c130cd96bb17ef17f4f90b"}, "http://curtis.ml.cmu.edu/datasets/quasar/quasar-t/contexts/short/test_nps.json.gz": {"num_bytes": 8920932, "checksum": "3ce4a4fbb9de4f67c8728d5c96c7c077bec6811d508ea6a57862867ed71c7116"}}, "download_size": 1286774070, "post_processing_size": null, "dataset_size": 7471100601, "size_in_bytes": 8757874671}}
quasar.py ADDED
@@ -0,0 +1,356 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Quasar: Datasets for Question Answering by Search and Reading"""
+
+
+ import gzip
+ import datasets
+ import json
+ from collections import defaultdict
+
+ _CITATION = """\
+ @article{dhingra2017quasar,
+  title={Quasar: Datasets for Question Answering by Search and Reading},
+  author={Dhingra, Bhuwan and Mazaitis, Kathryn and Cohen, William W},
+  journal={arXiv preprint arXiv:1707.03904},
+  year={2017}
+ }
+ """
+ _UNKNOWN_RELATION = "UNK_RELATION"
+ _UNKNOWN_ANS_TYPE = "UNK_ANS_TYPE"
+ _UNKNOWN_GENRE = "UNK_GENRE"
+ _QUASAR_S = "quasar-s"
+ _QUASAR_T = "quasar-t"
+ _QUASAR_T_NPS = "quasar-t-nps"
+ _WHITE_SPACE = " "
+ _DESCRIPTION = """\
+ We present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. The Quasar-S dataset consists of 37000 cloze-style (fill-in-the-gap) queries constructed from definitions of software entity tags on the popular website Stack Overflow. The posts and comments on the website serve as the background corpus for answering the cloze questions. The Quasar-T dataset consists of 43000 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 serves as the background corpus for extracting these answers. We pose these datasets as a challenge for two related subtasks of factoid Question Answering: (1) searching for relevant pieces of text that include the correct answer to a query, and (2) reading the retrieved text to answer the query.
+ """
+
+ _HOMEPAGE = "https://github.com/bdhingra/quasar"
+
+ _DATA_URL = "http://curtis.ml.cmu.edu/datasets/quasar"
+
+ QUASAR_S_DESC = """\
+ Quasar-S consists of cloze-style questions over software entities. The following information is provided.
+ uid: Unique id
+ question: Text of the question
+ answer: Text of the answer
+ context_short: List[{confidence: float, content: str}]
+ context_long: The same as context_short, but from a different data source. See the paper for more info.
+ relation: For some questions in Quasar-S, the relation type between the head entity of the cloze question and the answer
+ entity is provided. For the other questions, this field takes the value "UNK_RELATION". For example,
+ [question]: jarjar -- jar jar links http : code.google.com p @placeholder is a utility that
+ makes it easy to repackage java libraries and embed them into your own distribution .,
+ [answer]: jarjar,
+ [relationship]: synonym
+ """
+
+ QUASAR_T_DESC = """\
+ Quasar-T consists of trivia questions. The following information is provided.
+ uid: unique id
+ question: text of the question
+ answer: text of the answer
+ context_short: List[{confidence: float, content: str}]
+ context_long: The same as context_short, but from a different data source. See the paper for more info.
+ answer_type: Whether the answer is a date/time or number. This is known for some answers; for the others, this field
+ takes the value "UNK_ANS_TYPE"
+ genre: Whether the question is from the genre of arts or math/science. This is known for some questions; for the others,
+ this field takes the value "UNK_GENRE"
+ """
+
+ QUASAR_T_NPS_DESC = """\
+ Quasar-T consists of trivia questions. The following information is provided.
+ uid: unique id
+ question: text of the question
+ answer: text of the answer
+ context_short:
+ List[
+     {
+         confidence: float,
+         content: str,
+         content_tokens: List[str],
+         nps: List[{'content': str, 'start_token_id': int}]
+     }
+ ]
+ Here, content_tokens is a whitespace tokenization of content. `nps` are contiguous chunks of NN*-tagged tokens from the
+ context, provided as candidate answers.
+ context_long: The same as context_short, but from a different data source. See the paper for more info.
+ answer_type: Whether the answer is a date/time or number. This is known for some answers; for the others, this field
+ takes the value "UNK_ANS_TYPE"
+ genre: Whether the question is from the genre of arts or math/science. This is known for some questions; for the others,
+ this field takes the value "UNK_GENRE"
+ """
+
+
+ class Quasar(datasets.GeneratorBasedBuilder):
+     """Quasar: Datasets for Question Answering by Search and Reading: https://github.com/bdhingra/quasar"""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name=_QUASAR_S,
+             version=VERSION,
+             description=QUASAR_S_DESC,
+         ),
+         datasets.BuilderConfig(
+             name=_QUASAR_T,
+             version=VERSION,
+             description=QUASAR_T_DESC,
+         ),
+         datasets.BuilderConfig(
+             name=_QUASAR_T_NPS,
+             version=VERSION,
+             description=QUASAR_T_NPS_DESC,
+         )
+     ]
+
+     DEFAULT_CONFIG_NAME = _QUASAR_S
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "uid": datasets.Value("string"),
+                 "question": datasets.Value("string"),
+                 "context_short": datasets.Sequence(
+                     dict(
+                         {
+                             "confidence": datasets.Value("float"),
+                             "content": datasets.Value("string")
+                         }
+                     )),
+                 "context_long": datasets.Sequence(
+                     dict(
+                         {
+                             "confidence": datasets.Value("float"),
+                             "content": datasets.Value("string")
+                         }
+                     )),
+                 "tags": datasets.Sequence(datasets.Value("string")),
+                 "answer": datasets.Value("string"),
+             }
+         )
+         # for some questions in Quasar-S, the relation type between the head entity of the cloze question and the
+         # answer entity is provided. For the other questions, we provide an UNK
+
+         # [relationship]: component-of, [question]: putchar -- anything related to c or @placeholder functions putchar
+         # c or std : : putchar c++ ., [answer]: c++-standard-library
+
+         # [relationship]: synonym, [question]: jarjar -- jar jar links http : code.google.com p @placeholder is a
+         # utility that makes it easy to repackage java libraries and embed them into your own distribution .,
+         # [answer]: jarjar
+
+         # [relationship]: runs-on, [question]: web-audio -- web-audio is a javascript api providing low-level
+         # low-latency audio playback and manipulation functions in html5 capable @placeholder browsers ., [answer]: web
+
+         # [relationship]: used-with, [question]: audio-video-sync -- questions related to synchronization between audio
+         # and @placeholder during creation transmission reception and playback of content with both audio and video .,
+         # [answer]: video
+
+         if self.config.name == _QUASAR_S:
+             features.update({
+                 "relation": datasets.Value("string")
+             })
+         elif self.config.name.startswith(_QUASAR_T):
+             features.update({
+                 "answer_type": datasets.Value("string"),
+                 "genre": datasets.Value("string")
+             })
+         # (only for quasar-T): We also provide contiguous chunks of
+         # NN* tagged tokens from the context as candidate answers.
+         # Again each line corresponds to the question in <split>_questions.json.gz,
+         # in the format:
+         # {
+         #     "nps": [
+         #         ...
+         #         [
+         #             "aerosol spray",
+         #             69,
+         #             29
+         #         ],
+         #     ],
+         #     "uid": "s3q41931"
+         # }
+         #
+         # Each element in "nps" is a list with three elements -
+         # [candidate, context_id, token_id]. The context_id is the index into the
+         # list of context documents, and token_id is the position of the start of
+         # the np in the context, when tokenized by white-space. Both are 0-based
+         # indices.
+         #
+         # If the correct answer is not detected as an NN* chunk we add it to the
+         # list of NPs above. The context_id and token_id are set to -1 in this
+         # case.
+
+         # since this will increase the size by quite a bit, we use a separate configuration for this, called
+         # quasar-t-nps
+         if self.config.name == _QUASAR_T_NPS:
+             for _type in ["short", "long"]:
+                 features[f"context_{_type}"] = datasets.Sequence(
+                     dict(
+                         {
+                             "confidence": datasets.Value("float"),
+                             "content": datasets.Value("string"),
+                             "content_tokens": datasets.Sequence(datasets.Value("string")),
+                             "nps": datasets.Sequence(dict(
+                                 {
+                                     "content": datasets.Value("string"),
+                                     "start_token_id": datasets.Value("int32")
+                                 }
+                             ))
+                         }
+                     )
+                 )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         paths = {}
+         phases = ["train", "dev", "test"]
+         if self.config.name == _QUASAR_S:
+             data_path = f"{_DATA_URL}/{_QUASAR_S}"
+             for phase in phases:
+                 paths[phase] = {
+                     "qa": dl_manager.download(f"{data_path}/questions/{phase}_questions.json.gz"),
+                     "contexts_long": dl_manager.download(f"{data_path}/contexts/long/{phase}_contexts.json.gz"),
+                     "contexts_short": dl_manager.download(f"{data_path}/contexts/short/{phase}_contexts.json.gz"),
+                 }
+             paths["relations"] = dl_manager.download(f"{data_path}/relation_annotations.json")
+         elif self.config.name.startswith(_QUASAR_T):
+             data_path = f"{_DATA_URL}/{_QUASAR_T}"
+             for phase in phases:
+                 paths[phase] = {
+                     "qa": dl_manager.download(f"{data_path}/questions/{phase}_questions.json.gz"),
+                     "contexts_long": dl_manager.download(f"{data_path}/contexts/long/{phase}_contexts.json.gz"),
+                     "contexts_short": dl_manager.download(f"{data_path}/contexts/short/{phase}_contexts.json.gz"),
+                 }
+             paths["answer_types"] = dl_manager.download(f"{data_path}/answer_annotations.json")
+             paths["genres"] = dl_manager.download(f"{data_path}/genre_annotations.json")
+             if self.config.name == _QUASAR_T_NPS:
+                 for phase in phases:
+                     paths[phase].update(
+                         {
+                             "nps_long": dl_manager.download(f"{data_path}/contexts/long/{phase}_nps.json.gz"),
+                             "nps_short": dl_manager.download(f"{data_path}/contexts/short/{phase}_nps.json.gz"),
+                         }
+                     )
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepath": paths, "phase": "train"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"filepath": paths, "phase": "dev"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"filepath": paths, "phase": "test"},
+             ),
+         ]
+
+     @staticmethod
+     def _read_file(path):
+         """
+         read a json.gz file
+         :param path: path to a gzipped file with one JSON object per line
+         :return: an iterator over the parsed JSON objects
+         """
+         with gzip.open(path) as rf:
+             for line in rf:
+                 yield json.loads(line)
+
+     @staticmethod
+     def _invert_dict(_dict):
+         """
+         converts a dict of Dict[str, List[str]] to Dict[str, str], where each key in the new dict is one of the
+         values in the original dict, e.g. {"synonym": ["q1", "q2"]} -> {"q1": "synonym", "q2": "synonym"}
+         :param _dict: the dict to invert
+         :return: the inverted dict
+         """
+         _d = {}
+         for k, v in _dict.items():
+             for _v in v:
+                 _d[_v] = k
+         return _d
+
+     @staticmethod
+     def _get_nps(nps, context_sentences):
+         """
+         attach the candidate noun phrases to their context sentences
+         :param nps: list of [candidate, context_id, token_id] triples
+         :param context_sentences: list of {confidence, content} dicts
+         :return: the context sentences with content_tokens and nps added
+         """
+         np_sentence_dict = defaultdict(list)
+         for candidate, context_id, token_id in nps:
+             np_sentence_dict[context_id].append((candidate, token_id))
+         _context_sentences = [{
+             "confidence": context_sentence["confidence"],
+             "content": context_sentence["content"],
+             "content_tokens": context_sentence["content"].split(_WHITE_SPACE),
+             "nps": [{"content": np[0], "start_token_id": np[1]} for np in np_sentence_dict[index]]
+         } for index, context_sentence in enumerate(context_sentences)]
+         return _context_sentences
+
+     @staticmethod
+     def _get_base_datum(qa, context_long, context_short):
+         uid = qa["uid"]
+         assert context_long["uid"] == uid
+         assert context_short["uid"] == uid
+         context_long = [{"confidence": context[0], "content": context[1]} for context in context_long["contexts"]]
+         context_short = [{"confidence": context[0], "content": context[1]} for context in context_short["contexts"]]
+         return {
+             "uid": qa["uid"],
+             "question": qa["question"],
+             "context_short": context_short,
+             "context_long": context_long,
+             "tags": qa["tags"],
+             "answer": qa["answer"]
+         }
+
+     def _generate_examples(self, filepath, phase):
+         qas = self._read_file(filepath[phase]["qa"])
+         contexts_long = self._read_file(filepath[phase]["contexts_long"])
+         contexts_short = self._read_file(filepath[phase]["contexts_short"])
+         if self.config.name == _QUASAR_S:
+             relations = self._invert_dict(json.load(open(filepath["relations"])))
+             for qa, context_long, context_short in zip(qas, contexts_long, contexts_short):
+                 datum = self._get_base_datum(qa, context_long, context_short)
+                 datum.update({"relation": relations.get(qa["uid"], _UNKNOWN_RELATION)})
+                 yield qa["uid"], datum
+         elif self.config.name == _QUASAR_T:
+             answer_types = self._invert_dict(json.load(open(filepath["answer_types"])))
+             genres = self._invert_dict(json.load(open(filepath["genres"])))
+             for qa, context_long, context_short in zip(qas, contexts_long, contexts_short):
+                 datum = self._get_base_datum(qa, context_long, context_short)
+                 datum.update({"answer_type": answer_types.get(qa["uid"], _UNKNOWN_ANS_TYPE)})
+                 datum.update({"genre": genres.get(qa["uid"], _UNKNOWN_GENRE)})
+                 yield qa["uid"], datum
+         elif self.config.name == _QUASAR_T_NPS:
+             answer_types = self._invert_dict(json.load(open(filepath["answer_types"])))
+             genres = self._invert_dict(json.load(open(filepath["genres"])))
+             nps_long = self._read_file(filepath[phase]["nps_long"])
+             nps_short = self._read_file(filepath[phase]["nps_short"])
+             for qa, context_long, context_short, np_long, np_short in zip(qas, contexts_long, contexts_short, nps_long,
+                                                                           nps_short):
+                 datum = self._get_base_datum(qa, context_long, context_short)
+                 assert np_long["uid"] == qa["uid"]
+                 assert np_short["uid"] == qa["uid"]
+                 datum.update({"answer_type": answer_types.get(qa["uid"], _UNKNOWN_ANS_TYPE)})
+                 datum.update({"genre": genres.get(qa["uid"], _UNKNOWN_GENRE)})
+                 datum["context_long"] = self._get_nps(np_long["nps"], datum["context_long"])
+                 datum["context_short"] = self._get_nps(np_short["nps"], datum["context_short"])
+                 yield qa["uid"], datum
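
As a usage illustration (not part of the commit): with the three files above in one directory, the loader can be exercised locally roughly as follows, assuming a `datasets` release that still supports loading from a local dataset script (newer releases may require `trust_remote_code=True` or drop script support entirely). Note that the downloads are large, from several hundred MB to over 1 GB per configuration according to the metadata above.

```python
from datasets import load_dataset

# Load the validation split of the quasar-s configuration defined in quasar.py.
# "quasar-t" and "quasar-t-nps" are the other two configurations.
quasar_s = load_dataset("./quasar.py", "quasar-s", split="validation")

example = quasar_s[0]
print(example["uid"], example["answer"])
print(example["question"])
# context_short is a Sequence of dicts, so it is materialized as a dict of lists:
# {"confidence": [...], "content": [...]}
print(len(example["context_short"]["content"]), "short contexts retrieved")
```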