Datasets:

Sub-tasks: extractive-qa
Languages: English
Multilinguality: monolingual
Language Creators: crowdsourced
Annotations Creators: crowdsourced
Source Datasets: original
License:
albertvillanova (HF staff) committed
Commit 21cc522
1 Parent(s): 8dda700

Fix bugs in NewsQA dataset (#3734)

* Fix parsing of validated_answers

* Parse also badQuestion field

* Fix Features inside Features

* Refactor script

* Update metadata JSON

* Update dataset card

Commit from https://github.com/huggingface/datasets/commit/6103a627e57e13f4b7ec16fedacbcf7066b31dee

Files changed (3)
  1. README.md +127 -24
  2. dataset_infos.json +1 -1
  3. newsqa.py +65 -97
README.md CHANGED
@@ -78,53 +78,156 @@ English
78
  ### Data Instances
79
 
80
  ```
81
- {'questions': {'answers': [{'sourcerAnswers': [{'e': [297], 'noAnswer': [False], 's': [294]}, {'e': [0], 'noAnswer': [True], 's': [0]}, {'e': [0], 'noAnswer': [True], 's': [0]}]}, {'sourcerAnswers': [{'e': [271], 'noAnswer': [False], 's': [261]}, {'e': [271], 'noAnswer': [False], 's': [258]}, {'e': [271], 'noAnswer': [False], 's': [261]}]}, {'sourcerAnswers': [{'e': [33], 'noAnswer': [False], 's': [26]}, {'e': [0], 'noAnswer': [True], 's': [0]}, {'e': [640], 'noAnswer': [False], 's': [624]}]}, {'sourcerAnswers': [{'e': [218], 'noAnswer': [False], 's': [195]}, {'e': [218], 'noAnswer': [False], 's': [195]}]}, {'sourcerAnswers': [{'e': [0], 'noAnswer': [True], 's': [0]}, {'e': [218, 271], 'noAnswer': [False, False], 's': [195, 232]}, {'e': [0], 'noAnswer': [True], 's': [0]}]}, {'sourcerAnswers': [{'e': [192], 'noAnswer': [False], 's': [129]}, {'e': [151], 'noAnswer': [False], 's': [129]}, {'e': [151], 'noAnswer': [False], 's': [133]}]}, {'sourcerAnswers': [{'e': [218], 'noAnswer': [False], 's': [195]}, {'e': [218], 'noAnswer': [False], 's': [195]}]}, {'sourcerAnswers': [{'e': [297], 'noAnswer': [False], 's': [294]}, {'e': [297], 'noAnswer': [False], 's': [294]}]}, {'sourcerAnswers': [{'e': [297], 'noAnswer': [False], 's': [294]}, {'e': [297], 'noAnswer': [False], 's': [294]}]}], 'consensus': [{'badQuestion': False, 'e': 297, 'noAnswer': False, 's': 294}, {'badQuestion': False, 'e': 271, 'noAnswer': False, 's': 261}, {'badQuestion': False, 'e': 640, 'noAnswer': False, 's': 624}, {'badQuestion': False, 'e': 218, 'noAnswer': False, 's': 195}, {'badQuestion': False, 'e': 218, 'noAnswer': False, 's': 195}, {'badQuestion': False, 'e': 151, 'noAnswer': False, 's': 129}, {'badQuestion': False, 'e': 218, 'noAnswer': False, 's': 195}, {'badQuestion': False, 'e': 297, 'noAnswer': False, 's': 294}, {'badQuestion': False, 'e': 297, 'noAnswer': False, 's': 294}], 'isAnswerAbsent': [0, 0, 0, 0, 0, 0, 0, 0, 0], 'isQuestionBad': [0, 0, 0, 0, 0, 0, 0, 0, 0], 'q': ['What was the amount of children murdered?', 'When was Pandher sentenced to death?', 'The court aquitted Moninder Singh Pandher of what crime?', 'who was acquitted', 'who was sentenced', 'What was Moninder Singh Pandher acquitted for?', 'Who was sentenced to death in February?', 'how many people died', 'How many children and young women were murdered?'], 'validated_answers': [{'sourcerAnswers': [{'count': [0], 'e': [297], 'noAnswer': [False], 's': [294]}, {'count': [0], 'e': [0], 'noAnswer': [True], 's': [0]}, {'count': [0], 'e': [0], 'noAnswer': [True], 's': [0]}]}, {'sourcerAnswers': [{'count': [0], 'e': [271], 'noAnswer': [False], 's': [261]}, {'count': [0], 'e': [271], 'noAnswer': [False], 's': [258]}, {'count': [0], 'e': [271], 'noAnswer': [False], 's': [261]}]}, {'sourcerAnswers': [{'count': [0], 'e': [33], 'noAnswer': [False], 's': [26]}, {'count': [0], 'e': [0], 'noAnswer': [True], 's': [0]}, {'count': [0], 'e': [640], 'noAnswer': [False], 's': [624]}]}, {'sourcerAnswers': [{'count': [0], 'e': [218], 'noAnswer': [False], 's': [195]}, {'count': [0], 'e': [218], 'noAnswer': [False], 's': [195]}]}, {'sourcerAnswers': [{'count': [0], 'e': [0], 'noAnswer': [True], 's': [0]}, {'count': [0, 0], 'e': [218, 271], 'noAnswer': [False, False], 's': [195, 232]}, {'count': [0], 'e': [0], 'noAnswer': [True], 's': [0]}]}, {'sourcerAnswers': [{'count': [0], 'e': [192], 'noAnswer': [False], 's': [129]}, {'count': [0], 'e': [151], 'noAnswer': [False], 's': [129]}, {'count': [0], 'e': [151], 'noAnswer': [False], 's': [133]}]}, {'sourcerAnswers': [{'count': 
[0], 'e': [218], 'noAnswer': [False], 's': [195]}, {'count': [0], 'e': [218], 'noAnswer': [False], 's': [195]}]}, {'sourcerAnswers': [{'count': [0], 'e': [297], 'noAnswer': [False], 's': [294]}, {'count': [0], 'e': [297], 'noAnswer': [False], 's': [294]}]}, {'sourcerAnswers': [{'count': [0], 'e': [297], 'noAnswer': [False], 's': [294]}, {'count': [0], 'e': [297], 'noAnswer': [False], 's': [294]}]}]}, 'storyId': './cnn/stories/42d01e187213e86f5fe617fe32e716ff7fa3afc4.story', 'text': 'NEW DELHI, India (CNN) -- A high court in northern India on Friday acquitted a wealthy businessman facing the death sentence for the killing of a teen in a case dubbed "the house of horrors."\n\n\n\nMoninder Singh Pandher was sentenced to death by a lower court in February.\n\n\n\nThe teen was one of 19 victims -- children and young women -- in one of the most gruesome serial killings in India in recent years.\n\n\n\nThe Allahabad high court has acquitted Moninder Singh Pandher, his lawyer Sikandar B. Kochar told CNN.\n\n\n\nPandher and his domestic employee Surinder Koli were sentenced to death in February by a lower court for the rape and murder of the 14-year-old.\n\n\n\nThe high court upheld Koli\'s death sentence, Kochar said.\n\n\n\nThe two were arrested two years ago after body parts packed in plastic bags were found near their home in Noida, a New Delhi suburb. Their home was later dubbed a "house of horrors" by the Indian media.\n\n\n\nPandher was not named a main suspect by investigators initially, but was summoned as co-accused during the trial, Kochar said.\n\n\n\nKochar said his client was in Australia when the teen was raped and killed.\n\n\n\nPandher faces trial in the remaining 18 killings and could remain in custody, the attorney said.', 'type': 'train'}
82
  ```
83
 
84
  ### Data Fields
85
 
86
-
87
  Configuration: combined-csv
88
- - 'story_id': An identifier of the story
89
- - 'story_text': text of the story
90
  - 'question': A question about the story.
91
- - 'answer_char_ranges': The raw data collected for character based indices to answers in story_text. E.g. 196:228|196:202,217:228|None. Answers from different crowdsourcers are separated by |, within those, multiple selections from the same crowdsourcer are separated by ,. None means the crowdsourcer thought there was no answer to the question in the story. The start is inclusive and the end is exclusive. The end may point to whitespace after a token.
92
 
93
- Configuration: combined-csv
94
  - 'storyId': An identifier of the story.
95
- - 'text': Text of the story
96
- - 'type': Split type - train, validation or test
97
- - 'questions': A list containing the following.
98
- - 'q': A question
99
  - 'isAnswerAbsent': Proportion of crowdsourcers that said there was no answer to the question in the story.
100
  - 'isQuestionBad': Proportion of crowdsourcers that said the question does not make sense.
101
- - 'consensus': The consensus answer. Use this field to pick the best continuous answer span from the text. If you want to know about a question having multiple answers in the text then you can use the more detailed "answers" and "validatedAnswers". The object can have start and end positions like in the example above or can be {"badQuestion": true} or {"noAnswer": true}. Note that there is only one consensus answer since it's based on the majority agreement of the crowdsourcers.
102
- - 's': start of the answer
103
- - 'e': end of the answer
104
  - 'badQuestion': The validator said that the question did not make sense.
105
  - 'noAnswer': The crowdsourcer said that there was no answer to the question in the text.
106
  - 'answers': The answers from various crowdsourcers.
107
  - 'sourcerAnswers': The answer provided from one crowdsourcer.
108
- - 's': start
109
- - 'e': end
 
110
  - 'noAnswer': The crowdsourcer said that there was no answer to the question in the text.
111
  - 'validated_answers': The answers from the validators.
112
- - 'sourcerAnswers': The answer provided from one crowdsourcer.
113
- - 's': start
114
- - 'e': end
115
- - 'noAnswer': The crowdsourcer said that there was no answer to the question in the text.
116
- - 'count': The number of validators that agreed with this answer.
117
 
118
  Configuration: split
119
- - 'story_id': An identifier of the story
120
  - 'story_text': text of the story
121
  - 'question': A question about the story.
122
- - 'answer_token_ranges': Word based indices to answers in story_text. E.g. 196:202,217:228. Multiple selections from the same answer are separated by ,. The start is inclusive and the end is exclusive. The end may point to whitespace after a token.
123
 
124
  ### Data Splits
125
 
126
- split: Train, Validation and Test.
127
- combined-csv and combined-json: train (whole dataset)
128
 
129
  ## Dataset Creation
130
 
78
  ### Data Instances
79
 
80
  ```
81
+ {'storyId': './cnn/stories/42d01e187213e86f5fe617fe32e716ff7fa3afc4.story',
82
+ 'text': 'NEW DELHI, India (CNN) -- A high court in northern India on Friday acquitted a wealthy businessman facing the death sentence for the killing of a teen in a case dubbed "the house of horrors."\n\n\n\nMoninder Singh Pandher was sentenced to death by a lower court in February.\n\n\n\nThe teen was one of 19 victims -- children and young women -- in one of the most gruesome serial killings in India in recent years.\n\n\n\nThe Allahabad high court has acquitted Moninder Singh Pandher, his lawyer Sikandar B. Kochar told CNN.\n\n\n\nPandher and his domestic employee Surinder Koli were sentenced to death in February by a lower court for the rape and murder of the 14-year-old.\n\n\n\nThe high court upheld Koli\'s death sentence, Kochar said.\n\n\n\nThe two were arrested two years ago after body parts packed in plastic bags were found near their home in Noida, a New Delhi suburb. Their home was later dubbed a "house of horrors" by the Indian media.\n\n\n\nPandher was not named a main suspect by investigators initially, but was summoned as co-accused during the trial, Kochar said.\n\n\n\nKochar said his client was in Australia when the teen was raped and killed.\n\n\n\nPandher faces trial in the remaining 18 killings and could remain in custody, the attorney said.',
83
+ 'type': 'train',
84
+ 'questions': {'q': ['What was the amount of children murdered?',
85
+ 'When was Pandher sentenced to death?',
86
+ 'The court aquitted Moninder Singh Pandher of what crime?',
87
+ 'who was acquitted',
88
+ 'who was sentenced',
89
+ 'What was Moninder Singh Pandher acquitted for?',
90
+ 'Who was sentenced to death in February?',
91
+ 'how many people died',
92
+ 'How many children and young women were murdered?'],
93
+ 'isAnswerAbsent': [0, 0, 0, 0, 0, 0, 0, 0, 0],
94
+ 'isQuestionBad': [0, 0, 0, 0, 0, 0, 0, 0, 0],
95
+ 'consensus': [{'s': 294, 'e': 297, 'badQuestion': False, 'noAnswer': False},
96
+ {'s': 261, 'e': 271, 'badQuestion': False, 'noAnswer': False},
97
+ {'s': 624, 'e': 640, 'badQuestion': False, 'noAnswer': False},
98
+ {'s': 195, 'e': 218, 'badQuestion': False, 'noAnswer': False},
99
+ {'s': 195, 'e': 218, 'badQuestion': False, 'noAnswer': False},
100
+ {'s': 129, 'e': 151, 'badQuestion': False, 'noAnswer': False},
101
+ {'s': 195, 'e': 218, 'badQuestion': False, 'noAnswer': False},
102
+ {'s': 294, 'e': 297, 'badQuestion': False, 'noAnswer': False},
103
+ {'s': 294, 'e': 297, 'badQuestion': False, 'noAnswer': False}],
104
+ 'answers': [{'sourcerAnswers': [{'s': [294],
105
+ 'e': [297],
106
+ 'badQuestion': [False],
107
+ 'noAnswer': [False]},
108
+ {'s': [0], 'e': [0], 'badQuestion': [False], 'noAnswer': [True]},
109
+ {'s': [0], 'e': [0], 'badQuestion': [False], 'noAnswer': [True]}]},
110
+ {'sourcerAnswers': [{'s': [261],
111
+ 'e': [271],
112
+ 'badQuestion': [False],
113
+ 'noAnswer': [False]},
114
+ {'s': [258], 'e': [271], 'badQuestion': [False], 'noAnswer': [False]},
115
+ {'s': [261], 'e': [271], 'badQuestion': [False], 'noAnswer': [False]}]},
116
+ {'sourcerAnswers': [{'s': [26],
117
+ 'e': [33],
118
+ 'badQuestion': [False],
119
+ 'noAnswer': [False]},
120
+ {'s': [0], 'e': [0], 'badQuestion': [False], 'noAnswer': [True]},
121
+ {'s': [624], 'e': [640], 'badQuestion': [False], 'noAnswer': [False]}]},
122
+ {'sourcerAnswers': [{'s': [195],
123
+ 'e': [218],
124
+ 'badQuestion': [False],
125
+ 'noAnswer': [False]},
126
+ {'s': [195], 'e': [218], 'badQuestion': [False], 'noAnswer': [False]}]},
127
+ {'sourcerAnswers': [{'s': [0],
128
+ 'e': [0],
129
+ 'badQuestion': [False],
130
+ 'noAnswer': [True]},
131
+ {'s': [195, 232],
132
+ 'e': [218, 271],
133
+ 'badQuestion': [False, False],
134
+ 'noAnswer': [False, False]},
135
+ {'s': [0], 'e': [0], 'badQuestion': [False], 'noAnswer': [True]}]},
136
+ {'sourcerAnswers': [{'s': [129],
137
+ 'e': [192],
138
+ 'badQuestion': [False],
139
+ 'noAnswer': [False]},
140
+ {'s': [129], 'e': [151], 'badQuestion': [False], 'noAnswer': [False]},
141
+ {'s': [133], 'e': [151], 'badQuestion': [False], 'noAnswer': [False]}]},
142
+ {'sourcerAnswers': [{'s': [195],
143
+ 'e': [218],
144
+ 'badQuestion': [False],
145
+ 'noAnswer': [False]},
146
+ {'s': [195], 'e': [218], 'badQuestion': [False], 'noAnswer': [False]}]},
147
+ {'sourcerAnswers': [{'s': [294],
148
+ 'e': [297],
149
+ 'badQuestion': [False],
150
+ 'noAnswer': [False]},
151
+ {'s': [294], 'e': [297], 'badQuestion': [False], 'noAnswer': [False]}]},
152
+ {'sourcerAnswers': [{'s': [294],
153
+ 'e': [297],
154
+ 'badQuestion': [False],
155
+ 'noAnswer': [False]},
156
+ {'s': [294], 'e': [297], 'badQuestion': [False], 'noAnswer': [False]}]}],
157
+ 'validated_answers': [{'s': [0, 294],
158
+ 'e': [0, 297],
159
+ 'badQuestion': [False, False],
160
+ 'noAnswer': [True, False],
161
+ 'count': [1, 2]},
162
+ {'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []},
163
+ {'s': [624],
164
+ 'e': [640],
165
+ 'badQuestion': [False],
166
+ 'noAnswer': [False],
167
+ 'count': [2]},
168
+ {'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []},
169
+ {'s': [195],
170
+ 'e': [218],
171
+ 'badQuestion': [False],
172
+ 'noAnswer': [False],
173
+ 'count': [2]},
174
+ {'s': [129],
175
+ 'e': [151],
176
+ 'badQuestion': [False],
177
+ 'noAnswer': [False],
178
+ 'count': [2]},
179
+ {'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []},
180
+ {'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []},
181
+ {'s': [], 'e': [], 'badQuestion': [], 'noAnswer': [], 'count': []}]}}
182
  ```
183
 
184
  ### Data Fields
185
 
 
186
  Configuration: combined-csv
187
+ - 'story_id': An identifier of the story.
188
+ - 'story_text': Text of the story.
189
  - 'question': A question about the story.
190
+ - 'answer_char_ranges': The raw data collected for character based indices to answers in story_text. E.g. 196:228|196:202,217:228|None. Answers from different crowdsourcers are separated by `|`; within those, multiple selections from the same crowdsourcer are separated by `,`. `None` means the crowdsourcer thought there was no answer to the question in the story. The start is inclusive and the end is exclusive. The end may point to whitespace after a token.
191
 
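The range string can be unpacked per crowdsourcer with a few lines of Python. A minimal sketch (the helper name and the sample value are illustrative, not part of the dataset):

```
def parse_answer_char_ranges(answer_char_ranges):
    """Split a raw 'answer_char_ranges' value into per-crowdsourcer span lists.

    Each crowdsourcer contributes one list of (start, end) character offsets into
    'story_text' (start inclusive, end exclusive); an empty list means that
    crowdsourcer saw no answer ('None').
    """
    per_sourcer = []
    for sourcer in answer_char_ranges.split("|"):
        if sourcer == "None":
            per_sourcer.append([])
            continue
        spans = []
        for span in sourcer.split(","):
            start, end = span.split(":")
            spans.append((int(start), int(end)))
        per_sourcer.append(spans)
    return per_sourcer

# parse_answer_char_ranges("196:228|196:202,217:228|None")
# -> [[(196, 228)], [(196, 202), (217, 228)], []]
```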
192
+ Configuration: combined-json
193
  - 'storyId': An identifier of the story.
194
+ - 'text': Text of the story.
195
+ - 'type': Split type. Will be "train", "validation" or "test".
196
+ - 'questions': A list containing the following:
197
+ - 'q': A question about the story.
198
  - 'isAnswerAbsent': Proportion of crowdsourcers that said there was no answer to the question in the story.
199
  - 'isQuestionBad': Proportion of crowdsourcers that said the question does not make sense.
200
+ - 'consensus': The consensus answer. Use this field to pick the best continuous answer span from the text. If you want to know about a question having multiple answers in the text then you can use the more detailed "answers" and "validated_answers". The object can have start and end positions like in the example above or can be {"badQuestion": true} or {"noAnswer": true}. Note that there is only one consensus answer since it's based on the majority agreement of the crowdsourcers.
201
+ - 's': Start of the answer. The first character of the answer in "text" (inclusive).
202
+ - 'e': End of the answer. The last character of the answer in "text" (exclusive).
203
  - 'badQuestion': The validator said that the question did not make sense.
204
  - 'noAnswer': The crowdsourcer said that there was no answer to the question in the text.
205
  - 'answers': The answers from various crowdsourcers.
206
  - 'sourcerAnswers': The answer provided from one crowdsourcer.
207
+ - 's': Start of the answer. The first character of the answer in "text" (inclusive).
208
+ - 'e': End of the answer. The last character of the answer in "text" (exclusive).
209
+ - 'badQuestion': The crowdsourcer said that the question did not make sense.
210
  - 'noAnswer': The crowdsourcer said that there was no answer to the question in the text.
211
  - 'validated_answers': The answers from the validators.
212
+ - 's': Start of the answer. The first character of the answer in "text" (inclusive).
213
+ - 'e': End of the answer. The last character of the answer in "text" (exclusive).
214
+ - 'badQuestion': The validator said that the question did not make sense.
215
+ - 'noAnswer': The validator said that there was no answer to the question in the text.
216
+ - 'count': The number of validators that agreed with this answer.
217
 
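Taken together, the fields above are enough to recover one gold span per question in the combined-json configuration. A hedged sketch, assuming an example shaped like the one in the Data Instances section (the helper name is illustrative):

```
def consensus_answers(example):
    """Yield (question, answer_text) pairs from one combined-json example.

    The answer is None when the consensus marks the question as bad or unanswerable;
    otherwise 's' and 'e' are character offsets into example['text']
    (start inclusive, end exclusive).
    """
    questions = example["questions"]
    for q, consensus in zip(questions["q"], questions["consensus"]):
        if consensus["badQuestion"] or consensus["noAnswer"]:
            yield q, None
        else:
            yield q, example["text"][consensus["s"]:consensus["e"]]
```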
218
  Configuration: split
219
+ - 'story_id': An identifier of the story
220
  - 'story_text': text of the story
221
  - 'question': A question about the story.
222
+ - 'answer_token_ranges': Word based indices to answers in story_text. E.g. 196:202,217:228. Multiple selections from the same answer are separated by `,`. The start is inclusive and the end is exclusive. The end may point to whitespace after a token.
223
 
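Because these indices are word based, the selected answers can be recovered from a whitespace split of 'story_text'. A small sketch (the whitespace tokenisation and the helper name are assumptions made for illustration):

```
def token_range_answers(story_text, answer_token_ranges):
    """Return the answer strings selected by a value such as '196:202,217:228'."""
    tokens = story_text.split()
    answers = []
    for token_range in answer_token_ranges.split(","):
        start, end = (int(i) for i in token_range.split(":"))
        answers.append(" ".join(tokens[start:end]))  # start inclusive, end exclusive
    return answers
```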
224
  ### Data Splits
225
 
226
+ | name | train | validation | test |
227
+ |---------------|-----------:|-----------:|--------:|
228
+ | combined-csv | 119633 | | |
229
+ | combined-json | 12744 | | |
230
+ | split | 92549 | 5166 | 5126 |
231
 
232
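All three configurations rely on the manually prepared folder, so they are loaded by passing `data_dir`. A usage sketch (the local path is hypothetical; the expected counts are the ones in the table above):

```
import datasets

data_dir = "/path/to/newsqa-data"  # folder produced by the Maluuba/newsqa scripts

split_sets = datasets.load_dataset("newsqa", "split", data_dir=data_dir)
print({name: ds.num_rows for name, ds in split_sets.items()})
# {'train': 92549, 'validation': 5166, 'test': 5126}

combined = datasets.load_dataset("newsqa", "combined-json", data_dir=data_dir)
print(combined["train"].num_rows)  # 12744
```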
  ## Dataset Creation
233
 
dataset_infos.json CHANGED
@@ -1 +1 @@
1
- {"combined-csv": {"description": "NewsQA is a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles.\n", "citation": "@inproceedings{trischler2017newsqa,\n title={NewsQA: A Machine Comprehension Dataset},\n author={Trischler, Adam and Wang, Tong and Yuan, Xingdi and Harris, Justin and Sordoni, Alessandro and Bachman, Philip and Suleman, Kaheer},\n booktitle={Proceedings of the 2nd Workshop on Representation Learning for NLP},\n pages={191--200},\n year={2017}\n}\n\n", "homepage": "https://www.microsoft.com/en-us/research/project/newsqa-dataset/", "license": "NewsQA CodeCopyright (c) Microsoft CorporationAll rights reserved.MIT LicensePermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\u00a9 2020 GitHub, Inc.", "features": {"story_id": {"dtype": "string", "id": null, "_type": "Value"}, "story_text": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answer_char_ranges": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "newsqa", "config_name": "combined-csv", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 465942194, "num_examples": 119633, "dataset_name": "newsqa"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 465942194, "size_in_bytes": 465942194}, "combined-json": {"description": "NewsQA is a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. 
Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles.\n", "citation": "@inproceedings{trischler2017newsqa,\n title={NewsQA: A Machine Comprehension Dataset},\n author={Trischler, Adam and Wang, Tong and Yuan, Xingdi and Harris, Justin and Sordoni, Alessandro and Bachman, Philip and Suleman, Kaheer},\n booktitle={Proceedings of the 2nd Workshop on Representation Learning for NLP},\n pages={191--200},\n year={2017}\n}\n\n", "homepage": "https://www.microsoft.com/en-us/research/project/newsqa-dataset/", "license": "NewsQA CodeCopyright (c) Microsoft CorporationAll rights reserved.MIT LicensePermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\u00a9 2020 GitHub, Inc.", "features": {"storyId": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "type": {"dtype": "string", "id": null, "_type": "Value"}, "questions": {"feature": {"q": {"dtype": "string", "id": null, "_type": "Value"}, "isAnswerAbsent": {"dtype": "int32", "id": null, "_type": "Value"}, "isQuestionBad": {"dtype": "int32", "id": null, "_type": "Value"}, "consensus": {"s": {"dtype": "int32", "id": null, "_type": "Value"}, "e": {"dtype": "int32", "id": null, "_type": "Value"}, "badQuestion": {"dtype": "bool", "id": null, "_type": "Value"}, "noAnswer": {"dtype": "bool", "id": null, "_type": "Value"}}, "answers": {"feature": {"sourcerAnswers": {"feature": {"s": {"dtype": "int32", "id": null, "_type": "Value"}, "e": {"dtype": "int32", "id": null, "_type": "Value"}, "noAnswer": {"dtype": "bool", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "validated_answers": {"feature": {"sourcerAnswers": {"feature": {"s": {"dtype": "int32", "id": null, "_type": "Value"}, "e": {"dtype": "int32", "id": null, "_type": "Value"}, "noAnswer": {"dtype": "bool", "id": null, "_type": "Value"}, "count": {"dtype": "int32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "builder_name": "newsqa", "config_name": "combined-json", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 74492925, "num_examples": 12744, "dataset_name": "newsqa"}}, "download_checksums": {}, "download_size": 0, 
"post_processing_size": null, "dataset_size": 74492925, "size_in_bytes": 74492925}, "split": {"description": "NewsQA is a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles.\n", "citation": "@inproceedings{trischler2017newsqa,\n title={NewsQA: A Machine Comprehension Dataset},\n author={Trischler, Adam and Wang, Tong and Yuan, Xingdi and Harris, Justin and Sordoni, Alessandro and Bachman, Philip and Suleman, Kaheer},\n booktitle={Proceedings of the 2nd Workshop on Representation Learning for NLP},\n pages={191--200},\n year={2017}\n}\n\n", "homepage": "https://www.microsoft.com/en-us/research/project/newsqa-dataset/", "license": "NewsQA CodeCopyright (c) Microsoft CorporationAll rights reserved.MIT LicensePermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\u00a9 2020 GitHub, Inc.", "features": {"story_id": {"dtype": "string", "id": null, "_type": "Value"}, "story_text": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answer_token_ranges": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "newsqa", "config_name": "split", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 362031288, "num_examples": 92549, "dataset_name": "newsqa"}, "test": {"name": "test", "num_bytes": 19763673, "num_examples": 5126, "dataset_name": "newsqa"}, "validation": {"name": "validation", "num_bytes": 19862778, "num_examples": 5166, "dataset_name": "newsqa"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 401657739, "size_in_bytes": 401657739}}
1
+ {"combined-csv": {"description": "NewsQA is a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles.\n", "citation": "@inproceedings{trischler2017newsqa,\n title={NewsQA: A Machine Comprehension Dataset},\n author={Trischler, Adam and Wang, Tong and Yuan, Xingdi and Harris, Justin and Sordoni, Alessandro and Bachman, Philip and Suleman, Kaheer},\n booktitle={Proceedings of the 2nd Workshop on Representation Learning for NLP},\n pages={191--200},\n year={2017}\n}\n\n", "homepage": "https://www.microsoft.com/en-us/research/project/newsqa-dataset/", "license": "NewsQA CodeCopyright (c) Microsoft CorporationAll rights reserved.MIT LicensePermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\u00a9 2020 GitHub, Inc.", "features": {"story_id": {"dtype": "string", "id": null, "_type": "Value"}, "story_text": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answer_char_ranges": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "newsqa", "config_name": "combined-csv", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 465942194, "num_examples": 119633, "dataset_name": "newsqa"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 465942194, "size_in_bytes": 465942194}, "combined-json": {"description": "NewsQA is a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. 
Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles.\n", "citation": "@inproceedings{trischler2017newsqa,\n title={NewsQA: A Machine Comprehension Dataset},\n author={Trischler, Adam and Wang, Tong and Yuan, Xingdi and Harris, Justin and Sordoni, Alessandro and Bachman, Philip and Suleman, Kaheer},\n booktitle={Proceedings of the 2nd Workshop on Representation Learning for NLP},\n pages={191--200},\n year={2017}\n}\n\n", "homepage": "https://www.microsoft.com/en-us/research/project/newsqa-dataset/", "license": "NewsQA CodeCopyright (c) Microsoft CorporationAll rights reserved.MIT LicensePermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\u00a9 2020 GitHub, Inc.", "features": {"storyId": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "type": {"dtype": "string", "id": null, "_type": "Value"}, "questions": {"feature": {"q": {"dtype": "string", "id": null, "_type": "Value"}, "isAnswerAbsent": {"dtype": "int32", "id": null, "_type": "Value"}, "isQuestionBad": {"dtype": "int32", "id": null, "_type": "Value"}, "consensus": {"s": {"dtype": "int32", "id": null, "_type": "Value"}, "e": {"dtype": "int32", "id": null, "_type": "Value"}, "badQuestion": {"dtype": "bool", "id": null, "_type": "Value"}, "noAnswer": {"dtype": "bool", "id": null, "_type": "Value"}}, "answers": {"feature": {"sourcerAnswers": {"feature": {"s": {"dtype": "int32", "id": null, "_type": "Value"}, "e": {"dtype": "int32", "id": null, "_type": "Value"}, "badQuestion": {"dtype": "bool", "id": null, "_type": "Value"}, "noAnswer": {"dtype": "bool", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}, "validated_answers": {"feature": {"s": {"dtype": "int32", "id": null, "_type": "Value"}, "e": {"dtype": "int32", "id": null, "_type": "Value"}, "badQuestion": {"dtype": "bool", "id": null, "_type": "Value"}, "noAnswer": {"dtype": "bool", "id": null, "_type": "Value"}, "count": {"dtype": "int32", "id": null, "_type": "Value"}}, "length": -1, "id": null, "_type": "Sequence"}}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "newsqa", "config_name": "combined-json", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 68667276, "num_examples": 12744, 
"dataset_name": "newsqa"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 68667276, "size_in_bytes": 68667276}, "split": {"description": "NewsQA is a challenging machine comprehension dataset of over 100,000 human-generated question-answer pairs. Crowdworkers supply questions and answers based on a set of over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles.\n", "citation": "@inproceedings{trischler2017newsqa,\n title={NewsQA: A Machine Comprehension Dataset},\n author={Trischler, Adam and Wang, Tong and Yuan, Xingdi and Harris, Justin and Sordoni, Alessandro and Bachman, Philip and Suleman, Kaheer},\n booktitle={Proceedings of the 2nd Workshop on Representation Learning for NLP},\n pages={191--200},\n year={2017}\n}\n\n", "homepage": "https://www.microsoft.com/en-us/research/project/newsqa-dataset/", "license": "NewsQA CodeCopyright (c) Microsoft CorporationAll rights reserved.MIT LicensePermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED *AS IS*, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\u00a9 2020 GitHub, Inc.", "features": {"story_id": {"dtype": "string", "id": null, "_type": "Value"}, "story_text": {"dtype": "string", "id": null, "_type": "Value"}, "question": {"dtype": "string", "id": null, "_type": "Value"}, "answer_token_ranges": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "newsqa", "config_name": "split", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 362031288, "num_examples": 92549, "dataset_name": "newsqa"}, "test": {"name": "test", "num_bytes": 19763673, "num_examples": 5126, "dataset_name": "newsqa"}, "validation": {"name": "validation", "num_bytes": 19862778, "num_examples": 5166, "dataset_name": "newsqa"}}, "download_checksums": {}, "download_size": 0, "post_processing_size": null, "dataset_size": 401657739, "size_in_bytes": 401657739}}
newsqa.py CHANGED
@@ -18,6 +18,7 @@
18
  import csv
19
  import json
20
  import os
 
21
 
22
  import datasets
23
 
@@ -78,19 +79,17 @@ class Newsqa(datasets.GeneratorBasedBuilder):
78
 
79
  @property
80
  def manual_download_instructions(self):
81
- return """ Due to legal restrictions with the CNN data and data extraction. The data has to be downloaded from several sources and compiled as per the instructions by Authors. \
82
- Upon obtaining the resulting data folders, it can be loaded easily using the datasets API. \
83
- Please refer to (https://github.com/Maluuba/newsqa) to download data from Microsoft Reseach site (https://msropendata.com/datasets/939b1042-6402-4697-9c15-7a28de7e1321) \
84
- and a CNN datasource (https://cs.nyu.edu/~kcho/DMQA/) and run the scripts present here (https://github.com/Maluuba/newsqa).\
85
- This will generate a folder named "split-data" and a file named "combined-newsqa-data-v1.csv".\
86
- Copy the above folder and the file to a directory where you want to store them locally.\
87
- They must be used to load the dataset via `datasets.load_dataset("narqa", data_dir="<path/to/folder>")."""
 
88
 
89
  def _info(self):
90
-
91
- if (
92
- self.config.name == "combined-csv"
93
- ): # This is the name of the configuration selected in BUILDER_CONFIGS above
94
  features = datasets.Features(
95
  {
96
  "story_id": datasets.Value("string"),
@@ -99,9 +98,7 @@ class Newsqa(datasets.GeneratorBasedBuilder):
99
  "answer_char_ranges": datasets.Value("string"),
100
  }
101
  )
102
- elif (
103
- self.config.name == "combined-json"
104
- ): # This is an example to show how to have different features for "first_domain" and "second_domain"
105
  features = datasets.Features(
106
  {
107
  "storyId": datasets.Value("string"),
@@ -112,20 +109,19 @@ class Newsqa(datasets.GeneratorBasedBuilder):
112
  "q": datasets.Value("string"),
113
  "isAnswerAbsent": datasets.Value("int32"),
114
  "isQuestionBad": datasets.Value("int32"),
115
- "consensus": datasets.Features(
116
- {
117
- "s": datasets.Value("int32"),
118
- "e": datasets.Value("int32"),
119
- "badQuestion": datasets.Value("bool"),
120
- "noAnswer": datasets.Value("bool"),
121
- }
122
- ),
123
  "answers": datasets.features.Sequence(
124
  {
125
  "sourcerAnswers": datasets.features.Sequence(
126
  {
127
  "s": datasets.Value("int32"),
128
  "e": datasets.Value("int32"),
 
129
  "noAnswer": datasets.Value("bool"),
130
  }
131
  ),
@@ -133,21 +129,18 @@ class Newsqa(datasets.GeneratorBasedBuilder):
133
  ),
134
  "validated_answers": datasets.features.Sequence(
135
  {
136
- "sourcerAnswers": datasets.features.Sequence(
137
- {
138
- "s": datasets.Value("int32"),
139
- "e": datasets.Value("int32"),
140
- "noAnswer": datasets.Value("bool"),
141
- "count": datasets.Value("int32"),
142
- }
143
- ),
144
  }
145
  ),
146
  }
147
  ),
148
  }
149
  )
150
- else: # This is the name of the configuration selected in BUILDER_CONFIGS above
151
  features = datasets.Features(
152
  {
153
  "story_id": datasets.Value("string"),
@@ -156,20 +149,12 @@ class Newsqa(datasets.GeneratorBasedBuilder):
156
  "answer_token_ranges": datasets.Value("string"),
157
  }
158
  )
 
159
  return datasets.DatasetInfo(
160
- # This is the description that will appear on the datasets page.
161
  description=_DESCRIPTION,
162
- # This defines the different columns of the dataset and their types
163
- features=features, # Here we define them above because they are different between the two configurations
164
- # If there's a common (input, target) tuple from the features,
165
- # specify them here. They'll be used if as_supervised=True in
166
- # builder.as_dataset.
167
- supervised_keys=None,
168
- # Homepage of the dataset for documentation
169
  homepage=_HOMEPAGE,
170
- # License for the dataset if available
171
  license=_LICENSE,
172
- # Citation for the dataset
173
  citation=_CITATION,
174
  )
175
 
@@ -177,10 +162,6 @@ class Newsqa(datasets.GeneratorBasedBuilder):
177
  """Returns SplitGenerators."""
178
 
179
  path_to_manual_folder = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
180
- combined_file_csv = os.path.join(path_to_manual_folder, "combined-newsqa-data-v1.csv")
181
- combined_file_json = os.path.join(path_to_manual_folder, "combined-newsqa-data-v1.json")
182
- split_files = os.path.join(path_to_manual_folder, "split_data")
183
-
184
  if not os.path.exists(path_to_manual_folder):
185
  raise FileNotFoundError(
186
  f"{path_to_manual_folder} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('newsqa', data_dir=...)` that includes files from the Manual download instructions: {self.manual_download_instructions}"
@@ -190,9 +171,8 @@ class Newsqa(datasets.GeneratorBasedBuilder):
190
  return [
191
  datasets.SplitGenerator(
192
  name=datasets.Split.TRAIN,
193
- # These kwargs will be passed to _generate_examples
194
  gen_kwargs={
195
- "filepath": combined_file_csv,
196
  "split": "combined",
197
  },
198
  )
@@ -201,18 +181,17 @@ class Newsqa(datasets.GeneratorBasedBuilder):
201
  return [
202
  datasets.SplitGenerator(
203
  name=datasets.Split.TRAIN,
204
- # These kwargs will be passed to _generate_examples
205
  gen_kwargs={
206
- "filepath": combined_file_json,
207
  "split": "combined",
208
  },
209
  )
210
  ]
211
  else:
 
212
  return [
213
  datasets.SplitGenerator(
214
  name=datasets.Split.TRAIN,
215
- # These kwargs will be passed to _generate_examples
216
  gen_kwargs={
217
  "filepath": os.path.join(split_files, "train.csv"),
218
  "split": "train",
@@ -220,12 +199,10 @@ class Newsqa(datasets.GeneratorBasedBuilder):
220
  ),
221
  datasets.SplitGenerator(
222
  name=datasets.Split.TEST,
223
- # These kwargs will be passed to _generate_examples
224
  gen_kwargs={"filepath": os.path.join(split_files, "test.csv"), "split": "test"},
225
  ),
226
  datasets.SplitGenerator(
227
  name=datasets.Split.VALIDATION,
228
- # These kwargs will be passed to _generate_examples
229
  gen_kwargs={
230
  "filepath": os.path.join(split_files, "dev.csv"),
231
  "split": "dev",
@@ -255,61 +232,52 @@ class Newsqa(datasets.GeneratorBasedBuilder):
255
  with open(filepath, encoding="utf-8") as f:
256
  d = json.load(f)
257
  data = d["data"]
258
-
259
- for id_, iter in enumerate(data):
260
-
261
  questions = []
262
-
263
- for ques in iter["questions"]:
264
- dict1 = {}
265
- dict1["q"] = ques["q"]
266
  if "isAnswerAbsent" in ques.keys():
267
- dict1["isAnswerAbsent"] = ques["isAnswerAbsent"]
268
  else:
269
- dict1["isAnswerAbsent"] = 0.0
270
  if "isQuestionBad" in ques.keys():
271
- dict1["isQuestionBad"] = ques["isQuestionBad"]
272
  else:
273
- dict1["isQuestionBad"] = 0.0
274
- dict1["consensus"] = {"s": 0, "e": 0, "badQuestion": False, "noAnswer": False}
275
-
276
- for keys in ques["consensus"]:
277
- dict1["consensus"][keys] = ques["consensus"][keys]
278
-
279
  answers = []
280
  for ans in ques["answers"]:
281
- dict2 = {}
282
- dict2["sourcerAnswers"] = []
283
- for index, i in enumerate(ans["sourcerAnswers"]):
284
- dict_temp = {"s": 0, "e": 0, "noAnswer": False}
285
- for keys in i.keys():
286
- dict_temp[keys] = i[keys]
287
- dict2["sourcerAnswers"].append(dict_temp)
288
-
289
- answers.append(dict2)
290
-
291
- dict1["answers"] = answers
292
-
293
- validated_answers = []
294
- for ans in ques["answers"]:
295
- dict2 = {}
296
- dict2["sourcerAnswers"] = []
297
- for index, i in enumerate(ans["sourcerAnswers"]):
298
- dict_temp = {"s": 0, "e": 0, "noAnswer": False, "count": 0}
299
- for keys in i.keys():
300
- dict_temp[keys] = i[keys]
301
-
302
- dict2["sourcerAnswers"].append(dict_temp)
303
-
304
- validated_answers.append(dict2)
305
 
306
- dict1["validated_answers"] = validated_answers
307
- questions.append(dict1)
308
 
309
  yield id_, {
310
- "storyId": iter["storyId"],
311
- "text": iter["text"],
312
- "type": iter["type"],
313
  "questions": questions,
314
  }
315
  else:
18
  import csv
19
  import json
20
  import os
21
+ from textwrap import dedent
22
 
23
  import datasets
24
 
79
 
80
  @property
81
  def manual_download_instructions(self):
82
+ return dedent(
83
+ """\
84
+ Due to legal restrictions with the CNN data and data extraction. The data has to be downloaded from several sources and compiled as per the instructions by Authors.
85
+ Upon obtaining the resulting data folders, it can be loaded easily using the datasets API.
86
+ Please refer to (https://github.com/Maluuba/newsqa) to download data from Microsoft Reseach site (https://msropendata.com/datasets/939b1042-6402-4697-9c15-7a28de7e1321) and a CNN datasource (https://cs.nyu.edu/~kcho/DMQA/) and run the scripts present here (https://github.com/Maluuba/newsqa).
87
+ This will generate a folder named "split-data" and a file named "combined-newsqa-data-v1.csv".
88
+ Copy the above folder and the file to a directory where you want to store them locally."""
89
+ )
90
 
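# A hedged sketch of the manually prepared folder this script expects; the file and
# folder names are the ones used in the download instructions and the loading code
# below, while the root path is illustrative:
#
#   <data_dir>/
#       combined-newsqa-data-v1.csv
#       combined-newsqa-data-v1.json
#       split_data/
#           train.csv
#           dev.csv
#           test.csv
#
# The folder is then passed via `datasets.load_dataset("newsqa", "<config>", data_dir=<data_dir>)`.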
91
  def _info(self):
92
+ if self.config.name == "combined-csv":
93
  features = datasets.Features(
94
  {
95
  "story_id": datasets.Value("string"),
98
  "answer_char_ranges": datasets.Value("string"),
99
  }
100
  )
101
+ elif self.config.name == "combined-json":
102
  features = datasets.Features(
103
  {
104
  "storyId": datasets.Value("string"),
109
  "q": datasets.Value("string"),
110
  "isAnswerAbsent": datasets.Value("int32"),
111
  "isQuestionBad": datasets.Value("int32"),
112
+ "consensus": {
113
+ "s": datasets.Value("int32"),
114
+ "e": datasets.Value("int32"),
115
+ "badQuestion": datasets.Value("bool"),
116
+ "noAnswer": datasets.Value("bool"),
117
+ },
118
  "answers": datasets.features.Sequence(
119
  {
120
  "sourcerAnswers": datasets.features.Sequence(
121
  {
122
  "s": datasets.Value("int32"),
123
  "e": datasets.Value("int32"),
124
+ "badQuestion": datasets.Value("bool"),
125
  "noAnswer": datasets.Value("bool"),
126
  }
127
  ),
129
  ),
130
  "validated_answers": datasets.features.Sequence(
131
  {
132
+ "s": datasets.Value("int32"),
133
+ "e": datasets.Value("int32"),
134
+ "badQuestion": datasets.Value("bool"),
135
+ "noAnswer": datasets.Value("bool"),
136
+ "count": datasets.Value("int32"),
137
  }
138
  ),
139
  }
140
  ),
141
  }
142
  )
143
+ else:
144
  features = datasets.Features(
145
  {
146
  "story_id": datasets.Value("string"),
149
  "answer_token_ranges": datasets.Value("string"),
150
  }
151
  )
152
+
153
  return datasets.DatasetInfo(
 
154
  description=_DESCRIPTION,
155
+ features=features,
156
  homepage=_HOMEPAGE,
 
157
  license=_LICENSE,
 
158
  citation=_CITATION,
159
  )
160
 
162
  """Returns SplitGenerators."""
163
 
164
  path_to_manual_folder = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
165
  if not os.path.exists(path_to_manual_folder):
166
  raise FileNotFoundError(
167
  f"{path_to_manual_folder} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('newsqa', data_dir=...)` that includes files from the Manual download instructions: {self.manual_download_instructions}"
171
  return [
172
  datasets.SplitGenerator(
173
  name=datasets.Split.TRAIN,
 
174
  gen_kwargs={
175
+ "filepath": os.path.join(path_to_manual_folder, "combined-newsqa-data-v1.csv"),
176
  "split": "combined",
177
  },
178
  )
181
  return [
182
  datasets.SplitGenerator(
183
  name=datasets.Split.TRAIN,
 
184
  gen_kwargs={
185
+ "filepath": os.path.join(path_to_manual_folder, "combined-newsqa-data-v1.json"),
186
  "split": "combined",
187
  },
188
  )
189
  ]
190
  else:
191
+ split_files = os.path.join(path_to_manual_folder, "split_data")
192
  return [
193
  datasets.SplitGenerator(
194
  name=datasets.Split.TRAIN,
 
195
  gen_kwargs={
196
  "filepath": os.path.join(split_files, "train.csv"),
197
  "split": "train",
199
  ),
200
  datasets.SplitGenerator(
201
  name=datasets.Split.TEST,
 
202
  gen_kwargs={"filepath": os.path.join(split_files, "test.csv"), "split": "test"},
203
  ),
204
  datasets.SplitGenerator(
205
  name=datasets.Split.VALIDATION,
 
206
  gen_kwargs={
207
  "filepath": os.path.join(split_files, "dev.csv"),
208
  "split": "dev",
232
  with open(filepath, encoding="utf-8") as f:
233
  d = json.load(f)
234
  data = d["data"]
235
+ for id_, item in enumerate(data):
236
+ # questions
 
237
  questions = []
238
+ for ques in item["questions"]:
239
+ question = {"q": ques["q"]}
240
  if "isAnswerAbsent" in ques.keys():
241
+ question["isAnswerAbsent"] = ques["isAnswerAbsent"]
242
  else:
243
+ question["isAnswerAbsent"] = 0.0
244
  if "isQuestionBad" in ques.keys():
245
+ question["isQuestionBad"] = ques["isQuestionBad"]
246
  else:
247
+ question["isQuestionBad"] = 0.0
248
+ question["consensus"] = {"s": 0, "e": 0, "badQuestion": False, "noAnswer": False}
249
+ # consensus
250
+ for key in ques["consensus"]:
251
+ question["consensus"][key] = ques["consensus"][key]
252
+ # answers
253
  answers = []
254
  for ans in ques["answers"]:
255
+ answer = {"sourcerAnswers": []}
256
+ for sourcer_answer in ans["sourcerAnswers"]:
257
+ dict_temp = {"s": 0, "e": 0, "badQuestion": False, "noAnswer": False}
258
+ for key in sourcer_answer.keys():
259
+ dict_temp[key] = sourcer_answer[key]
260
+ answer["sourcerAnswers"].append(dict_temp)
261
+ answers.append(answer)
262
+ question["answers"] = answers
263
+ # validated_answers
264
+ default_validated_answer = {
265
+ "s": 0,
266
+ "e": 0,
267
+ "badQuestion": False,
268
+ "noAnswer": False,
269
+ "count": 0,
270
+ }
271
+ validated_answers = ques.get("validatedAnswers", []) # not always present
272
+ validated_answers = [{**default_validated_answer, **val_ans} for val_ans in validated_answers]
273
+ question["validated_answers"] = validated_answers
274
 
275
+ questions.append(question)
 
276
 
277
  yield id_, {
278
+ "storyId": item["storyId"],
279
+ "text": item["text"],
280
+ "type": item["type"],
281
  "questions": questions,
282
  }
283
  else: