Languages: English
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: found
Annotations Creators: crowdsourced
Source Datasets: original
License: apache-2.0
Commit a722e09 (1 parent: 1dffea6) committed by parquet-converter

Update parquet files
README.md DELETED
@@ -1,255 +0,0 @@
- ---
- annotations_creators:
- - crowdsourced
- language_creators:
- - found
- language:
- - en
- license:
- - apache-2.0
- multilinguality:
- - monolingual
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - text2text-generation
- task_ids:
- - abstractive-qa
- paperswithcode_id: narrativeqa
- pretty_name: NarrativeQA
- dataset_info:
-   features:
-   - name: document
-     struct:
-     - name: id
-       dtype: string
-     - name: kind
-       dtype: string
-     - name: url
-       dtype: string
-     - name: file_size
-       dtype: int32
-     - name: word_count
-       dtype: int32
-     - name: start
-       dtype: string
-     - name: end
-       dtype: string
-     - name: summary
-       struct:
-       - name: text
-         dtype: string
-       - name: tokens
-         sequence: string
-       - name: url
-         dtype: string
-       - name: title
-         dtype: string
-     - name: text
-       dtype: string
-   - name: question
-     struct:
-     - name: text
-       dtype: string
-     - name: tokens
-       sequence: string
-   - name: answers
-     list:
-     - name: text
-       dtype: string
-     - name: tokens
-       sequence: string
-   splits:
-   - name: train
-     num_bytes: 11565035136
-     num_examples: 32747
-   - name: test
-     num_bytes: 3549964281
-     num_examples: 10557
-   - name: validation
-     num_bytes: 1211859490
-     num_examples: 3461
-   download_size: 192528922
-   dataset_size: 16326858907
- ---
-
- # Dataset Card for Narrative QA
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [NarrativeQA Homepage](https://deepmind.com/research/open-source/narrativeqa)
- - **Repository:** [NarrativeQA Repo](https://github.com/deepmind/narrativeqa)
- - **Paper:** [The NarrativeQA Reading Comprehension Challenge](https://arxiv.org/pdf/1712.07040.pdf)
- - **Leaderboard:**
- - **Point of Contact:** [Tomáš Kočiský](mailto:tkocisky@google.com), [Jonathan Schwarz](mailto:schwarzjn@google.com), [Phil Blunsom](mailto:pblunsom@google.com), [Chris Dyer](mailto:cdyer@google.com), [Karl Moritz Hermann](mailto:kmh@google.com), [Gábor Melis](mailto:melisgl@google.com), [Edward Grefenstette](mailto:etg@google.com)
-
- ### Dataset Summary
-
- NarrativeQA is an English-language dataset of stories and corresponding questions designed to test reading comprehension, especially on long documents.
-
- ### Supported Tasks and Leaderboards
-
- The dataset is used to test reading comprehension. The paper proposes two tasks, "summaries only" and "stories only", depending on whether the human-generated summary or the full story text is used to answer the question.
-
- ### Languages
-
- English
-
- ## Dataset Structure
-
- ### Data Instances
-
- A typical data point consists of a question–answer pair along with the summary/story that can be used to answer the question. Additional information, such as the URL, word count, and Wikipedia page, is also provided.
-
- A typical example looks like this:
- ```
- {
-     "document": {
-         "id": "23jncj2n3534563110",
-         "kind": "movie",
-         "url": "https://www.imsdb.com/Movie%20Scripts/Name%20of%20Movie.html",
-         "file_size": 80473,
-         "word_count": 41000,
-         "start": "MOVIE screenplay by",
-         "end": ". THE END",
-         "summary": {
-             "text": "Joe Bloggs begins his journey exploring...",
-             "tokens": ["Joe", "Bloggs", "begins", "his", "journey", "exploring", ...],
-             "url": "http://en.wikipedia.org/wiki/Name_of_Movie",
-             "title": "Name of Movie (film)"
-         },
-         "text": "MOVIE screenplay by John Doe\nSCENE 1..."
-     },
-     "question": {
-         "text": "Where does Joe Bloggs live?",
-         "tokens": ["Where", "does", "Joe", "Bloggs", "live", "?"]
-     },
-     "answers": [
-         {"text": "At home", "tokens": ["At", "home"]},
-         {"text": "His house", "tokens": ["His", "house"]}
-     ]
- }
- ```
-
- ### Data Fields
-
- - `document.id` - Unique ID for the story.
- - `document.kind` - "movie" or "gutenberg", depending on the source of the story.
- - `document.url` - The URL the story was downloaded from.
- - `document.file_size` - File size (in bytes) of the story.
- - `document.word_count` - Number of tokens in the story.
- - `document.start` - First 3 tokens of the story. Used for verifying the story hasn't been modified.
- - `document.end` - Last 3 tokens of the story. Used for verifying the story hasn't been modified.
- - `document.summary.text` - Text of the Wikipedia summary of the story.
- - `document.summary.tokens` - Tokenized version of `document.summary.text`.
- - `document.summary.url` - Wikipedia URL of the summary.
- - `document.summary.title` - Wikipedia title of the summary.
- - `question` - `{"text": "...", "tokens": [...]}` for the question about the story.
- - `answers` - List of `{"text": "...", "tokens": [...]}` entries giving valid answers to the question.
-
- ### Data Splits
-
- The data is split into training, validation, and test sets by story (i.e. the same story cannot appear in more than one split):
-
- | Train | Valid | Test  |
- | ----- | ----- | ----- |
- | 32747 | 3461  | 10557 |
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- Stories and movie scripts were downloaded from [Project Gutenberg](https://www.gutenberg.org) and a range of movie script repositories (mainly [imsdb](http://www.imsdb.com)).
-
- #### Who are the source language producers?
-
- The language producers are the authors of the stories and scripts, as well as Amazon Mechanical Turk workers for the questions.
-
- ### Annotations
-
- #### Annotation process
-
- Amazon Mechanical Turk workers were provided with human-written summaries of the stories (to make the annotation tractable and to lead annotators towards asking non-localized questions). Stories were matched with plot summaries from Wikipedia using titles, and the matching was verified with the help of human annotators. The annotators were asked to determine whether both the story and the summary refer to a movie or a book (as some books are made into movies), or whether they are the same part in a series produced in the same year. Annotators were instructed to write 10 question–answer pairs each, based solely on a given summary, imagining that they were writing questions to test students who had read the full stories but not the summaries. Questions were required to be specific enough, given the length and complexity of the narratives, and to cover a diverse set of topics: characters, events, why things happened, and so on. Annotators were encouraged to use their own words and were prevented from copying. Answers were required to be grammatical and complete, though short answers (one word, a few-word phrase, or a short sentence) were explicitly allowed, since answering with a full sentence is frequently perceived as artificial when asking about factual information. Annotators were asked to avoid extra, unnecessary information in the question or the answer, and to avoid yes/no questions or questions about the author or the actors.
-
- #### Who are the annotators?
-
- Amazon Mechanical Turk workers.
-
- ### Personal and Sensitive Information
-
- None
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed]
-
- ### Licensing Information
-
- The dataset is released under an [Apache-2.0 License](https://github.com/deepmind/narrativeqa/blob/master/LICENSE).
-
- ### Citation Information
-
- ```
- @article{narrativeqa,
-     author = {Tom\'a\v s Ko\v cisk\'y and Jonathan Schwarz and Phil Blunsom and
-               Chris Dyer and Karl Moritz Hermann and G\'abor Melis and
-               Edward Grefenstette},
-     title = {The {NarrativeQA} Reading Comprehension Challenge},
-     journal = {Transactions of the Association for Computational Linguistics},
-     url = {https://TBD},
-     volume = {TBD},
-     year = {2018},
-     pages = {TBD},
- }
- ```
-
- ### Contributions
-
- Thanks to [@ghomasHudson](https://github.com/ghomasHudson) for adding this dataset.
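The nested `document`/`question`/`answers` structure described in the deleted card can be exercised without downloading the corpus. The sketch below builds one record shaped like the schema above (the sample values are illustrative, not taken from the dataset) and flattens it into the question/reference-answers pair an evaluation loop typically needs:

```python
# A record shaped like the NarrativeQA schema from the card above.
# All values are made-up placeholders; only the structure is from the card.
record = {
    "document": {
        "id": "23jncj2n3534563110",
        "kind": "movie",
        "word_count": 41000,
        "summary": {
            "text": "Joe Bloggs begins his journey exploring...",
            "tokens": ["Joe", "Bloggs", "begins", "his", "journey", "exploring"],
            "title": "Name of Movie (film)",
        },
        "text": "MOVIE screenplay by John Doe\nSCENE 1...",
    },
    "question": {
        "text": "Where does Joe Bloggs live?",
        "tokens": ["Where", "does", "Joe", "Bloggs", "live", "?"],
    },
    "answers": [
        {"text": "At home", "tokens": ["At", "home"]},
        {"text": "His house", "tokens": ["His", "house"]},
    ],
}


def qa_pair(rec):
    """Flatten one record into (question text, list of reference answers)."""
    question = rec["question"]["text"]
    references = [answer["text"] for answer in rec["answers"]]
    return question, references


question, references = qa_pair(record)
print(question)    # Where does Joe Bloggs live?
print(references)  # ['At home', 'His house']
```

In practice a record of this shape is what each split yields once the dataset is loaded through the `datasets` library.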
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "The NarrativeQA dataset for question answering on long documents (movie scripts, books). It includes the list of documents with Wikipedia summaries, links to full stories, and questions and answers.\n", "citation": "@article{narrativeqa,\nauthor = {Tom\\'a\\v s Ko\\v cisk\\'y and Jonathan Schwarz and Phil Blunsom and\n Chris Dyer and Karl Moritz Hermann and G\\'abor Melis and\n Edward Grefenstette},\ntitle = {The {NarrativeQA} Reading Comprehension Challenge},\njournal = {Transactions of the Association for Computational Linguistics},\nurl = {https://TBD},\nvolume = {TBD},\nyear = {2018},\npages = {TBD},\n}\n", "homepage": "https://github.com/deepmind/narrativeqa", "license": "", "features": {"document": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "kind": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "file_size": {"dtype": "int32", "id": null, "_type": "Value"}, "word_count": {"dtype": "int32", "id": null, "_type": "Value"}, "start": {"dtype": "string", "id": null, "_type": "Value"}, "end": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "question": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "answers": [{"text": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}]}, "post_processed": null, "supervised_keys": null, "builder_name": "narrative_qa", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 11565035136, "num_examples": 32747, "dataset_name": "narrative_qa"}, "test": {"name": "test", "num_bytes": 3549964281, "num_examples": 10557, "dataset_name": "narrative_qa"}, "validation": {"name": "validation", "num_bytes": 1211859490, "num_examples": 3461, "dataset_name": "narrative_qa"}}, "download_checksums": {"https://storage.googleapis.com/huggingface-nlp/datasets/narrative_qa/narrativeqa_full_text.zip": {"num_bytes": 187416846, "checksum": "3e179a579d348da37b4929f20ece277a721f853fdc5efc11f915904de2a71727"}, "https://github.com/deepmind/narrativeqa/archive/master.zip": {"num_bytes": 5112076, "checksum": "d9fc92d5f53409f845ba44780e6689676d879c739589861b4805064513d1476b"}}, "download_size": 192528922, "post_processing_size": null, "dataset_size": 16326858907, "size_in_bytes": 16519387829}}
 
 
default/partial-test/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:74b0e6adce3947e4c480ae02986021dbf6d263028b1511c46c449720cec74886
+ size 19835606
default/partial-test/0001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1cf9ed8162ae5ad97dad3fea5e9cdfd69045940420ed908e9f2f131caab243ec
+ size 15996325
default/partial-test/0002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:43e3d70ba608217302850e51b8a053448e43dde1fa7775caedb99ae312862afb
+ size 14075951
default/partial-test/0003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0b9d2ed73ce63d08f6a42917451abd05864bea04ae2868d71a89e30a659771af
+ size 19712927
default/partial-test/0004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aadcdbdf04cec272a330bb652b726f8c4f0d035633079e3f10ded88312f8d67a
+ size 15217115
default/partial-test/0005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:46d5279958c822c97fefff934d25634525cfac638d71c46020afa36bb09a7b09
+ size 9980024
default/partial-train/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1347ba9f62f6fb37a84ae43a17b9f6bb3cc6b22d7ed5f4912743292a33160e67
+ size 20437890
default/partial-train/0001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2d5d40133a5daae64b54b54bbd59b0dab703d4abc8bfdbfd922b1b7a440a551f
+ size 16166044
default/partial-train/0002.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c31dfa1d4816fbabbe5401ae7dfffee5f60cc6b47dc7e2d4877c0651007c594d
+ size 15405402
default/partial-train/0003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:568fbb96c55713e57c3168937ac49146486a6be3546c565686f0b4c56a6df6c6
+ size 14010359
default/partial-train/0004.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e85ad0f7c52752d8bcf89725a802f14520460834fcb85a9e915c4643a2e4f981
+ size 12746496
default/partial-train/0005.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b2672bc224e631af15dba829bb8cde20876838fb2a8dfa91414ccdc2c3ce126
+ size 21249046
default/partial-train/0006.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a82647578d6654c3797bb4093927044568b1b7f5b069f60f7957ca9e6ab13bbc
+ size 15261913
default/partial-train/0007.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9b2919aff9eb012561ef9d87d764454fc804bc93621f9c3fb7c9cb32f6128e7a
+ size 20063911
default/partial-validation/0000.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd5bb985e8af9cf493ca19056d60e866253007d700906830eaa912b7cf7a0809
+ size 18932391
default/partial-validation/0001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2385c644d35764fc2f58a497d99f9639865678cd6e86196652aede9c18f481e6
+ size 11153717
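Each added `.parquet` entry above is a Git LFS pointer file, not the data itself: three lines giving the spec version, a `sha256` object id, and the byte size of the real object. A small parser for that three-line format (the pointer text below copies the first added file):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file into a dict of its key/value lines.

    Each line is "<key> <value>"; the value may itself contain no spaces
    in the three keys used here (version, oid, size)."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields


pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:74b0e6adce3947e4c480ae02986021dbf6d263028b1511c46c449720cec74886
size 19835606
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])        # sha256:74b0e6adce...
print(int(info["size"]))  # 19835606
```

The `oid` is the SHA-256 of the actual parquet file, so the same digest doubles as an integrity check after the object is fetched from LFS storage.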
narrativeqa.py DELETED
@@ -1,159 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """NarrativeQA Reading Comprehension Challenge"""
-
-
- import csv
- import os
-
- import datasets
-
-
- _CITATION = """\
- @article{narrativeqa,
- author = {Tom\\'a\\v s Ko\\v cisk\\'y and Jonathan Schwarz and Phil Blunsom and
-           Chris Dyer and Karl Moritz Hermann and G\\'abor Melis and
-           Edward Grefenstette},
- title = {The {NarrativeQA} Reading Comprehension Challenge},
- journal = {Transactions of the Association for Computational Linguistics},
- url = {https://TBD},
- volume = {TBD},
- year = {2018},
- pages = {TBD},
- }
- """
-
- _DESCRIPTION = """\
- The NarrativeQA dataset for question answering on long documents (movie scripts, books). It includes the list of documents with Wikipedia summaries, links to full stories, and questions and answers.
- """
-
- _URLS = {
-     "full_text": "https://storage.googleapis.com/huggingface-nlp/datasets/narrative_qa/narrativeqa_full_text.zip",
-     "repo": "https://github.com/deepmind/narrativeqa/archive/master.zip",
- }
-
-
- class NarrativeQa(datasets.GeneratorBasedBuilder):
-     """NarrativeQA: question answering on long documents"""
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             citation=_CITATION,
-             features=datasets.Features(
-                 {
-                     "document": {
-                         "id": datasets.Value("string"),
-                         "kind": datasets.Value("string"),
-                         "url": datasets.Value("string"),
-                         "file_size": datasets.Value("int32"),
-                         "word_count": datasets.Value("int32"),
-                         "start": datasets.Value("string"),
-                         "end": datasets.Value("string"),
-                         "summary": {
-                             "text": datasets.Value("string"),
-                             "tokens": datasets.features.Sequence(datasets.Value("string")),
-                             "url": datasets.Value("string"),
-                             "title": datasets.Value("string"),
-                         },
-                         "text": datasets.Value("string"),
-                     },
-                     "question": {
-                         "text": datasets.Value("string"),
-                         "tokens": datasets.features.Sequence(datasets.Value("string")),
-                     },
-                     "answers": [
-                         {
-                             "text": datasets.Value("string"),
-                             "tokens": datasets.features.Sequence(datasets.Value("string")),
-                         }
-                     ],
-                 }
-             ),
-             homepage="https://github.com/deepmind/narrativeqa",
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-
-         dl_dir = dl_manager.download_and_extract(_URLS)
-         dl_dir["repo"] = os.path.join(dl_dir["repo"], "narrativeqa-master")
-
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={"repo_dir": dl_dir["repo"], "full_text_dir": dl_dir["full_text"], "split": "train"},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={"repo_dir": dl_dir["repo"], "full_text_dir": dl_dir["full_text"], "split": "test"},
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={"repo_dir": dl_dir["repo"], "full_text_dir": dl_dir["full_text"], "split": "valid"},
-             ),
-         ]
-
-     def _generate_examples(self, repo_dir, full_text_dir, split):
-         """Yields examples."""
-         documents = {}
-         with open(os.path.join(repo_dir, "documents.csv"), encoding="utf-8") as f:
-             reader = csv.DictReader(f)
-             for row in reader:
-                 if row["set"] != split:
-                     continue
-                 documents[row["document_id"]] = row
-
-         summaries = {}
-         with open(os.path.join(repo_dir, "third_party", "wikipedia", "summaries.csv"), encoding="utf-8") as f:
-             reader = csv.DictReader(f)
-             for row in reader:
-                 if row["set"] != split:
-                     continue
-                 summaries[row["document_id"]] = row
-
-         with open(os.path.join(repo_dir, "qaps.csv"), encoding="utf-8") as f:
-             reader = csv.DictReader(f)
-             for id_, row in enumerate(reader):
-                 if row["set"] != split:
-                     continue
-                 document_id = row["document_id"]
-                 document = documents[document_id]
-                 summary = summaries[document_id]
-                 full_text = open(os.path.join(full_text_dir, document_id + ".content"), encoding="latin-1").read()
-                 res = {
-                     "document": {
-                         "id": document["document_id"],
-                         "kind": document["kind"],
-                         "url": document["story_url"],
-                         "file_size": document["story_file_size"],
-                         "word_count": document["story_word_count"],
-                         "start": document["story_start"],
-                         "end": document["story_end"],
-                         "summary": {
-                             "text": summary["summary"],
-                             "tokens": summary["summary_tokenized"].split(),
-                             "url": document["wiki_url"],
-                             "title": document["wiki_title"],
-                         },
-                         "text": full_text,
-                     },
-                     "question": {"text": row["question"], "tokens": row["question_tokenized"].split()},
-                     "answers": [
-                         {"text": row["answer1"], "tokens": row["answer1_tokenized"].split()},
-                         {"text": row["answer2"], "tokens": row["answer2_tokenized"].split()},
-                     ],
-                 }
-                 yield id_, res
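The deleted script's `_generate_examples` joins three CSV files on `document_id`, filtering each by the `set` column before assembling records. That join pattern can be sketched with in-memory CSV text; the column names follow the script, but the rows below are made-up placeholders:

```python
import csv
import io

# Tiny stand-ins for documents.csv and qaps.csv (placeholder rows; real
# files carry many more columns, e.g. story_file_size, wiki_url).
documents_csv = """\
document_id,set,kind,story_url
d1,train,movie,http://example.com/d1
d2,test,movie,http://example.com/d2
"""

qaps_csv = """\
document_id,set,question,answer1,answer2
d1,train,Where does Joe live?,At home,His house
d2,test,Who is Joe?,A traveller,An explorer
"""


def index_by_document(text, split):
    """Mirror the script's pattern: keep only rows for one split,
    keyed by document_id for O(1) lookup during the join."""
    return {
        row["document_id"]: row
        for row in csv.DictReader(io.StringIO(text))
        if row["set"] == split
    }


def generate_examples(split):
    """Join each QA row to its document, as the script does with full rows."""
    documents = index_by_document(documents_csv, split)
    for row in csv.DictReader(io.StringIO(qaps_csv)):
        if row["set"] != split:
            continue
        doc = documents[row["document_id"]]
        yield {
            "document": {"id": doc["document_id"], "url": doc["story_url"]},
            "question": row["question"],
            "answers": [row["answer1"], row["answer2"]],
        }


examples = list(generate_examples("train"))
print(len(examples))            # 1
print(examples[0]["question"])  # Where does Joe live?
```

Indexing `documents.csv` (and `summaries.csv`) into dicts first means the pass over `qaps.csv` never rescans the other files, which matters since each document backs many question rows.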