system HF staff committed on
Commit
a82dfbc
0 Parent(s):

Update files from the datasets library (from 1.3.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.3.0

.gitattributes ADDED
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bin.* filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zstandard filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
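Each pattern above routes a binary file type through Git LFS instead of plain Git. As an illustrative sketch only (gitattributes matching has extra rules, e.g. `saved_model/**/*` matches nested paths), Python's `fnmatch` shows how such glob patterns select file basenames; `tracked_by_lfs` is a hypothetical helper, not part of Git or this repo:

```python
from fnmatch import fnmatch

# A representative subset of the patterns listed in .gitattributes above.
LFS_PATTERNS = ["*.7z", "*.arrow", "*.bin", "*.h5", "*.parquet", "*.zip", "*tfevents*"]


def tracked_by_lfs(basename: str) -> bool:
    """Return True if a file's basename matches any LFS-tracked pattern."""
    return any(fnmatch(basename, pattern) for pattern in LFS_PATTERNS)


print(tracked_by_lfs("train.parquet"))  # True: matches *.parquet
print(tracked_by_lfs("README.md"))      # False: text files stay in plain Git
```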
README.md ADDED
---
annotations_creators:
- crowdsourced
language_creators:
- found
languages:
- en
licenses:
- apache-2-0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- abstractive-qa
---

# Dataset Card for Narrative QA Manual

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [NarrativeQA Homepage](https://deepmind.com/research/open-source/narrativeqa)
- **Repository:** [NarrativeQA Repo](https://github.com/deepmind/narrativeqa)
- **Paper:** [The NarrativeQA Reading Comprehension Challenge](https://arxiv.org/pdf/1712.07040.pdf)
- **Leaderboard:**
- **Point of Contact:** [Tomáš Kočiský](mailto:tkocisky@google.com), [Jonathan Schwarz](mailto:schwarzjn@google.com), [Phil Blunsom](mailto:pblunsom@google.com), [Chris Dyer](mailto:cdyer@google.com), [Karl Moritz Hermann](mailto:kmh@google.com), [Gábor Melis](mailto:melisgl@google.com), [Edward Grefenstette](mailto:etg@google.com)

### Dataset Summary

NarrativeQA Manual is an English-language dataset of stories and corresponding questions designed to test reading comprehension, especially on long documents. THIS DATASET REQUIRES A MANUALLY DOWNLOADED FILE! Because the script in the original repository downloads the stories from their original URLs every time, the links are sometimes broken or invalid. Therefore, you need to manually download the stories for this dataset using the script provided by the authors (https://github.com/deepmind/narrativeqa/blob/master/download_stories.sh). Running the shell script creates a folder named "tmp" in the root directory and downloads the stories there. This folder containing the stories can then be used to load the dataset via `datasets.load_dataset("narrativeqa_manual", data_dir="<path/to/folder>")`.

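Since loading fails when the manually downloaded folder is missing, it can help to sanity-check it first. `check_manual_dir` below is a hypothetical helper that mirrors the dataset script's own behaviour (the folder must exist, and `download_stories.sh` saves stories as `<document_id>.content` files):

```python
import os


def check_manual_dir(manual_dir):
    """Validate the manually downloaded stories folder before loading.

    Hypothetical helper mirroring the dataset script: the directory must
    exist, and the story files are those whose names contain "content".
    """
    manual_dir = os.path.abspath(os.path.expanduser(manual_dir))
    if not os.path.exists(manual_dir):
        raise FileNotFoundError(
            f"{manual_dir} does not exist; run download_stories.sh first"
        )
    return [f for f in os.listdir(manual_dir) if "content" in f]


# Typical usage once the stories are in place (requires the `datasets` library):
# stories = check_manual_dir("tmp")
# dataset = datasets.load_dataset("narrativeqa_manual", data_dir="tmp")
```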
### Supported Tasks and Leaderboards

The dataset is used to test reading comprehension. There are two tasks proposed in the paper: "summaries only" and "stories only", depending on whether the human-generated summary or the full story text is used to answer the question.

### Languages

English

## Dataset Structure

### Data Instances

A typical data point consists of a question-and-answer pair along with the summary/story that can be used to answer the question. Additional information such as the URL, word count, and Wikipedia page is also provided.

A typical example looks like this:
```
{
    "document": {
        "id": "23jncj2n3534563110",
        "kind": "movie",
        "url": "https://www.imsdb.com/Movie%20Scripts/Name%20of%20Movie.html",
        "file_size": 80473,
        "word_count": 41000,
        "start": "MOVIE screenplay by",
        "end": ". THE END",
        "summary": {
            "text": "Joe Bloggs begins his journey exploring...",
            "tokens": ["Joe", "Bloggs", "begins", "his", "journey", "exploring", ...],
            "url": "http://en.wikipedia.org/wiki/Name_of_Movie",
            "title": "Name of Movie (film)"
        },
        "text": "MOVIE screenplay by John Doe\nSCENE 1..."
    },
    "question": {
        "text": "Where does Joe Bloggs live?",
        "tokens": ["Where", "does", "Joe", "Bloggs", "live", "?"]
    },
    "answers": [
        {"text": "At home", "tokens": ["At", "home"]},
        {"text": "His house", "tokens": ["His", "house"]}
    ]
}
```

### Data Fields

- `document.id` - Unique ID for the story.
- `document.kind` - "movie" or "gutenberg", depending on the source of the story.
- `document.url` - The URL the story was downloaded from.
- `document.file_size` - File size (in bytes) of the story.
- `document.word_count` - Number of tokens in the story.
- `document.start` - First 3 tokens of the story, used for verifying that the story hasn't been modified.
- `document.end` - Last 3 tokens of the story, used for verifying that the story hasn't been modified.
- `document.summary.text` - Text of the Wikipedia summary of the story.
- `document.summary.tokens` - Tokenized version of `document.summary.text`.
- `document.summary.url` - Wikipedia URL of the summary.
- `document.summary.title` - Wikipedia title of the summary.
- `question` - `{"text":"...", "tokens":[...]}` for the question about the story.
- `answers` - List of `{"text":"...", "tokens":[...]}` entries, one per valid answer to the question.

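The `document.start` and `document.end` fields enable a simple integrity check on a loaded story. A minimal sketch with an illustrative instance (the values below are made up, not taken from the real data):

```python
# An illustrative instance shaped like the fields above (values are made up).
instance = {
    "document": {
        "start": "MOVIE screenplay by",
        "end": ". THE END",
        "text": "MOVIE screenplay by John Doe SCENE 1 . THE END",
        "summary": {"text": "Joe Bloggs begins his journey...", "tokens": []},
    },
    "question": {"text": "Where does Joe Bloggs live?"},
    "answers": [{"text": "At home"}, {"text": "His house"}],
}

# `start`/`end` hold the first/last 3 tokens, so a story can be checked
# for accidental modification after the manual download:
tokens = instance["document"]["text"].split()
assert tokens[:3] == instance["document"]["start"].split()
assert tokens[-3:] == instance["document"]["end"].split()
```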
### Data Splits

The data is split into training, validation, and test sets by story (i.e., the same story cannot appear in more than one split):

| Train | Valid | Test  |
| ----- | ----- | ----- |
| 32747 | 3461  | 10557 |

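The counts in the table above sum to 46765 question-answer pairs overall; a quick check of the totals and proportions:

```python
# Question-answer pair counts per split, taken from the table above.
splits = {"train": 32747, "valid": 3461, "test": 10557}

total = sum(splits.values())
print(total)  # 46765
for name, count in splits.items():
    print(f"{name}: {count / total:.1%}")
```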
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

Stories and movie scripts were downloaded from [Project Gutenberg](https://www.gutenberg.org) and a range of movie script repositories (mainly [imsdb](http://www.imsdb.com)).

#### Who are the source language producers?

The language producers are the authors of the stories and scripts, as well as Amazon Mechanical Turk workers for the questions.

### Annotations

#### Annotation process

Amazon Mechanical Turk workers were provided with human-written summaries of the stories (to make the annotation tractable and to lead annotators towards asking non-localized questions). Stories were matched with plot summaries from Wikipedia using titles, and the matching was verified with help from human annotators. The annotators were asked to determine whether both the story and the summary refer to a movie or a book (as some books are made into movies), or whether they are the same part in a series produced in the same year. Annotators were then instructed to write 10 question-answer pairs each, based solely on a given summary, imagining that they were writing questions to test students who had read the full stories but not the summaries. Questions had to be specific enough, given the length and complexity of the narratives, and to cover a diverse set of questions about characters, events, why things happened, and so on. Annotators were encouraged to use their own words and were prevented from copying. Answers had to be grammatical, though short answers (one word, a few-word phrase, or a short sentence) were explicitly allowed, since answering with a full sentence is frequently perceived as artificial when asking about factual information. Annotators were asked to avoid extra, unnecessary information in the question or the answer, and to avoid yes/no questions or questions about the author or the actors.

#### Who are the annotators?

Amazon Mechanical Turk workers.

### Personal and Sensitive Information

None

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The dataset is released under an [Apache-2.0 License](https://github.com/deepmind/narrativeqa/blob/master/LICENSE).

### Citation Information

```
@article{narrativeqa,
    author = {Tom\'a\v s Ko\v cisk\'y and Jonathan Schwarz and Phil Blunsom and
              Chris Dyer and Karl Moritz Hermann and G\'abor Melis and
              Edward Grefenstette},
    title = {The {NarrativeQA} Reading Comprehension Challenge},
    journal = {Transactions of the Association for Computational Linguistics},
    url = {https://TBD},
    volume = {TBD},
    year = {2018},
    pages = {TBD},
}
```

### Contributions

Thanks to [@rsanjaykamath](https://github.com/rsanjaykamath) for adding this dataset.
dataset_infos.json ADDED
{"default": {"description": "The Narrative QA Manual dataset is a reading comprehension dataset, in which the reader must answer questions about stories by reading entire books or movie scripts. The QA tasks are designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience.\\\nTHIS DATASET REQUIRES A MANUALLY DOWNLOADED FILE! Because the script in the original repository downloads the stories from their original URLs every time, the links are sometimes broken or invalid. Therefore, you need to manually download the stories for this dataset using the script provided by the authors (https://github.com/deepmind/narrativeqa/blob/master/download_stories.sh). Running the shell script creates a folder named \"tmp\" in the root directory and downloads the stories there. This folder containing the stories can be used to load the dataset via `datasets.load_dataset(\"narrativeqa_manual\", data_dir=\"<path/to/folder>\")`.\n", "citation": "@article{kovcisky2018narrativeqa,\n title={The narrativeqa reading comprehension challenge},\n author={Ko{\\v{c}}isk{\\'y}, Tom{\\'a}{\\v{s}} and Schwarz, Jonathan and Blunsom, Phil and Dyer, Chris and Hermann, Karl Moritz and Melis, G{\\'a}bor and Grefenstette, Edward},\n journal={Transactions of the Association for Computational Linguistics},\n volume={6},\n pages={317--328},\n year={2018},\n publisher={MIT Press}\n}\n", "homepage": "https://deepmind.com/research/publications/narrativeqa-reading-comprehension-challenge", "license": "https://github.com/deepmind/narrativeqa/blob/master/LICENSE", "features": {"document": {"id": {"dtype": "string", "id": null, "_type": "Value"}, "kind": {"dtype": "string", "id": null, "_type": "Value"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "file_size": {"dtype": "int32", "id": null, "_type": "Value"}, "word_count": {"dtype": "int32", "id": null, "_type": "Value"}, "start": {"dtype": "string", "id": null, "_type": "Value"}, "end": {"dtype": "string", "id": null, "_type": "Value"}, "summary": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "url": {"dtype": "string", "id": null, "_type": "Value"}, "title": {"dtype": "string", "id": null, "_type": "Value"}}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "question": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}, "answers": [{"text": {"dtype": "string", "id": null, "_type": "Value"}, "tokens": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}}]}, "post_processed": null, "supervised_keys": null, "builder_name": "narrativeqa_manual", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 9115940054, "num_examples": 32747, "dataset_name": "narrativeqa_manual"}, "test": {"name": "test", "num_bytes": 2911702563, "num_examples": 10557, "dataset_name": "narrativeqa_manual"}, "validation": {"name": "validation", "num_bytes": 968994186, "num_examples": 3461, "dataset_name": "narrativeqa_manual"}}, "download_checksums": {"https://raw.githubusercontent.com/deepmind/narrativeqa/master/documents.csv": {"num_bytes": 341683, "checksum": "6dffa4cc0b5c9963fe3a097c87f04aa7767da36627500cc7a0d69e2405e1144a"}, "https://raw.githubusercontent.com/deepmind/narrativeqa/master/third_party/wikipedia/summaries.csv": {"num_bytes": 10821085, "checksum": "87d12b849219015c4fe717e95c707671d802c181145e8ba2acc11dff23ea7c75"}, "https://raw.githubusercontent.com/deepmind/narrativeqa/master/qaps.csv": {"num_bytes": 11475505, "checksum": "990b02af0b5280de210f0e6b80f43b3fab80dc6de630c4d5059a1b7131c26e38"}}, "download_size": 22638273, "post_processing_size": null, "dataset_size": 12996636803, "size_in_bytes": 13019275076}}
dummy/1.0.0/dummy_data.zip ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:23516fde7bc6e3cdba094b1e0d339d1836818f8adec6335a27c8e0ee4f498167
size 4698
narrativeqa_manual.py ADDED
# coding=utf-8
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""NarrativeQA Reading Comprehension Challenge"""

from __future__ import absolute_import, division, print_function

import csv
import os
from os import listdir
from os.path import isfile, join

import datasets


# Backslashes are doubled so the BibTeX accent commands (\v, \') survive
# Python string escaping instead of becoming control characters.
_CITATION = """\
@article{kovcisky2018narrativeqa,
    title={The narrativeqa reading comprehension challenge},
    author={Ko{\\v{c}}isk{\\'y}, Tom{\\'a}{\\v{s}} and Schwarz, Jonathan and Blunsom, Phil and Dyer, Chris and Hermann, Karl Moritz and Melis, G{\\'a}bor and Grefenstette, Edward},
    journal={Transactions of the Association for Computational Linguistics},
    volume={6},
    pages={317--328},
    year={2018},
    publisher={MIT Press}
}
"""


_DESCRIPTION = """\
The Narrative QA Manual dataset is a reading comprehension \
dataset, in which the reader must answer questions about stories \
by reading entire books or movie scripts. \
The QA tasks are designed so that successfully answering their questions \
requires understanding the underlying narrative rather than \
relying on shallow pattern matching or salience.
THIS DATASET REQUIRES A MANUALLY DOWNLOADED FILE! \
Because the script in the original repository downloads the stories from their original URLs every time, \
the links are sometimes broken or invalid. \
Therefore, you need to manually download the stories for this dataset using the script provided by the authors \
(https://github.com/deepmind/narrativeqa/blob/master/download_stories.sh). Running the shell script creates a folder named "tmp" \
in the root directory and downloads the stories there. This folder containing the stories \
can be used to load the dataset via `datasets.load_dataset("narrativeqa_manual", data_dir="<path/to/folder>")`."""


_HOMEPAGE = "https://deepmind.com/research/publications/narrativeqa-reading-comprehension-challenge"
_LICENSE = "https://github.com/deepmind/narrativeqa/blob/master/LICENSE"


# The HuggingFace datasets library doesn't host the datasets but only points to the original files.
# This can be an arbitrary nested dict/list of URLs (see below in the `_split_generators` method).
_URL = "https://github.com/deepmind/narrativeqa"
_URLS = {
    "documents": "https://raw.githubusercontent.com/deepmind/narrativeqa/master/documents.csv",
    "summaries": "https://raw.githubusercontent.com/deepmind/narrativeqa/master/third_party/wikipedia/summaries.csv",
    "qaps": "https://raw.githubusercontent.com/deepmind/narrativeqa/master/qaps.csv",
}


class NarrativeqaManual(datasets.GeneratorBasedBuilder):
    """The NarrativeQA Manual dataset"""

    VERSION = datasets.Version("1.0.0")

    @property
    def manual_download_instructions(self):
        return """You need to manually download the stories for this dataset using the script provided by the authors \
(https://github.com/deepmind/narrativeqa/blob/master/download_stories.sh). Running the shell script creates a folder named "tmp" \
in the root directory and downloads the stories there. This folder containing the stories \
can be used to load the dataset via `datasets.load_dataset("narrativeqa_manual", data_dir="<path/to/folder>")`."""

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "document": {
                        "id": datasets.Value("string"),
                        "kind": datasets.Value("string"),
                        "url": datasets.Value("string"),
                        "file_size": datasets.Value("int32"),
                        "word_count": datasets.Value("int32"),
                        "start": datasets.Value("string"),
                        "end": datasets.Value("string"),
                        "summary": {
                            "text": datasets.Value("string"),
                            "tokens": datasets.features.Sequence(datasets.Value("string")),
                            "url": datasets.Value("string"),
                            "title": datasets.Value("string"),
                        },
                        "text": datasets.Value("string"),
                    },
                    "question": {
                        "text": datasets.Value("string"),
                        "tokens": datasets.features.Sequence(datasets.Value("string")),
                    },
                    "answers": [
                        {
                            "text": datasets.Value("string"),
                            "tokens": datasets.features.Sequence(datasets.Value("string")),
                        }
                    ],
                }
            ),
            supervised_keys=None,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        data_dir = dl_manager.download_and_extract(_URLS)
        manual_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))

        if not os.path.exists(manual_dir):
            raise FileNotFoundError(
                "{} does not exist. Make sure you insert a manual dir via `datasets.load_dataset('narrativeqa_manual', data_dir=...)` that includes the stories downloaded from the original repository. Manual download instructions: {}".format(
                    manual_dir, self.manual_download_instructions
                )
            )

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "data_dir": data_dir,
                    "manual_dir": manual_dir,
                    "split": "train",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "data_dir": data_dir,
                    "manual_dir": manual_dir,
                    "split": "test",
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "data_dir": data_dir,
                    "manual_dir": manual_dir,
                    "split": "valid",
                },
            ),
        ]

    def _generate_examples(self, data_dir, manual_dir, split):
        """Yields examples."""
        documents = {}
        with open(data_dir["documents"], encoding="utf-8") as f:
            reader = csv.DictReader(f)
            for row in reader:
                if row["set"] != split:
                    continue
                documents[row["document_id"]] = row

        summaries = {}
        with open(data_dir["summaries"], encoding="utf-8") as f:
            reader = csv.DictReader(f)
            for row in reader:
                if row["set"] != split:
                    continue
                summaries[row["document_id"]] = row

        # Story files downloaded by download_stories.sh are named "<document_id>.content".
        onlyfiles = [f for f in listdir(manual_dir) if isfile(join(manual_dir, f))]
        story_texts = {}
        for i in onlyfiles:
            if "content" in i:
                with open(os.path.join(manual_dir, i), "r", encoding="utf-8", errors="ignore") as f:
                    text = f.read()
                story_texts[i.split(".")[0]] = text

        with open(data_dir["qaps"], encoding="utf-8") as f:
            reader = csv.DictReader(f)
            for id_, row in enumerate(reader):
                if row["set"] != split:
                    continue
                document_id = row["document_id"]
                document = documents[document_id]
                summary = summaries[document_id]
                full_text = story_texts[document_id]
                res = {
                    "document": {
                        "id": document["document_id"],
                        "kind": document["kind"],
                        "url": document["story_url"],
                        "file_size": document["story_file_size"],
                        "word_count": document["story_word_count"],
                        "start": document["story_start"],
                        "end": document["story_end"],
                        "summary": {
                            "text": summary["summary"],
                            "tokens": summary["summary_tokenized"].split(),
                            "url": document["wiki_url"],
                            "title": document["wiki_title"],
                        },
                        "text": full_text,
                    },
                    "question": {"text": row["question"], "tokens": row["question_tokenized"].split()},
                    "answers": [
                        {"text": row["answer1"], "tokens": row["answer1_tokenized"].split()},
                        {"text": row["answer2"], "tokens": row["answer2_tokenized"].split()},
                    ],
                }
                yield id_, res