Languages: English
Multilinguality: monolingual
Size Categories: 10K<n<100K
Language Creators: found
Annotations Creators: crowdsourced
Source Datasets: original
ArXiv: 1712.07040
License: apache-2.0
Files changed (2)
  1. README.md +25 -14
  2. narrativeqa.py +22 -10
README.md CHANGED
@@ -103,10 +103,9 @@ dataset_info:
 
 ## Dataset Description
 
-- **Homepage:** [NarrativeQA Homepage](https://deepmind.com/research/open-source/narrativeqa)
-- **Repository:** [NarrativeQA Repo](https://github.com/deepmind/narrativeqa)
-- **Paper:** [The NarrativeQA Reading Comprehension Challenge](https://arxiv.org/pdf/1712.07040.pdf)
-- **Leaderboard:**
+- **Repository:** https://github.com/deepmind/narrativeqa
+- **Paper:** https://arxiv.org/abs/1712.07040
+- **Paper:** https://aclanthology.org/Q18-1023/
 - **Point of Contact:** [Tomáš Kočiský](mailto:tkocisky@google.com) [Jonathan Schwarz](mailto:schwarzjn@google.com) [Phil Blunsom](mailto:pblunsom@google.com) [Chris Dyer](mailto:cdyer@google.com) [Karl Moritz Hermann](mailto:kmh@google.com) [Gábor Melis](mailto:melisgl@google.com) [Edward Grefenstette](mailto:etg@google.com)
 
 ### Dataset Summary
@@ -237,16 +236,28 @@ The dataset is released under a [Apache-2.0 License](https://github.com/deepmind
 ### Citation Information
 
 ```
-@article{narrativeqa,
-author = {Tom\'a\v s Ko\v cisk\'y and Jonathan Schwarz and Phil Blunsom and
-          Chris Dyer and Karl Moritz Hermann and G\'abor Melis and
-          Edward Grefenstette},
-title = {The {NarrativeQA} Reading Comprehension Challenge},
-journal = {Transactions of the Association for Computational Linguistics},
-url = {https://TBD},
-volume = {TBD},
-year = {2018},
-pages = {TBD},
+@article{kocisky-etal-2018-narrativeqa,
+    title = "The {N}arrative{QA} Reading Comprehension Challenge",
+    author = "Ko{\v{c}}isk{\'y}, Tom{\'a}{\v{s}} and
+      Schwarz, Jonathan and
+      Blunsom, Phil and
+      Dyer, Chris and
+      Hermann, Karl Moritz and
+      Melis, G{\'a}bor and
+      Grefenstette, Edward",
+    editor = "Lee, Lillian and
+      Johnson, Mark and
+      Toutanova, Kristina and
+      Roark, Brian",
+    journal = "Transactions of the Association for Computational Linguistics",
+    volume = "6",
+    year = "2018",
+    address = "Cambridge, MA",
+    publisher = "MIT Press",
+    url = "https://aclanthology.org/Q18-1023",
+    doi = "10.1162/tacl_a_00023",
+    pages = "317--328",
+    abstract = "Reading comprehension (RC){---}in contrast to information retrieval{---}requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess RC ability, in both artificial agents and children learning to read. However, existing RC datasets and tasks are dominated by questions that can be solved by selecting answers using superficial information (e.g., local context similarity or global term frequency); they thus fail to test for the essential integrative aspect of RC. To encourage progress on deeper comprehension of language, we present a new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts. These tasks are designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. We show that although humans solve the tasks easily, standard RC models struggle on the tasks presented here. We provide an analysis of the dataset and the challenges it presents.",
 }
 ```
 
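With the card updated, a quick smoke test is to load the dataset and inspect one record. A minimal sketch, assuming the dataset id `narrativeqa` on the Hugging Face Hub and the field layout defined by this repo's loading script (`question.text`, `answers[].text`); treat those names as assumptions:

```python
# Minimal sketch: load NarrativeQA via the `datasets` library and print one
# question with its reference answers. The dataset id "narrativeqa" and the
# field names below are assumptions based on this repo's loading script.
from datasets import load_dataset

dataset = load_dataset("narrativeqa", split="train")

example = dataset[0]
print(example["question"]["text"])                         # question string
print([answer["text"] for answer in example["answers"]])   # reference answers
```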
narrativeqa.py CHANGED
@@ -22,16 +22,28 @@ import datasets
 
 
 _CITATION = """\
-@article{narrativeqa,
-author = {Tom\\'a\\v s Ko\\v cisk\\'y and Jonathan Schwarz and Phil Blunsom and
-          Chris Dyer and Karl Moritz Hermann and G\\'abor Melis and
-          Edward Grefenstette},
-title = {The {NarrativeQA} Reading Comprehension Challenge},
-journal = {Transactions of the Association for Computational Linguistics},
-url = {https://TBD},
-volume = {TBD},
-year = {2018},
-pages = {TBD},
+@article{kocisky-etal-2018-narrativeqa,
+    title = "The {N}arrative{QA} Reading Comprehension Challenge",
+    author = "Ko{\\v{c}}isk{\\'y}, Tom{\\'a}{\\v{s}} and
+      Schwarz, Jonathan and
+      Blunsom, Phil and
+      Dyer, Chris and
+      Hermann, Karl Moritz and
+      Melis, G{\\'a}bor and
+      Grefenstette, Edward",
+    editor = "Lee, Lillian and
+      Johnson, Mark and
+      Toutanova, Kristina and
+      Roark, Brian",
+    journal = "Transactions of the Association for Computational Linguistics",
+    volume = "6",
+    year = "2018",
+    address = "Cambridge, MA",
+    publisher = "MIT Press",
+    url = "https://aclanthology.org/Q18-1023",
+    doi = "10.1162/tacl_a_00023",
+    pages = "317--328",
+    abstract = "Reading comprehension (RC){---}in contrast to information retrieval{---}requires integrating information and reasoning about events, entities, and their relations across a full document. Question answering is conventionally used to assess RC ability, in both artificial agents and children learning to read. However, existing RC datasets and tasks are dominated by questions that can be solved by selecting answers using superficial information (e.g., local context similarity or global term frequency); they thus fail to test for the essential integrative aspect of RC. To encourage progress on deeper comprehension of language, we present a new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts. These tasks are designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience. We show that although humans solve the tasks easily, standard RC models struggle on the tasks presented here. We provide an analysis of the dataset and the challenges it presents.",
 }
 """
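Note that the BibTeX accents in `narrativeqa.py` keep the doubled backslashes (`\\v`, `\\'`) used elsewhere in the string: `_CITATION` is a regular (non-raw) Python string, so a bare `\v` would be parsed as a vertical tab and corrupt the citation. The string itself is exposed to users through `DatasetInfo`, which is why the script mirrors the README. A hedged check, assuming the builder passes `_CITATION` to `datasets.DatasetInfo` in its `_info()` method, as Hugging Face loading scripts conventionally do:

```python
# Sketch: the citation defined in narrativeqa.py surfaces on every loaded
# split via DatasetInfo, so the BibTeX can be retrieved programmatically.
from datasets import load_dataset

ds = load_dataset("narrativeqa", split="validation")
print(ds.info.citation)  # should print the @article{kocisky-etal-2018-narrativeqa, ...} entry
```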