---
annotations_creators:
- machine-generated
language_creators:
- found
language:
- en
license:
- gfdl
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
---

## Dataset Creation

### Annotations

#### Annotation process

After allocating books to either the training, validation or test sets, we formed example ‘questions’ from chapters in the book by enumerating 21 consecutive sentences. In each question, the first 20 sentences form the context, and a word is removed from the 21st sentence, which becomes the query. Models must identify the answer word among a selection of 10 candidate answers appearing in the context sentences and the query. For finer-grained analyses, we evaluated four classes of question by removing distinct types of word: Named Entities, (Common) Nouns, Verbs and Prepositions.

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

```
GNU Free Documentation License v1.3
```

### Citation Information

```
@misc{hill2016goldilocks,
      title={The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations},
      author={Felix Hill and Antoine Bordes and Sumit Chopra and Jason Weston},
      year={2016},
      eprint={1511.02301},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### Contributions

Thanks to [@gchhablani](https://github.com/gchhablani) for adding this dataset.
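
### Illustrative example

As an illustration of the question-construction procedure described under "Annotation process" above, the sketch below builds one context/query/candidates/answer record from a window of 21 consecutive sentences: the first 20 sentences become the context, one word is removed from the 21st to form the query, and 10 candidate answers are drawn from the passage. This is a minimal, simplified sketch, not the authors' release code: the function name `build_question`, the whitespace tokenisation, and the way distractors are sampled (from all passage words rather than from words of the same class as the answer) are assumptions made only for illustration.

```python
import random

# Minimal sketch of CBT-style question construction (illustrative only, not
# the official pipeline): 20 context sentences, a 21st sentence with one word
# removed as the query, and 10 candidate answers drawn from the passage.

def build_question(sentences, answer_index=None, num_candidates=10, seed=0):
    """Build one (context, query, candidates, answer) example from 21 sentences."""
    assert len(sentences) == 21, "a question is formed from 21 consecutive sentences"
    rng = random.Random(seed)

    context = sentences[:20]              # first 20 sentences form the context
    query_tokens = sentences[20].split()  # naive whitespace tokenisation

    # Remove one word from the 21st sentence. The real dataset restricts the
    # removed word to a specific class (Named Entity, Common Noun, Verb or
    # Preposition); here any position may be chosen for simplicity.
    if answer_index is None:
        answer_index = rng.randrange(len(query_tokens))
    answer = query_tokens[answer_index]
    query = " ".join(
        tok if i != answer_index else "XXXXX"  # placeholder marking the removed word
        for i, tok in enumerate(query_tokens)
    )

    # Candidate answers: the true answer plus distractors that appear in the
    # context sentences or the query (simplified: any passage word may serve).
    passage_words = {w for s in context for w in s.split()} | set(query_tokens)
    distractors = [w for w in passage_words if w != answer]
    candidates = rng.sample(distractors, k=min(num_candidates - 1, len(distractors)))
    candidates.append(answer)
    rng.shuffle(candidates)

    return {"context": context, "query": query, "candidates": candidates, "answer": answer}
```

Applied to any 21-sentence window of a public-domain chapter, this yields one record analogous to a single question in the dataset.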