Datasets:
annotations_creators:
- expert-created
language_creators:
- unknown
language:
- en
license:
- unknown
multilinguality:
- unknown
size_categories:
- unknown
source_datasets:
- original
task_categories:
- other
task_ids: []
pretty_name: FairytaleQA
tags:
- question-generation
Dataset Card for GEM/FairytaleQA
Dataset Description
- Homepage: [Needs More Information]
- Repository: https://github.com/uci-soe/FairytaleQAData
- Paper: https://arxiv.org/abs/2203.13947
- Leaderboard: https://paperswithcode.com/sota/question-generation-on-fairytaleqa
- Point of Contact: Ying Xu, Dakuo Wang
Link to Main Data Card
You can find the main data card on the GEM Website.
Dataset Summary
The FairytaleQA dataset is an English-language dataset focusing on the narrative comprehension of kindergarten to eighth-grade students. Generated by educational experts based on an evidence-based theoretical framework, FairytaleQA consists of 10,580 explicit and implicit questions derived from 278 child-friendly stories, covering seven types of narrative elements or relations. The dataset was created to support both the Question Generation and Question Answering tasks.
You can load the dataset via:
```python
import datasets

data = datasets.load_dataset('GEM/FairytaleQA')
```
The data loader can be found here.
paper
https://arxiv.org/abs/2203.13947
authors
Ying Xu (University of California Irvine); Dakuo Wang (IBM Research); Mo Yu (IBM Research); Daniel Ritchie (University of California Irvine); Bingsheng Yao (Rensselaer Polytechnic Institute); Tongshuang Wu (University of Washington); Zheng Zhang (University of Notre Dame); Toby Jia-Jun Li (University of Notre Dame); Nora Bradford (University of California Irvine); Branda Sun (University of California Irvine); Tran Bao Hoang (University of California Irvine); Yisi Sang (Syracuse University); Yufang Hou (IBM Research Ireland); Xiaojuan Ma (Hong Kong Univ. of Sci and Tech); Diyi Yang (Georgia Institute of Technology); Nanyun Peng (University of California Los Angeles); Zhou Yu (Columbia University); Mark Warschauer (University of California Irvine)
Dataset Overview
Where to find the Data and its Documentation
Download
https://github.com/uci-soe/FairytaleQAData
Paper
https://arxiv.org/abs/2203.13947
BibTex
```
@inproceedings{xu2022fairytaleqa,
  author    = {Xu, Ying and Wang, Dakuo and Yu, Mo and Ritchie, Daniel and Yao, Bingsheng and Wu, Tongshuang and Zhang, Zheng and Li, Toby Jia-Jun and Bradford, Nora and Sun, Branda and Hoang, Tran Bao and Sang, Yisi and Hou, Yufang and Ma, Xiaojuan and Yang, Diyi and Peng, Nanyun and Yu, Zhou and Warschauer, Mark},
  title     = {Fantastic Questions and Where to Find Them: Fairytale{QA} -- An Authentic Dataset for Narrative Comprehension},
  publisher = {Association for Computational Linguistics},
  year      = {2022}
}
```
Contact Name
Ying Xu, Dakuo Wang
Contact Email
ying.xu@uci.edu, dakuo.wang@ibm.com
Has a Leaderboard?
yes
Leaderboard Link
https://paperswithcode.com/sota/question-generation-on-fairytaleqa
Leaderboard Details
The task was to generate questions corresponding to the given answers and the story context. Success on the Question Generation task is typically measured by achieving a high ROUGE-L score against the reference ground-truth questions.
Languages and Intended Use
Multilingual?
no
Covered Dialects
[N/A]
Covered Languages
English
Whose Language?
[N/A]
License
unknown: License information unavailable
Intended Use
The purpose of this dataset is to help develop systems that facilitate the assessment and training of narrative comprehension skills for children in the education domain. The dataset distinguishes fine-grained reading skills, such as the understanding of varying narrative elements, and contains high-quality QA-pairs generated by education experts with sufficient training and education-domain knowledge to create valid QA-pairs in a consistent way.
This dataset is suitable for developing models that automatically generate questions and QA-pairs, satisfying the need for a continuous supply of new questions and potentially enabling large-scale development of AI-supported interactive platforms for the learning and assessment of reading comprehension skills.
Primary Task
Question Generation
Communicative Goal
The task was to generate questions corresponding to the given answers and the story context. Models trained for this task can potentially enable large-scale development of AI-supported interactive platforms for the learning and assessment of reading comprehension skills.
Credit
Curation Organization Type(s)
academic
Curation Organization(s)
University of California Irvine
Dataset Creators
Ying Xu (University of California Irvine); Dakuo Wang (IBM Research); Mo Yu (IBM Research); Daniel Ritchie (University of California Irvine); Bingsheng Yao (Rensselaer Polytechnic Institute); Tongshuang Wu (University of Washington); Zheng Zhang (University of Notre Dame); Toby Jia-Jun Li (University of Notre Dame); Nora Bradford (University of California Irvine); Branda Sun (University of California Irvine); Tran Bao Hoang (University of California Irvine); Yisi Sang (Syracuse University); Yufang Hou (IBM Research Ireland); Xiaojuan Ma (Hong Kong Univ. of Sci and Tech); Diyi Yang (Georgia Institute of Technology); Nanyun Peng (University of California Los Angeles); Zhou Yu (Columbia University); Mark Warschauer (University of California Irvine)
Funding
Schmidt Futures
Who added the Dataset to GEM?
Dakuo Wang (IBM Research); Bingsheng Yao (Rensselaer Polytechnic Institute); Ying Xu (University of California Irvine)
Dataset Structure
Data Fields
- `story_name`: a string of the story name to which the story section content belongs. Full story data can be found here.
- `content`: a string of the story section(s) content related to the experts' labeled QA-pair. Used as the input for both the Question Generation and Question Answering tasks.
- `question`: a string of the question content. Used as the input for the Question Answering task and as the output for the Question Generation task.
- `answer`: a string of the answer content for all splits. Used as the input for the Question Generation task and as the output for the Question Answering task.
- `gem_id`: a string id following the GEM naming convention `GEM-${DATASET_NAME}-${SPLIT-NAME}-${id}`, where id is an incrementing number starting at 1.
- `target`: a string of the question content used for training.
- `references`: a list of strings containing the question content used for automatic evaluation.
- `local_or_sum`: a string of either `local` or `summary`, indicating whether the QA relates to one story section or to multiple sections.
- `attribute`: a string of one of `character`, `causal relationship`, `action`, `setting`, `feeling`, `prediction`, or `outcome resolution`; the classification of the QA by education expert annotators via seven narrative elements from an established framework.
- `ex_or_im`: a string of either `explicit` or `implicit`, indicating whether the answer can or cannot be directly found in the story content.
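A minimal sketch of inspecting these fields on a loaded instance, assuming the `datasets` loader shown above:

```python
import datasets

# Load the GEM version of FairytaleQA (train/validation/test splits).
data = datasets.load_dataset('GEM/FairytaleQA')

# Print every documented field of one test instance.
example = data['test'][0]
for field in ['story_name', 'content', 'question', 'answer', 'gem_id',
              'target', 'references', 'local_or_sum', 'attribute', 'ex_or_im']:
    print(f"{field}: {example[field]}")
```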
Reason for Structure
[N/A]
How were labels chosen?
A typical data point comprises a question, the corresponding story content, and one answer. Education expert annotators labeled whether the answer is locally relevant to one story section or requires summarization across multiple story sections, and whether the answer is explicit (can be directly found in the stories) or implicit (cannot be directly found in the story text). Additionally, education expert annotators categorized the QA-pairs via seven narrative elements from an established framework.
Example Instance
```
{'story_name': 'self-did-it',
 'content': '" what is your name ? " asked the girl from underground . " self is my name , " said the woman . that seemed a curious name to the girl , and she once more began to pull the fire apart . then the woman grew angry and began to scold , and built it all up again . thus they went on for a good while ; but at last , while they were in the midst of their pulling apart and building up of the fire , the woman upset the tar - barrel on the girl from underground . then the latter screamed and ran away , crying : " father , father ! self burned me ! " " nonsense , if self did it , then self must suffer for it ! " came the answer from below the hill .',
 'answer': 'the woman told the girl her name was self .',
 'question': "why did the girl's father think the girl burned herself ?",
 'gem_id': 'GEM-FairytaleQA-test-1006',
 'target': "why did the girl's father think the girl burned herself ?",
 'references': ["why did the girl's father think the girl burned herself ?"],
 'local_or_sum': 'local',
 'attribute': 'causal relationship',
 'ex_or_im': 'implicit'}
```
Data Splits
The data is randomly split into train, validation, and test sets. The final split sizes are as follows:
|  | Train | Validation | Test |
|---|---|---|---|
| # Books | 232 | 23 | 23 |
| # QA-Pairs | 8548 | 1025 | 1007 |
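A quick check of these counts, again assuming the `datasets` loader shown above:

```python
import datasets

data = datasets.load_dataset('GEM/FairytaleQA')

# Compare the QA-pair counts per split against the table above.
for split in ['train', 'validation', 'test']:
    print(f"{split}: {len(data[split])} QA-pairs")
```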
Splitting Criteria
The books are randomly split into train/validation/test sets, keeping the ratio of QA-pairs across the train:validation:test splits close to 8:1:1.
Dataset in GEM
Rationale for Inclusion in GEM
Why is the Dataset in GEM?
The dataset distinguishes fine-grained reading skills, such as the understanding of varying narrative elements, and contains high-quality QA-pairs generated by education experts with sufficient training and education domain knowledge to create valid QA-pairs in a consistent way.
Similar Datasets
no
Ability that the Dataset measures
This dataset is suitable for developing models to automatically generate questions or QA-pairs that satisfy the need for a continuous supply of new questions, which can potentially enable large-scale development of AI-supported interactive platforms for the learning and assessment of reading comprehension skills.
GEM-Specific Curation
Modified for GEM?
yes
GEM Modifications
data points removed
Modification Details
The original data contains two answers from different annotators in the validation/test splits; we removed the second answer for the GEM version because it is not used for the Question Generation task.
Additional Splits?
no
Getting Started with the Task
Pointers to Resources
[N/A]
Previous Results
Previous Results
Measured Model Abilities
With the FairytaleQA dataset, the Question Generation task measures a model's capability to generate the various types of questions that correspond to different narrative elements.
Metrics
ROUGE
Proposed Evaluation
The task was to generate questions corresponding to the given answers and the story context. Success on this task is typically measured by achieving a high ROUGE score against the reference ground-truth questions.
Previous results available?
yes
Relevant Previous Results
A BART-based model currently achieves a ROUGE-L of 0.527/0.527 on the validation/test splits, reported as the baseline experiment in the dataset paper.
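As an illustration, the following sketch computes a ROUGE-L F-measure between the reference question from the example instance above and a hypothetical generated question, using the `rouge-score` package; the exact evaluation script and settings used in the paper are not specified here:

```python
from rouge_score import rouge_scorer  # pip install rouge-score

# ROUGE-L with stemming is a common configuration; the paper's exact
# settings are an assumption here.
scorer = rouge_scorer.RougeScorer(['rougeL'], use_stemmer=True)

reference = "why did the girl's father think the girl burned herself ?"
generated = "why did the father think the girl burned herself ?"  # hypothetical model output

score = scorer.score(reference, generated)
print(f"ROUGE-L F1: {score['rougeL'].fmeasure:.3f}")
```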
Dataset Curation
Original Curation
Original Curation Rationale
FairytaleQA was built to focus on comprehension of narratives in the education domain, targeting students from kindergarten to eighth grade. We focus on narrative comprehension because (1) it is a high-level comprehension skill that is strongly predictive of reading achievement and plays a central role in daily life, as people frequently encounter narratives in different forms; and (2) narrative stories have a clear structure of specific elements and relations among these elements, and existing validated narrative comprehension frameworks around this structure provide a basis for developing the annotation schema for our dataset.
Communicative Goal
The purpose of this dataset is to help develop systems to facilitate assessment and training of narrative comprehension skills for children in education domain.
Sourced from Different Sources
no
Language Data
How was Language Data Obtained?
Found
Where was it found?
Single website
Language Producers
The fairytale story texts are from the Project Gutenberg website.
Topics Covered
We gathered the text from the Project Gutenberg website, using “fairytale” as the search term.
Data Validation
validated by data curator
Data Preprocessing
Because a large number of fairytales were found, we used only the most popular stories, ranked by number of downloads, since these stories are presumably of higher quality. To ensure the readability of the text, we made a small number of minor revisions to obviously outdated vocabulary (e.g., changing “ere” to “before”) and to unconventional uses of punctuation (e.g., changing consecutive semicolons to periods).
These texts were broken down into small sections based on their semantic content by our annotators. The annotators were instructed to split the story into sections of 100-300 words that also contain meaningful content and are separated at natural story breaks. An initial annotator would split the story, and this would be reviewed by a cross-checking annotator. Most of the resulting sections were one natural paragraph of the original text.
Was Data Filtered?
manually
Filter Criteria
For each story, we evaluated the reading difficulty level using the textstat Python package, which scores text primarily based on sentence length, word length, and commonness of words. We excluded stories at the 10th-grade level or above.
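A minimal sketch of this kind of filtering; the card does not specify which `textstat` metric was used, so the Flesch-Kincaid grade below (based on sentence and word length) is an assumption:

```python
import textstat  # pip install textstat

def keep_story(text: str) -> bool:
    # Keep only stories below a 10th-grade reading level.
    return textstat.flesch_kincaid_grade(text) < 10

# Hypothetical example: a short, simple passage passes the filter.
print(keep_story("once upon a time there was a girl who lived under a hill ."))
```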
Structured Annotations
Additional Annotations?
expert created
Number of Raters
2<n<10
Rater Qualifications
All of these annotators have a B.A. degree in education, psychology, or cognitive science and have substantial experience in teaching and reading assessment. These annotators were supervised by three experts in literacy education.
Raters per Training Example
2
Raters per Test Example
3
Annotation Service?
no
Annotation Values
The dataset annotation distinguishes fine-grained reading skills, such as the understanding of varying narrative elements, and contains high-quality QA-pairs generated by education experts with sufficient training and education domain knowledge to create valid QA-pairs in a consistent way.
Any Quality Control?
validated by data curators
Quality Control Details
The annotators were instructed to imagine that they were creating questions to test elementary or middle school students in the process of reading a complete story. We required the annotators to generate only natural, open-ended questions, avoiding “yes-” or “no-” questions. We also instructed them to provide a diverse set of questions about 7 different narrative elements, and with both implicit and explicit questions.
We asked the annotators to also generate answers for each of their questions. We asked them to provide the shortest possible answers but did not restrict them to complete sentences or short phrases. We also asked the annotators to label which section(s) the question and answer was from.
All annotators received a two-week training in which each of them was familiarized with the coding template and conducted practice coding on the same five stories. The practice QA pairs were then reviewed by the other annotators and the three experts, and discrepancies among annotators were discussed. During the annotation process, the team met once every week to review and discuss each member’s work. All QA pairs were cross-checked by two annotators, and 10% of the QA pairs were additionally checked by the expert supervisor.
For the 46 stories used as the evaluation set, we annotated a second reference answer by asking an annotator to independently read the story and answer the questions generated by others.
Consent
Any Consent Policy?
yes
Consent Policy Details
During the annotation process, the team met once every week to review and discuss each member’s work. All QA pairs were cross-checked by two annotators, and 10% of the QA pairs were additionally checked by the expert supervisor.
Other Consented Downstream Use
Aside from the Question Generation task, the data creators and curators have used this data for Question Answering and QA-pair generation tasks, and to identify social stereotypes represented in story narratives.
Private Identifying Information (PII)
Contains PII?
no PII
Justification for no PII
The story content is from a publicly available website, and the annotated QA-pairs are about general knowledge of the story content, without references to the author or to any persons.
Maintenance
Any Maintenance Plan?
yes
Maintenance Plan Details
We plan to host various splits of the FairytaleQA dataset to better serve various types of research interests. We have the original data for two different split approaches: train/validation/test splits and splits by fairytale origin. We also plan to host the dataset on multiple platforms for various tasks.
Maintainer Contact Information
Daniel Ritchie
Any Contestation Mechanism?
no mechanism
Broader Social Context
Previous Work on the Social Impact of the Dataset
Usage of Models based on the Data
yes - models trained on this dataset
Social Impact Observations
[N/A]
Changes as Consequence of Social Impact
[N/A]
Impact on Under-Served Communities
Addresses needs of underserved Communities?
yes
Details on how Dataset Addresses the Needs
From the educational perspective, given that reading comprehension is a multicomponent skill, it is ideal for comprehension questions to be able to identify students’ performance in specific sub-skills, thus allowing teachers to provide tailored guidance.
Discussion of Biases
Any Documented Social Biases?
unsure
Are the Language Producers Representative of the Language?
[N/A]
Considerations for Using the Data
PII Risks and Liability
Potential PII Risk
[N/A]
Licenses
Copyright Restrictions on the Dataset
research use only
Copyright Restrictions on the Language Data
public domain
Known Technical Limitations
Technical Limitations
We noticed that human results are obtained via cross-estimation between the two annotated answers and are thus underestimated. One possibility for future work is to conduct a large-scale human annotation to collect more answers per question and then leverage the massively annotated answers to better establish a human performance evaluation.
Unsuited Applications
The QA-pairs annotated by education experts target an audience of children from kindergarten to eighth grade, so the difficulty of the QA-pairs is not suitable for comparison with existing datasets sourced from knowledge graphs or knowledge bases such as Wikipedia.
Discouraged Use Cases
[N/A]