---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- en
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|s2orc
task_categories:
- question-answering
task_ids:
- closed-domain-qa
paperswithcode_id: qasper
pretty_name: QASPER
language_bcp47:
- en-US
dataset_info:
  config_name: qasper
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: abstract
    dtype: string
  - name: full_text
    sequence:
    - name: section_name
      dtype: string
    - name: paragraphs
      list: string
  - name: qas
    sequence:
    - name: question
      dtype: string
    - name: question_id
      dtype: string
    - name: nlp_background
      dtype: string
    - name: topic_background
      dtype: string
    - name: paper_read
      dtype: string
    - name: search_query
      dtype: string
    - name: question_writer
      dtype: string
    - name: answers
      sequence:
      - name: answer
        struct:
        - name: unanswerable
          dtype: bool
        - name: extractive_spans
          sequence: string
        - name: yes_no
          dtype: bool
        - name: free_form_answer
          dtype: string
        - name: evidence
          sequence: string
        - name: highlighted_evidence
          sequence: string
      - name: annotation_id
        dtype: string
      - name: worker_id
        dtype: string
  - name: figures_and_tables
    sequence:
    - name: caption
      dtype: string
    - name: file
      dtype: string
  splits:
  - name: train
    num_bytes: 28466446
    num_examples: 888
  - name: validation
    num_bytes: 9900193
    num_examples: 281
  - name: test
    num_bytes: 15488891
    num_examples: 416
  download_size: 26199265
  dataset_size: 53855530
configs:
- config_name: qasper
  data_files:
  - split: train
    path: qasper/train-*
  - split: validation
    path: qasper/validation-*
  - split: test
    path: qasper/test-*
---
# Dataset Card for Qasper
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [https://allenai.org/data/qasper](https://allenai.org/data/qasper)
- **Demo:** [https://qasper-demo.apps.allenai.org/](https://qasper-demo.apps.allenai.org/)
- **Paper:** [https://arxiv.org/abs/2105.03011](https://arxiv.org/abs/2105.03011)
- **Blogpost:** [https://medium.com/ai2-blog/question-answering-on-scientific-research-papers-f6d6da9fd55c](https://medium.com/ai2-blog/question-answering-on-scientific-research-papers-f6d6da9fd55c)
- **Leaderboards:** [https://paperswithcode.com/dataset/qasper](https://paperswithcode.com/dataset/qasper)
### Dataset Summary
QASPER is a dataset for question answering on scientific research papers. It consists of 5,049 questions over 1,585 Natural Language Processing papers. Each question is written by an NLP practitioner who read only the title and abstract of the corresponding paper, and the question seeks information present in the full text. The questions are then answered by a separate set of NLP practitioners, who also provide supporting evidence for their answers.
### Supported Tasks and Leaderboards
- `question-answering`: The dataset can be used to train a model for Question Answering. Success on this task is typically measured by achieving a *high* [F1 score](https://huggingface.co/metrics/f1); see the token-F1 sketch after this list. The [official baseline model](https://github.com/allenai/qasper-led-baseline) currently achieves a 33.63 token F1 score and uses [Longformer](https://huggingface.co/transformers/model_doc/longformer.html). This task has an active leaderboard, which can be found [here](https://paperswithcode.com/sota/question-answering-on-qasper).
- `evidence-selection`: The dataset can be used to train a model for Evidence Selection. Success on this task is typically measured by achieving a *high* [F1 score](https://huggingface.co/metrics/f1). The [official baseline model](https://github.com/allenai/qasper-led-baseline) currently achieves a 39.37 F1 score and uses [Longformer](https://huggingface.co/transformers/model_doc/longformer.html). This task has an active leaderboard, which can be found [here](https://paperswithcode.com/sota/evidence-selection-on-qasper).
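For reference, answer overlap on this dataset is scored with a SQuAD-style token-level F1. The official scorer lives in the baseline repository linked above; the following is a minimal illustrative sketch, not the official implementation (which additionally normalizes casing, punctuation, and articles before comparing tokens):
```
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """SQuAD-style token-level F1 between a predicted and a gold answer string."""
    pred_tokens = prediction.split()
    gold_tokens = gold.split()
    # Two empty answers match exactly; a single empty answer scores zero.
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```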
### Languages
English, as it is used in research papers.
## Dataset Structure
### Data Instances
A typical instance in the dataset:
```
{
 'id': "Paper ID (string)",
 'title': "Paper Title",
 'abstract': "paper abstract ...",
 'full_text': {
    'paragraphs': [["section1_paragraph1_text", "section1_paragraph2_text", ...],
                   ["section2_paragraph1_text", "section2_paragraph2_text", ...]],
    'section_name': ["section1_title", "section2_title", ...]},
 'qas': {
    'answers': [{
       'annotation_id': ["q1_answer1_annotation_id", "q1_answer2_annotation_id"],
       'answer': [{
          'unanswerable': False,
          'extractive_spans': ["q1_answer1_extractive_span1", "q1_answer1_extractive_span2"],
          'yes_no': False,
          'free_form_answer': "q1_answer1",
          'evidence': ["q1_answer1_evidence1", "q1_answer1_evidence2", ...],
          'highlighted_evidence': ["q1_answer1_highlighted_evidence1", "q1_answer1_highlighted_evidence2", ...]
       },
       {
          'unanswerable': False,
          'extractive_spans': ["q1_answer2_extractive_span1", "q1_answer2_extractive_span2"],
          'yes_no': False,
          'free_form_answer': "q1_answer2",
          'evidence': ["q1_answer2_evidence1", "q1_answer2_evidence2", ...],
          'highlighted_evidence': ["q1_answer2_highlighted_evidence1", "q1_answer2_highlighted_evidence2", ...]
       }],
       'worker_id': ["q1_answer1_worker_id", "q1_answer2_worker_id"]
    },
    {... "question2's answers" ...},
    {... "question3's answers" ...}],
    'question': ["question1", "question2", "question3", ...],
    'question_id': ["question1_id", "question2_id", "question3_id", ...],
    'question_writer': ["question1_writer_id", "question2_writer_id", "question3_writer_id", ...],
    'nlp_background': ["question1_writer_nlp_background", "question2_writer_nlp_background", ...],
    'topic_background': ["question1_writer_topic_background", "question2_writer_topic_background", ...],
    'paper_read': ["question1_writer_paper_read_status", "question2_writer_paper_read_status", ...],
    'search_query': ["question1_search_query", "question2_search_query", "question3_search_query", ...]
 }
}
```
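A minimal loading sketch using the 🤗 `datasets` library (the hub id `allenai/qasper` is an assumption here; substitute the id under which you access the dataset):
```
from datasets import load_dataset

# Hub id assumed to be "allenai/qasper"; adjust if you use a different copy.
dataset = load_dataset("allenai/qasper")

paper = dataset["train"][0]
print(paper["title"])
print(paper["qas"]["question"][:3])  # the first three questions asked about this paper
```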
### Data Fields
The following is an excerpt from the dataset README:
Within "qas", some fields are self-explanatory; the others are explained below:
#### Fields specific to questions:
- "nlp_background" shows the experience the question writer had. The values can be "zero" (no experience), "two" (0 - 2 years of experience), "five" (2 - 5 years of experience), and "infinity" (> 5 years of experience). The field may be empty as well, indicating the writer has chosen not to share this information.
- "topic_background" shows how familiar the question writer was with the topic of the paper. The values are "unfamiliar", "familiar", "research" (meaning that the topic is the research area of the writer), or null.
- "paper_read", when specified shows whether the questionwriter has read the paper.
- "search_query", if not empty, is the query the question writer used to find the abstract of the paper from a large pool of abstracts we made available to them.
#### Fields specific to answers
Unanswerable answers have "unanswerable" set to true. The remaining answers have exactly one of the following fields non-empty (a sketch that applies these rules follows below).
- "extractive_spans" are spans in the paper which serve as the answer.
- "free_form_answer" is a written out answer.
- "yes_no" is true iff the answer is Yes, and false iff the answer is No.
"evidence" is the set of paragraphs, figures or tables used to arrive at the answer. Tables or figures start with the string "FLOAT SELECTED"
"highlighted_evidence" is the set of sentences the answer providers selected as evidence if they chose textual evidence. The text in the "evidence" field is a mapping from these sentences to the paragraph level. That is, if you see textual evidence in the "evidence" field, it is guaranteed to be entire paragraphs, while that is not the case with "highlighted_evidence".
### Data Splits
|                     | Train | Validation |
| ------------------- | ----- | ---------- |
| Number of papers    | 888   | 281        |
| Number of questions | 2593  | 1005       |
| Number of answers   | 2675  | 1764       |

The test split contains the remaining 416 papers and 1,451 questions.
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
NLP papers: The full text of the papers is extracted from [S2ORC](https://huggingface.co/datasets/s2orc) (Lo et al., 2020).
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
[More Information Needed]
#### Annotation process
[More Information Needed]
#### Who are the annotators?
"The annotators are NLP practitioners, not
expert researchers, and it is likely that an expert
would score higher"
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
Crowdsourced NLP practitioners
### Licensing Information
[CC BY 4.0](https://creativecommons.org/licenses/by/4.0)
### Citation Information
```
@inproceedings{Dasigi2021ADO,
  title={A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers},
  author={Pradeep Dasigi and Kyle Lo and Iz Beltagy and Arman Cohan and Noah A. Smith and Matt Gardner},
  booktitle={Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT)},
  year={2021}
}
```
### Contributions
Thanks to [@cceyda](https://github.com/cceyda) for adding this dataset.