---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---
# Scientific Emotional Dialogue
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a dataset for question answering on scientific research papers. It consists of 21,297 question-answer-evidence triples.
### Supported Tasks and Leaderboards
- question-answering: The dataset can be used to train a model for Scientific Question Answering. Success on this task is typically measured by achieving a high F1 score.
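The F1 score mentioned above is typically computed at the token level between a predicted and a reference answer. A minimal sketch of such a metric (this is an illustration, not the dataset's official scorer):

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted and a reference answer."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both strings.
    common = Counter(pred_tokens) & Counter(ref_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

Identical answers score 1.0, disjoint answers 0.0, and partial overlaps fall in between, which makes the metric tolerant of minor rephrasings.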
### Languages
English
## Dataset Structure
### Data Instances
A typical instance in the dataset:
```json
{
  "question": "What aim do the authors have by improving Wiki(GOLD) results?",
  "answer": "The aim is not to tune their model specifically on this class hierarchy. They instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performances to any fine-grained dataset.",
  "evidence": "The results for each class type are shown in Table TABREF19 , with some specific examples shown in Figure FIGREF18 . For the Wiki(gold) we quote the micro-averaged F-1 scores for the entire top level entity category. The total F-1 score on the OntoNotes dataset is 88%, and the total F-1 cross-validation score on the 112 class Wiki(gold) dataset is 53%. It is worth noting that one could improve Wiki(gold) results by training directly using this dataset. However, the aim is not to tune our model specifically on this class hierarchy. We instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performances to any fine-grained dataset. The results in Table TABREF19 (OntoNotes) only show the main 7 categories in OntoNotes which map to Wiki(gold) for clarity. The other categories (date, time, norp, language, ordinal, cardinal, quantity, percent, money, law) have F-1 scores between 80-90%, with the exception of time (65%)\nIt is worth noting that one could improve Wiki(GOLD) results by training directly using this dataset. However, the aim is not to tune our model specifically on this class hierarchy. We instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performances to any fine-grained dataset.",
  "yes_no": false
}
```
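Since each instance is a JSON object with the same four fields, a single record can be parsed with the standard library. A minimal sketch, with the field values abbreviated from the example above:

```python
import json

# One record, abbreviated from the instance shown above.
raw = '''{
  "question": "What aim do the authors have by improving Wiki(GOLD) results?",
  "answer": "The aim is not to tune their model specifically on this class hierarchy.",
  "evidence": "It is worth noting that one could improve Wiki(gold) results by training directly using this dataset.",
  "yes_no": false
}'''

record = json.loads(raw)

# Every instance carries the same four fields; yes_no flags boolean questions.
assert set(record) == {"question", "answer", "evidence", "yes_no"}
print(record["question"])
```

For extractive or abstractive QA training, `question` plus `evidence` form the model input and `answer` the target; `yes_no` distinguishes boolean questions from free-form ones.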