---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- en
size_categories:
- 10K<n<100K
---

# Scientific Emotional Dialogue

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This is a dataset for question answering on scientific research papers. It consists of 21,297 question-answer-evidence pairs.

### Supported Tasks and Leaderboards

- question-answering: The dataset can be used to train a model for Scientific Question Answering. Success on this task is typically measured by achieving a high F1 score.

### Languages

English

## Dataset Structure

### Data Instances

A typical instance in the dataset:

```json
{
    "question": "What aim do the authors have by improving Wiki(GOLD) results?",
    "answer": "The aim is not to tune their model specifically on this class hierarchy. They instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performances to any fine-grained dataset.",
    "evidence": "The results for each class type are shown in Table TABREF19 , with some specific examples shown in Figure FIGREF18 . For the Wiki(gold) we quote the micro-averaged F-1 scores for the entire top level entity category. The total F-1 score on the OntoNotes dataset is 88%, and the total F-1 cross-validation score on the 112 class Wiki(gold) dataset is 53%. It is worth noting that one could improve Wiki(gold) results by training directly using this dataset. However, the aim is not to tune our model specifically on this class hierarchy. We instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performances to any fine-grained dataset. The results in Table TABREF19 (OntoNotes) only show the main 7 categories in OntoNotes which map to Wiki(gold) for clarity. The other categories (date, time, norp, language, ordinal, cardinal, quantity, percent, money, law) have F-1 scores between 80-90%, with the exception of time (65%)\nIt is worth noting that one could improve Wiki(GOLD) results by training directly using this dataset. However, the aim is not to tune our model specifically on this class hierarchy. We instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performances to any fine-grained dataset.",
    "yes_no": false
}
```
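
Each record is a plain JSON object with four fields. A minimal sketch of parsing one record and checking its schema (the field values here are abridged, illustrative placeholders, not a full record from the dataset):

```python
import json

# An abridged instance in the dataset's schema (illustrative values only).
raw = """
{
    "question": "What aim do the authors have by improving Wiki(GOLD) results?",
    "answer": "The aim is not to tune their model specifically on this class hierarchy.",
    "evidence": "It is worth noting that one could improve Wiki(gold) results ...",
    "yes_no": false
}
"""

instance = json.loads(raw)

# Every record carries a free-text question, answer, supporting evidence,
# and a boolean flag marking yes/no questions.
assert set(instance) == {"question", "answer", "evidence", "yes_no"}
assert isinstance(instance["yes_no"], bool)
```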