liljao committed on
Commit
ea0acfa
1 Parent(s): c900d71

Update README.md

Files changed (1)
  1. README.md +27 -20
README.md CHANGED
@@ -47,31 +47,14 @@ NorQuAD is the first Norwegian question answering dataset for machine reading co
 ### Dataset Description
 
 <!-- Provide a longer summary of what this dataset is. -->
-The dataset was created in three stages: (i) selecting text passages, (ii) collecting question-answer
-pairs for those passages, and (iii) human validation of (a subset of) created question-answer pairs.
-
-#### Text selection
-
-Data was selected from openly available sources from Wikipedia and News data.
-
-
-#### Question-Answer Pairs
+The dataset provides Norwegian question-answer pairs taken from two data sources: Wikipedia and news.
 
 
 
 
-#### Human validation
-
-In a separate stage, the annotators validated a subset of the NorQuAD dataset. In this phase each
-annotator replied to the questions created by the
-other annotator. We chose the question-answer
-pairs for validation at random. In total, 1378 questions from the set of question-answer pairs, were
-answered by validators.
-
-
-- **Curated by:** Annotators (see above).
+- **Curated by:** Human annotators (see above).
 - **Funded by [optional]:** The UiO Teksthub initiative
-- **Shared by [optional]:** [More Information Needed]
+- **Shared by [optional]:** The Language Technology Group, University of Oslo
 - **Language(s) (NLP):** Norwegian Bokmål
 - **License:** CC0-1.0
 
@@ -150,9 +133,33 @@ total of 4,752 question-answer pairs.
 
 <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
 
+The dataset was created in three stages: (i) selecting text passages, (ii) collecting question-answer
+pairs for those passages, and (iii) human validation of (a subset of) created question-answer pairs.
+
+#### Text selection
+
+Data was selected from openly available Wikipedia and news sources, as described above.
+
+
+#### Question-Answer Pairs
+
+The annotators were provided with a set of initial instructions, largely based on those for similar datasets, in particular the English SQuAD
+dataset (Rajpurkar et al., 2016) and the GermanQuAD data (Möller et al., 2021). These instructions were subsequently refined following regular
+meetings with the annotation team.
 The annotation guidelines provided to the annotators are available [here](https://github.com/ltgoslo/NorQuAD/blob/main/guidelines.md).
 For annotation, we used the Haystack annotation tool, which was designed for QA collection.
 
+
+#### Human validation
+
+In a separate stage, the annotators validated a subset of the NorQuAD dataset. In this phase, each
+annotator answered the questions created by the
+other annotator. The question-answer
+pairs for validation were chosen at random. In total, 1,378 questions from the set of question-answer pairs
+were answered by validators.
+
+
+
 [More Information Needed]
 
 #### Who are the annotators?
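
The human-validation stage added in this commit — drawing a random subset of 1,378 of the 4,752 question-answer pairs and handing each annotator the other annotator's questions — can be sketched as follows. This is an illustrative sketch only: the function name `sample_for_validation`, the record fields, and the seeded RNG are assumptions (the fields follow the SQuAD convention the README cites, not a documented NorQuAD schema).

```python
import random

def sample_for_validation(qa_pairs, n_validation, seed=0):
    """Pick a random subset of QA pairs for cross-annotator validation.

    Hypothetical helper; a fixed seed is used here only so the sketch
    is reproducible, the README does not describe the actual sampling code.
    """
    rng = random.Random(seed)
    return rng.sample(qa_pairs, n_validation)

# Toy records standing in for the 4,752 NorQuAD pairs (field names assumed,
# following the SQuAD-style layout referenced in the annotation guidelines).
qa_pairs = [
    {"id": i, "question": f"q{i}", "answers": {"text": [f"a{i}"], "answer_start": [0]}}
    for i in range(4752)
]

subset = sample_for_validation(qa_pairs, 1378)
print(len(subset))  # 1378
```

Sampling without replacement (`random.sample`) matches the README's description that pairs were "chosen at random"; each selected question would then be answered by the annotator who did not write it.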