---
pretty_name: ScandiQA
language:
- da
- sv
- no
license:
- cc-by-sa-4.0
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- mkqa|natural_questions
task_categories:
- question-answering
task_ids:
- extractive-qa
---

# Dataset Card for ScandiQA

## Dataset Description

- **Repository:** <https://github.com/alexandrainst/scandi-qa>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Size of downloaded dataset files:** 69 MB
- **Size of the generated dataset:** 67 MB
- **Total amount of disk used:** 136 MB

### Dataset Summary

ScandiQA is a dataset of questions and answers in the Danish, Norwegian, and Swedish
languages. All samples come from the Natural Questions (NQ) dataset, which is a large
question answering dataset from Google searches. The Scandinavian questions and
answers come from the MKQA dataset, in which 10,000 NQ samples were manually
translated into, among other languages, Danish, Norwegian, and Swedish. However, MKQA
does not include translated contexts, which hinders the training of extractive
question answering models.

We merged the NQ dataset with the MKQA dataset and extracted contexts either as "long
answers" from the NQ dataset, being the paragraph in which the answer was found, or,
failing that, by locating the paragraph that has the largest cosine similarity to the
question and that also contains the desired answer.
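
A minimal sketch of this fallback extraction, assuming a sentence-transformers
embedding model (the model choice and the helper are illustrative, not the exact
implementation used to build ScandiQA):

```
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def extract_context(question: str, paragraphs: list[str], answer: str) -> str | None:
    """Return the paragraph most similar to the question that contains the answer."""
    # Keep only the paragraphs in which the desired answer actually occurs
    candidates = [p for p in paragraphs if answer.lower() in p.lower()]
    if not candidates:
        return None
    # Embed the question and the candidate paragraphs
    question_embedding = model.encode(question, convert_to_tensor=True)
    paragraph_embeddings = model.encode(candidates, convert_to_tensor=True)
    # Pick the candidate with the largest cosine similarity to the question
    scores = util.cos_sim(question_embedding, paragraph_embeddings)[0]
    return candidates[int(scores.argmax())]
```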

Further, many answers in the MKQA dataset were "language normalised": for instance,
all date answers were converted to the format "YYYY-MM-DD", meaning that in most
cases these answers do not appear verbatim in any paragraph. We solve this by
extending the MKQA answers with plausible "answer candidates", being slight
perturbations or translations of the answer.
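
As an illustration, a normalised date answer could be expanded into the surface forms
that might actually occur in a Danish paragraph (the exact perturbations used for
ScandiQA may differ):

```
from datetime import date

DANISH_MONTHS = [
    "januar", "februar", "marts", "april", "maj", "juni",
    "juli", "august", "september", "oktober", "november", "december",
]

def date_candidates(normalised: str) -> list[str]:
    """Expand a normalised 'YYYY-MM-DD' answer into plausible surface forms."""
    d = date.fromisoformat(normalised)
    month = DANISH_MONTHS[d.month - 1]
    return [
        normalised,                    # '1994-07-08'
        f"{d.day}. {month} {d.year}",  # '8. juli 1994'
        f"{d.day}. {month}",           # '8. juli'
        str(d.year),                   # '1994'
    ]
```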

With the contexts extracted, we translated them into Danish, Swedish and Norwegian,
using the DeepL translation service for Danish and Swedish and the Google Translation
service for Norwegian. After translation we ensured that the Scandinavian answers do
indeed occur in the translated contexts.
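
This check amounts to searching the translated context for any of the answer
candidates and recording the character offset of the match; a minimal sketch (the
helper is hypothetical):

```
def locate_answer(context: str, candidates: list[str]) -> tuple[str, int] | None:
    """Return the answer as it appears in the context, plus its character offset."""
    for candidate in candidates:
        start = context.lower().find(candidate.lower())
        if start != -1:
            return context[start : start + len(candidate)], start
    # The sample is filtered out if no candidate occurs in the context
    return None
```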

As we filter the MKQA samples at both the "merging stage" and the "translation
stage", we are not able to fully convert the 10,000 samples to the Scandinavian
languages, and instead get roughly 8,000 samples per language. These have further
been split into a training, validation and test split, with the latter two containing
roughly 750 samples each. The splits have been created in such a way that the
proportion of samples without an answer is roughly the same in each split.
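
Such a split can be produced by stratifying on answer presence; a sketch using
scikit-learn, where the split sizes and the random seed are illustrative rather than
the values used for ScandiQA:

```
from sklearn.model_selection import train_test_split

def make_splits(samples: list[dict], seed: int = 4242):
    """Split samples so each split has a similar share of unanswerable samples."""
    has_answer = [bool(sample["answer"]) for sample in samples]
    train, rest, _, rest_has_answer = train_test_split(
        samples, has_answer, test_size=1500, stratify=has_answer, random_state=seed
    )
    val, test = train_test_split(
        rest, test_size=750, stratify=rest_has_answer, random_state=seed
    )
    return train, val, test
```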

### Supported Tasks and Leaderboards

Training machine learning models for extractive question answering is the intended
task for this dataset. No leaderboard is active at this point.

### Languages

The dataset is available in Danish (`da`), Swedish (`sv`) and Norwegian (`no`).

## Dataset Structure

### Data Instances

- **Size of downloaded dataset files:** 69 MB
- **Size of the generated dataset:** 67 MB
- **Total amount of disk used:** 136 MB

An example from the `train` split of the `da` subset looks as follows.
```
{
    'example_id': 123,
    'question': 'Er dette en test?',
    'answer': 'Dette er en test',
    'answer_start': 0,
    'context': 'Dette er en testkontekst.',
    'answer_en': 'This is a test',
    'answer_start_en': 0,
    'context_en': 'This is a test',
    'title_en': 'Train test'
}
```
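
The subsets can be loaded with the 🤗 Datasets library; a minimal sketch, assuming
the dataset is hosted on the Hugging Face Hub under the `alexandrainst/scandiqa`
identifier (adjust the name if it differs):

```
from datasets import load_dataset

dataset = load_dataset("alexandrainst/scandiqa", "da")
print(dataset["train"][0]["question"])
```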

### Data Fields

The data fields are the same among all splits.

- `example_id`: an `int64` feature.
- `question`: a `string` feature.
- `answer`: a `string` feature.
- `answer_start`: an `int64` feature.
- `context`: a `string` feature.
- `answer_en`: a `string` feature.
- `answer_start_en`: an `int64` feature.
- `context_en`: a `string` feature.
- `title_en`: a `string` feature.

### Data Splits

| name | train | validation | test |
|------|------:|-----------:|-----:|
| da   |  6311 |        749 |  750 |
| sv   |  6299 |        750 |  749 |
| no   |  6314 |        749 |  750 |

## Dataset Creation

### Curation Rationale

The Scandinavian languages do not have any gold standard question answering dataset.
While ScandiQA is not quite gold standard either, the fact that both the questions
and the answers have been manually translated makes it a solid silver standard
dataset.

### Source Data

The original data was collected from the [MKQA](https://github.com/apple/ml-mkqa/)
and [Natural Questions](https://ai.google.com/research/NaturalQuestions) datasets
from Apple and Google, respectively.

## Additional Information

### Dataset Curators

[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from [the Alexandra
Institute](https://alexandra.dk/) curated this dataset.

### Licensing Information

The dataset is licensed under the [CC BY-SA 4.0
license](https://creativecommons.org/licenses/by-sa/4.0/).