Commit a73121e by vmkhlv (verified; parent f4a5570): Update README.md
configs:
  data_files:
  - split: train
    path: nn/train-*
license: mit
task_categories:
- question-answering
language:
- nb
- nn
pretty_name: NorCommonSenseQA
size_categories:
- n<1K
---

# Dataset Card for NorCommonSenseQA

## Dataset Details

### Dataset Description

NorCommonSenseQA is a multiple-choice question answering (QA) dataset designed for zero-shot evaluation of language models' commonsense reasoning abilities. It contains 1,093 examples in both written standards of Norwegian: Bokmål and Nynorsk (the minority variant). Each example consists of a question and five answer choices.

NorCommonSenseQA is part of a collection of Norwegian QA datasets, which also includes [NRK-Quiz-QA](https://huggingface.co/datasets/ltg/nrk_quiz_qa), [NorOpenBookQA](https://huggingface.co/datasets/ltg/noropenbookqa), [NorTruthfulQA (Multiple Choice)](https://huggingface.co/datasets/ltg/nortruthfulqa_mc), and [NorTruthfulQA (Generation)](https://huggingface.co/datasets/ltg/nortruthfulqa_gen). We describe our high-level dataset creation approach here and provide more details, general statistics, and model evaluation results in our paper.

- **Curated by:** The [Language Technology Group](https://www.mn.uio.no/ifi/english/research/groups/ltg/) (LTG) at the University of Oslo
- **Language:** Norwegian (Bokmål and Nynorsk)
- **Repository:** [github.com/ltgoslo/norqa](https://github.com/ltgoslo/norqa)
- **Paper:** [arxiv.org/abs/2501.11128](https://arxiv.org/abs/2501.11128) (to be presented at NoDaLiDa/Baltic-HLT 2025)
- **License:** MIT

### Uses

NorCommonSenseQA is intended to be used for zero-shot evaluation of language models for Norwegian.
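
In practice, the zero-shot setup amounts to prompting a model with a question and its five labelled choices and comparing the predicted label against the gold `answer`. As a minimal sketch (the prompt template and function name below are our own illustrative choices, not an official evaluation protocol), a prompt can be built like this:

```python
# Illustrative sketch: build a zero-shot multiple-choice prompt from one
# NorCommonSenseQA-style example. The template and names are assumptions,
# not part of the dataset or its official evaluation setup.

def build_prompt(example: dict) -> str:
    """Format the question and its labelled choices as a single prompt string."""
    lines = [example["question"]]
    for label, text in zip(example["choices"]["label"], example["choices"]["text"]):
        lines.append(f"{label}. {text}")
    lines.append("Svar:")  # Norwegian for "Answer:"
    return "\n".join(lines)

# A Bokmål instance in the dataset's format (see Dataset Structure below).
example = {
    "question": "Hvor plasserer man et pizzastykke før man spiser det?",
    "choices": {
        "label": ["A", "B", "C", "D", "E"],
        "text": ["På bordet", "På en tallerken", "På en restaurant",
                 "I ovnen", "Populær"],
    },
    "answer": "B",
}

print(build_prompt(example))
```

A model's predicted label can then be checked against the `answer` field to compute accuracy.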

## Dataset Creation

NorCommonSenseQA was created by adapting the [CommonSenseQA](https://huggingface.co/datasets/tau/commonsense_qa) dataset for English via a two-stage annotation process. Our annotation team consisted of 21 BA/BSc and MA/MSc students in linguistics and computer science, all native Norwegian speakers. The team was divided into two groups: 19 annotators focused on Bokmål, while two annotators worked on Nynorsk.

<details>
<summary><b>Stage 1: Human annotation and translation</b></summary>
The annotation task at this stage involves adapting the English examples from CommonSenseQA using two strategies.

1. **Manual translation and localization**: The annotators manually translate the original examples, localizing them to reflect Norwegian contexts where necessary.
2. **Creative adaptation**: The annotators create new examples in Bokmål and Nynorsk from scratch, drawing inspiration from the English examples shown to them.
</details>

<details>
<summary><b>Stage 2: Data curation</b></summary>
This stage aims to filter out low-quality examples collected during the first stage. Due to resource constraints, we curated 91% of the examples (998 out of 1,093), with each example validated by a single annotator. Each annotator receives pairs of original and translated/localized examples, or newly created examples, for review. The annotation task here involves two main steps.

1. **Quality judgment**: The annotators judge the overall quality of an example and flag any example that is of low quality or requires substantial revision. Such examples are excluded from our datasets.
2. **Quality control**: The annotators check the spelling, grammar, and natural flow of an example, making minor edits if needed.
</details>

#### Personal and Sensitive Information

The dataset does not contain information considered personal or sensitive.

## Dataset Structure

### Dataset Instances

Each dataset instance looks as follows:

#### Bokmål

```
{
    'id': 'ec882fc3a9bfaeae2a26fe31c2ef2c07',
    'question': 'Hvor plasserer man et pizzastykke før man spiser det?',
    'choices': {
        'label': ['A', 'B', 'C', 'D', 'E'],
        'text': [
            'På bordet',
            'På en tallerken',
            'På en restaurant',
            'I ovnen',
            'Populær'
        ]
    },
    'answer': 'B',
    'curated': True
}
```

#### Nynorsk

```
{
    'id': 'd35a8a3bd560fdd651ecf314878ed30f-78',
    'question': 'Viss du skulle ha steikt noko kjøt, kva ville du ha sett kjøtet på?',
    'choices': {
        'label': ['A', 'B', 'C', 'D', 'E'],
        'text': [
            'Olje',
            'Ein frysar',
            'Eit skinkesmørbrød',
            'Ein omn',
            'Ei steikepanne'
        ]
    },
    'answer': 'E',
    'curated': False
}
```

### Dataset Fields

`id`: an example id \
`question`: a question \
`choices`: answer choices (`label`: a list of labels; `text`: a list of possible answers) \
`answer`: the correct answer from the list of labels (A/B/C/D/E) \
`curated`: an indicator of whether an example has been curated or not
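
Note that `answer` stores a label rather than the answer string itself, so the answer text has to be recovered by indexing the parallel `label` and `text` lists. A small sketch (the helper name is our own, not part of any loader API):

```python
# Sketch: resolve the `answer` label (e.g. 'B') to the corresponding answer
# text via the parallel `label`/`text` lists in `choices`.
# The helper name is illustrative and not part of the dataset tooling.

def answer_text(example: dict) -> str:
    idx = example["choices"]["label"].index(example["answer"])
    return example["choices"]["text"][idx]

example = {
    "id": "ec882fc3a9bfaeae2a26fe31c2ef2c07",
    "question": "Hvor plasserer man et pizzastykke før man spiser det?",
    "choices": {
        "label": ["A", "B", "C", "D", "E"],
        "text": ["På bordet", "På en tallerken", "På en restaurant",
                 "I ovnen", "Populær"],
    },
    "answer": "B",
    "curated": True,
}

print(answer_text(example))  # prints "På en tallerken"
```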

## Dataset Card Contact

* Vladislav Mikhailov (vladism@ifi.uio.no)
* Lilja Øvrelid (liljao@ifi.uio.no)