---
annotations_creators:
- expert-generated
language_creators:
- found
language:
- no
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
---

# Dataset Card for Norwegian Parliament Speeches (Talk of Norway)

## Table of Contents

- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** N/A
- **Repository:** [GitHub](https://github.com/ltgoslo/NorBERT/)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** -

### Dataset Summary

This is a classification dataset created from a subset of the [Talk of Norway](https://www.nb.no/sprakbanken/ressurskatalog/oai-repo-clarino-uib-no-11509-123/) dataset. It contains speech excerpts from representatives of the political parties Fremskrittspartiet and Sosialistisk Venstreparti (SV), annotated with the speaker's party and a timestamp. The classification task is to predict, from the text alone, whether a speech was given by a representative of Fremskrittspartiet or of SV.
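As a quick-start sketch, the dataset can be loaded with the Hugging Face `datasets` library. The dataset identifier used below (`NbAiLab/norwegian_parliament`) is an assumption; substitute the identifier of this repository if it differs:

```python
# Minimal loading sketch. The dataset identifier is an assumption; replace it
# with this repository's actual identifier if it differs.
from datasets import load_dataset

dataset = load_dataset("NbAiLab/norwegian_parliament")

print(dataset)              # split names and sizes
print(dataset["train"][0])  # one example: text, date, label, full_label
```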

### Supported Tasks and Leaderboards

This dataset is meant for classification. 

The first model tested on this dataset was [NB-BERT-base](https://huggingface.co/NbAiLab/nb-bert-base), which reports an F1-score of [81.9](https://arxiv.org/abs/2104.09617). The dataset was also used for testing the North-T5 models, where the [XXL model](https://huggingface.co/north/t5_xxl_NCC) reports an F1-score of 91.8.

Please note that some of the text fields are quite long. Truncating them, for instance when switching to a tokenizer with a shorter maximum sequence length, will make the task harder.
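A hedged sketch for gauging how much truncation would affect this dataset: the snippet below counts how many training texts exceed a 512-token limit. Both the dataset identifier and the tokenizer checkpoint are assumptions; use the tokenizer of the model you plan to evaluate.

```python
# Count how many speeches would be truncated at a 512-token limit.
# Dataset identifier and tokenizer checkpoint are assumptions.
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("NbAiLab/norwegian_parliament", split="train")
tokenizer = AutoTokenizer.from_pretrained("NbAiLab/nb-bert-base")

lengths = [len(tokenizer.encode(text)) for text in dataset["text"]]
truncated = sum(length > 512 for length in lengths)
print(f"{truncated} of {len(lengths)} training examples exceed 512 tokens")
```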


### Languages

The text in the dataset is in Norwegian.

## Dataset Structure

### Data Fields

- `text`: text of a speech
- `date`: date (`YYYY-MM-DD`) the speech was held
- `label`: integer id of the political party the speaker was associated with at the time (the id-to-name mapping can be checked with the sketch below)
  - 0
  - 1
- `full_label`: name of the political party the speaker was associated with at the time
  - Fremskrittspartiet
  - Sosialistisk Venstreparti
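If the `label` column is stored as a `ClassLabel` feature, the id-to-party mapping can be read directly from the dataset schema; a sketch (dataset identifier assumed as above):

```python
# Inspect the schema; ClassLabel features expose the id -> name mapping.
from datasets import load_dataset

dataset = load_dataset("NbAiLab/norwegian_parliament", split="train")
print(dataset.features)

label_feature = dataset.features["label"]
if hasattr(label_feature, "names"):  # only ClassLabel features have .names
    print(label_feature.names)
```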

### Data Splits

The dataset is split into a `train`, `validation`, and `test` split with the following sizes:

|                            | Train  | Valid | Test  |
| -------------------------- | ------ | ----- | ----- |
| Number of examples         | 3600   | 1200  | 1200  |

The dataset is balanced on political party.
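A quick way to verify the split sizes and the party balance (dataset identifier assumed as above):

```python
# Count examples per party in each split with collections.Counter.
from collections import Counter
from datasets import load_dataset

dataset = load_dataset("NbAiLab/norwegian_parliament")
for split in ("train", "validation", "test"):
    print(split, Counter(dataset[split]["full_label"]))
```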

## Dataset Creation

This dataset is based on publicly available information from the Norwegian Parliament (Storting) and was created by the National Library of Norway AI Lab to benchmark their language models.

## Additional Information
The [Talk of Norway dataset](https://www.nb.no/sprakbanken/ressurskatalog/oai-repo-clarino-uib-no-11509-123/) is also available in the [LTG Talk-of-Norway GitHub repository](https://github.com/ltgoslo/talk-of-norway).

### Licensing Information

This work is licensed under a Creative Commons Attribution 4.0 International License.


### Citation Information
The following article can be cited when referring to this dataset, since it is the first study to use the dataset for evaluating a language model:
```
@inproceedings{kummervold-etal-2021-operationalizing,
    title = "Operationalizing a National Digital Library: The Case for a {N}orwegian Transformer Model",
    author = "Kummervold, Per E  and
      De la Rosa, Javier  and
      Wetjen, Freddy  and
      Brygfjeld, Svein Arne",
    booktitle = "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)",
    year = "2021",
    address = "Reykjavik, Iceland (Online)",
    publisher = "Link{\"o}ping University Electronic Press, Sweden",
    url = "https://aclanthology.org/2021.nodalida-main.3",
    pages = "20--29",
    abstract = "In this work, we show the process of building a large-scale training set from digital and digitized collections at a national library. The resulting Bidirectional Encoder Representations from Transformers (BERT)-based language model for Norwegian outperforms multilingual BERT (mBERT) models in several token and sequence classification tasks for both Norwegian Bokm{\aa}l and Norwegian Nynorsk. Our model also improves the mBERT performance for other languages present in the corpus such as English, Swedish, and Danish. For languages not included in the corpus, the weights degrade moderately while keeping strong multilingual properties. Therefore, we show that building high-quality models within a memory institution using somewhat noisy optical character recognition (OCR) content is feasible, and we hope to pave the way for other memory institutions to follow.",
}
```