---
annotations_creators:
- no-annotation
language_creators:
- found
languages:
- nb,no,nn
licenses:
- CC-ZERO
multilinguality:
- monolingual
pretty_name: NPSC
size_categories:
- 2G<n<1B
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- speech-modeling

---
# Dataset Card for NbAiLab/NPSC


## Table of Contents
- [Dataset Description](#dataset-description)
- [How to Use](#how-to-use)
- [Download Data](#download-data)
- [Dataset Summary](#dataset-summary)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Statistics](#statistics)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Discussion of Biases](#discussion-of-biases)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)

## Dataset Description
- **Homepage:** https://www.nb.no/sprakbanken/
- **Repository:** https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/
- **Paper:** https://www.nb.no/sprakbanken/
- **Point of Contact:** [Per Erik Solberg](mailto:per.solberg@nb.no)

The Norwegian Parliament Speech Corpus (NPSC) is a corpus for training Norwegian ASR (Automatic Speech Recognition) models.

## How to Use
```python
from datasets import load_dataset
data = load_dataset("NbAiLab/NPSC", streaming=True)
```
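The returned object is a dictionary of streaming splits. Below is a minimal sketch of iterating over a few streamed examples; it assumes the default configuration and that the split is named `train`:

```python
from datasets import load_dataset

# Stream the corpus without downloading it in full
# (assumes the default configuration and a split named "train")
data = load_dataset("NbAiLab/NPSC", streaming=True)

for i, example in enumerate(data["train"]):
    print(example["sentence_id"], example["text"])
    if i == 4:  # look at the first five sentences only
        break
```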
## Download Data
If you do not want to use the Hugging Face Datasets library for training, or if you want to do additional pre-processing, it is also possible to download the files locally.
```bash
# Clone the training set
git clone https://huggingface.co/datasets/NbAiLab/NPSC

# Create one large training file of all shards without unpacking
cat NPSC/data/train*.gz > onefile.json.gz
```
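Each line in the concatenated file is a single JSON object. A minimal sketch for reading it with the standard library, assuming the `onefile.json.gz` produced by the command above:

```python
import gzip
import json

# Iterate over the gzipped JSON-lines file produced by the cat command above
with gzip.open("onefile.json.gz", "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record["speaker_name"], record["sentence_text"])
        break  # remove this to process the whole file
```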

<details>
<summary>List of all the files.</summary>


* [eval](https://huggingface.co/datasets/NbAiLab/NPSC/resolve/main/data/eval.json.gz)
* [test](https://huggingface.co/datasets/NbAiLab/NPSC/resolve/main/data/test.json.gz)
* [train](https://huggingface.co/datasets/NbAiLab/NPSC/resolve/main/data/train.json.gz)


</details>

### Dataset Summary
The NPSC dataset contains JSON lines with the training data. Here is an example JSON line:
```json
{
  "sentence_id": 49853,
  "sentence_order": 0,
  "speaker_id": 32,
  "speaker_name": "Olemic Thommessen",
  "sentence_text": "Stortingets møte er lovlig satt",
  "sentence_language_code": "nb-NO",
  "text": "Stortingets møte er lovlig satt",
  "start_time": 320246,
  "end_time": 323590,
  "normsentence_text": "Stortingets møte er lovlig satt",
  "transsentence_text": "Stortingets møte er lovleg sett",
  "translated": 1,
  "audio": {
    "path": "audio/20170110-095504_320246_323590.wav",
    "array": [...]
  }
}
```
## Data Fields
| Field | Description |
|:-----------|:------------|
| **sentence_id** | Integer id of the sentence (unique identifier) |
| **sentence_order** | Integer giving the order of the sentence |
| **speaker_id** | Integer id of the speaker |
| **speaker_name** | String name of the speaker |
| **sentence_text** | String sentence text |
| **sentence_language_code** | String language code of the sentence (e.g. `nb-NO`) |
| **text** | String sentence text |
| **start_time** | Integer start time of the sentence in the recording |
| **end_time** | Integer end time of the sentence in the recording |
| **normsentence_text** | String normalised sentence text |
| **transsentence_text** | String translated sentence text |
| **translated** | Integer flag indicating whether the sentence has a translation |
| **audio** | Audio object with `path`, `array` and `sampling_rate` (48000) |
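To illustrate how these fields fit together, here is a hedged sketch that pulls one streamed example and compares the duration of the decoded audio with the duration implied by the timestamps; it assumes `start_time` and `end_time` are given in milliseconds, which matches the example record above:

```python
from datasets import load_dataset

data = load_dataset("NbAiLab/NPSC", streaming=True)
example = next(iter(data["train"]))

# The audio field is decoded to a dict with "path", "array" and "sampling_rate"
audio = example["audio"]
seconds_from_audio = len(audio["array"]) / audio["sampling_rate"]

# Assumption: start_time/end_time are in milliseconds, as the sample record suggests
seconds_from_timestamps = (example["end_time"] - example["start_time"]) / 1000

print(f"decoded audio: {seconds_from_audio:.2f} s, timestamps: {seconds_from_timestamps:.2f} s")
```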



### Dataset Creation
We provide a **train** and a **validation** split. The validation split is a single 1GB file, while the train split is sharded into 1GB chunks. All files are gzipped.

Build date: 22.01.2022

#### Initial Data Collection and Curation
The procedure for the dataset creation is described in detail in our paper.


## Statistics
|   Feature | Value       |
|:---------|-----------:|
| Duration, pauses included     | 140.3 hours |
| Duration, pauses not included     | 125.7 hours |
| Word count     | 1.2 million |
| Sentence count     | 64,531 |
| Language distribution     | Nynorsk: 12.8% |
|      | Bokmål: 87.2% |
| Gender distribution     | Female: 38.3% |
|      | Male: 61.7% |

## Considerations for Using the Data
This corpus contains speech data and may be used outside the National Library of Norway for speech recognition technology purposes.

### Discussion of Biases
Please refer to our paper.

### Dataset Curators
[Freddy Wetjen](mailto:Freddy.wetjen@nb.no) and [Andre Kaasen](mailto:andre.kasen@nb.no)

### Licensing Information
Licensed for use outside the National Library of Norway under [CC0 1.0 (CC-ZERO)](https://creativecommons.org/publicdomain/zero/1.0/).

### Citation Information
We are preparing an article with detailed information about this corpus. Until it is published, please cite our paper discussing the first version of this corpus:
```
@inproceedings{kummervold-etal-2021-operationalizing,
    title = "Operationalizing a National Digital Library: The Case for a {N}orwegian Transformer Model",
    author = "Kummervold, Per E  and
      De la Rosa, Javier  and
      Wetjen, Freddy  and
      Brygfjeld, Svein Arne",
    booktitle = "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)",
    year = "2021",
    address = "Reykjavik, Iceland (Online)",
    publisher = {Link{\"o}ping University Electronic Press, Sweden},
    url = "https://aclanthology.org/2021.nodalida-main.3",
    pages = "20--29",
    abstract = "In this work, we show the process of building a large-scale training set from digital and digitized collections at a national library. The resulting Bidirectional Encoder Representations from Transformers (BERT)-based language model for Norwegian outperforms multilingual BERT (mBERT) models in several token and sequence classification tasks for both Norwegian Bokm{\aa}l and Norwegian Nynorsk. Our model also improves the mBERT performance for other languages present in the corpus such as English, Swedish, and Danish. For languages not included in the corpus, the weights degrade moderately while keeping strong multilingual properties. Therefore, we show that building high-quality models within a memory institution using somewhat noisy optical character recognition (OCR) content is feasible, and we hope to pave the way for other memory institutions to follow.",
}
```