---
annotations_creators:
- no-annotation
language_creators:
- found
languages:
- nb
- "no"
- nn
licenses:
- CC-ZERO
multilinguality:
- monolingual
pretty_name: NPSC
size_categories:
- 2G<n<1B
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- speech-modeling
---

# Dataset Card for NbAiLab/NPSC

## Dataset Description

The Norwegian Parliament Speech Corpus (NPSC) is a corpus for training Norwegian ASR (Automatic Speech Recognition) models.

## How to Use

```python
from datasets import load_dataset
data = load_dataset("NbAiLab/NPSC", streaming=True)
```
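
To inspect an example, iterate over the streaming dataset. A minimal sketch, using the field names from the example record shown further down:

```python
from datasets import load_dataset

# Stream the corpus instead of downloading it in full.
data = load_dataset("NbAiLab/NPSC", streaming=True)

# Fetch and print the first training example.
sample = next(iter(data["train"]))
print(sample["sentence_text"])
```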

## Download Data

If you do not want to use the Hugging Face Datasets library for training, or if you want to do additional pre-processing, it is also possible to download the files locally.

```bash
# Clone the dataset repository
git clone https://huggingface.co/datasets/NbAiLab/NPSC

# Create one large training file of all shards without unpacking
cat NPSC/data/train*.gz > onefile.json.gz
```

A list of all the files is available in the `data` directory of the repository.
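
The combined file can then be read with Python's standard library. A minimal sketch, assuming `onefile.json.gz` was created as above (the `gzip` module reads concatenated gzip members transparently):

```python
import gzip
import json

# Iterate over the gzipped JSON lines without unpacking to disk.
with gzip.open("onefile.json.gz", "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record["sentence_id"], record["text"])
        break  # remove this to process the whole file
```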

## Dataset Summary

The NPSC dataset contains JSON lines with language training data. Here is an example JSON line:

```json
{
  "sentence_id": 49853,
  "sentence_order": 0,
  "speaker_id": 32,
  "speaker_name": "Olemic Thommessen",
  "sentence_text": "Stortingets møte er lovlig satt",
  "sentence_language_code": "nb-NO",
  "text": "Stortingets møte er lovlig satt",
  "start_time": 320246,
  "end_time": 323590,
  "normsentence_text": "Stortingets møte er lovlig satt",
  "transsentence_text": "Stortingets møte er lovleg sett",
  "translated": 1,
  "audio": {
    "path": "audio/20170110-095504_320246_323590.wav",
    "array": [.......]
  }
}
```

## Data Fields

- `sentence_id`: Integer with a unique identifier of the sentence
- `sentence_order`: Integer giving the order of the sentence within the recording
- `speaker_id`: Integer id of the speaker
- `speaker_name`: String with the name of the speaker
- `sentence_text`: String with the sentence text
- `sentence_language_code`: String with the language code of the sentence
- `text`: String with the sentence text
- `start_time`: Integer start time of the segment
- `end_time`: Integer end time of the segment
- `normsentence_text`: String with the normalised sentence text
- `transsentence_text`: String with the translated sentence text
- `translated`: Integer indicating whether the sentence text is translated
- `audio`: Audio record with `path`, `array`, and `sampling_rate` (48000)
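
As an illustration of working with these fields, here is a minimal sketch that derives the clip duration from `start_time` and `end_time`; the millisecond interpretation of those fields is an assumption based on the example record above:

```python
from datasets import load_dataset

data = load_dataset("NbAiLab/NPSC", streaming=True)
example = next(iter(data["train"]))

# Assumption: start_time/end_time are in milliseconds.
duration_s = (example["end_time"] - example["start_time"]) / 1000
print(example["speaker_name"], example["sentence_language_code"], f"{duration_s:.2f}s")
```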

## Dataset Creation

We provide a train and a validation split. The validation split is a single 1 GB file, while the train split is sharded into 1 GB chunks. All files are gzipped.

Build date: 22012022
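
If you only need one of the splits, it can be requested directly; a minimal sketch:

```python
from datasets import load_dataset

# Load only the validation split in streaming mode.
validation = load_dataset("NbAiLab/NPSC", split="validation", streaming=True)
```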

### Initial Data Collection and Curation

The procedure for the dataset creation is described in detail in our paper.

## Statistics

| Feature                       | Value                         |
|-------------------------------|-------------------------------|
| Duration, pauses included     | 140.3 hours                   |
| Duration, pauses not included | 125.7 hours                   |
| Word count                    | 1.2 million                   |
| Sentence count                | 64,531                        |
| Language distribution         | Nynorsk: 12.8%; Bokmål: 87.2% |
| Gender distribution           | Female: 38.3%; Male: 61.7%    |

## Considerations for Using the Data

This corpus contains speech data and may be used outside the National Library of Norway for speech recognition technology purposes.

### Discussion of Biases

Please refer to our paper.

### Dataset Curators

Freddy Wetjen and Andre Kaasen

## Licensing Information

Licensed for use outside the National Library of Norway.

### License

[CC-ZERO](https://creativecommons.org/publicdomain/zero/1.0/)

## Citation Information

We are preparing an article with detailed information about this corpus. Until it is published, please cite our paper discussing the first version of this corpus:

```bibtex
@inproceedings{kummervold-etal-2021-operationalizing,
    title = "Operationalizing a National Digital Library: The Case for a {N}orwegian Transformer Model",
    author = "Kummervold, Per E  and
      De la Rosa, Javier  and
      Wetjen, Freddy  and
      Brygfjeld, Svein Arne",
    booktitle = "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)",
    year = "2021",
    address = "Reykjavik, Iceland (Online)",
    publisher = "Link{\"o}ping University Electronic Press, Sweden",
    url = "https://aclanthology.org/2021.nodalida-main.3",
    pages = "20--29",
    abstract = "In this work, we show the process of building a large-scale training set from digital and digitized collections at a national library. The resulting Bidirectional Encoder Representations from Transformers (BERT)-based language model for Norwegian outperforms multilingual BERT (mBERT) models in several token and sequence classification tasks for both Norwegian Bokm{\aa}l and Norwegian Nynorsk. Our model also improves the mBERT performance for other languages present in the corpus such as English, Swedish, and Danish. For languages not included in the corpus, the weights degrade moderately while keeping strong multilingual properties. Therefore, we show that building high-quality models within a memory institution using somewhat noisy optical character recognition (OCR) content is feasible, and we hope to pave the way for other memory institutions to follow.",
}
```