---
annotations_creators:
- no-annotation
language_creators:
- found
languages:
- nb
- nn
- no
licenses:
- CC-ZERO
multilinguality:
- multilingual
pretty_name: NPSC
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- sequence-modeling
task_ids:
- speech-modeling
---
# Dataset Card for NbAiLab/NPSC
## Table of Contents
- [Dataset Description](#dataset-description)
- [How to Use](#how-to-use)
  - [Dataset Summary](#dataset-summary)
- [Data Fields](#data-fields)
  - [Dataset Creation](#dataset-creation)
    - [Initial Data Collection](#initial-data-collection)
- [Statistics](#statistics)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Dataset Creators and Curators](#dataset-creators-and-curators)
- [License](#license)
  - [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://www.nb.no/sprakbanken/
- **Repository:** https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/
- **Paper:** https://arxiv.org/abs/2201.10881
- **Point of Contact:** [Per Erik Solberg](mailto:per.solberg@nb.no)
The Norwegian Parliament Speech Corpus (NPSC) is a corpus for training Norwegian ASR (Automatic Speech Recognition) models. It contains roughly 140 hours of recordings of meetings from Stortinget, the Norwegian parliament, together with sentence-level orthographic transcriptions in Norwegian Bokmål and Nynorsk. This HuggingFace version contains the same material as the version published by Språkbanken, but is repackaged so that it can be loaded in streaming mode, with each example pairing one transcribed sentence with its corresponding audio segment.
## How to Use
```python
# Loads the 16K Bokmål corpus in streaming mode
from datasets import load_dataset
data = load_dataset("NbAiLab/NPSC", "16K_mp3_bokmaal", streaming=True)
```
### Dataset Summary
Each example in NPSC is a JSON object pairing a transcribed sentence with its audio segment. Here is an example (the `array` field is abbreviated):
```json
{
  "sentence_id": 49853,
  "sentence_order": 0,
  "speaker_id": 32,
  "speaker_name": "Olemic Thommessen",
  "sentence_text": "Stortingets møte er lovlig satt",
  "sentence_language_code": "nb-NO",
  "text": "Stortingets møte er lovlig satt",
  "start_time": 320246,
  "end_time": 323590,
  "normsentence_text": "Stortingets møte er lovlig satt",
  "transsentence_text": "Stortingets møte er lovleg sett",
  "translated": 1,
  "audio": {
    "path": "audio/20170110-095504_320246_323590.wav",
    "array": [.......]
  }
}
```
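The `start_time` and `end_time` fields appear to be millisecond offsets into the source recording (they match the numbers embedded in the audio `path`), so the duration of a clip can be computed directly from them. A minimal sketch, assuming milliseconds:

```python
import json

# One (abbreviated) record from the dataset, as a JSON line
line = '{"sentence_id": 49853, "start_time": 320246, "end_time": 323590}'
record = json.loads(line)

# Clip duration in seconds, assuming start_time/end_time are milliseconds
duration_s = (record["end_time"] - record["start_time"]) / 1000
print(duration_s)  # 3.344
```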
## Data Fields
| Field | Description |
|:-----------|:------------|
| **sentence_id** | Integer. Unique identifier of the sentence |
| **sentence_order** | Integer. Order of the sentence within the recording |
| **speaker_id** | Integer. Id of the speaker |
| **speaker_name** | String. Name of the speaker |
| **sentence_text** | String. The transcribed sentence |
| **sentence_language_code** | String. Language code of the sentence, e.g. `nb-NO` |
| **text** | String. Copy of `sentence_text`, included to make it more convenient to interleave with other datasets |
| **start_time** | Integer. Start time of the segment in the source recording |
| **end_time** | Integer. End time of the segment in the source recording |
| **normsentence_text** | String. Normalised sentence text |
| **transsentence_text** | String. The sentence translated to the other written standard (Bokmål/Nynorsk) |
| **translated** | Integer. Whether the sentence has been translated |
| **audio** | Audio record with `path` (mp3), `array`, and `sampling_rate` (48000) |
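The audio `path` in the example record looks like it is composed of the meeting's date-time stamp plus the start and end times (`20170110-095504_320246_323590.wav`). If that pattern holds, clip filenames can be reconstructed from the fields; the helper below is hypothetical and only encodes that observed pattern:

```python
def clip_filename(meeting_id: str, start_ms: int, end_ms: int) -> str:
    """Rebuild the relative audio path from a record's fields.

    meeting_id is assumed to be the date-time stamp seen in the
    example record, e.g. "20170110-095504"; the pattern is inferred
    from a single example and may not hold for the whole corpus.
    """
    return f"audio/{meeting_id}_{start_ms}_{end_ms}.wav"

print(clip_filename("20170110-095504", 320246, 323590))
# audio/20170110-095504_320246_323590.wav
```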
### Dataset Creation
We provide a **train** and a **validation** split. The validation split is a single file of about 1GB, while the train split is sharded into 1GB chunks.
All files are gzipped. A **test** split also exists but is not publicly distributed. Please contact [Per Erik Solberg](mailto:per.erik.solberg@nb.no) for access to the test set.
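Since the shards are gzipped JSON-lines files, they can also be inspected directly, without the `datasets` library. A minimal sketch (the tiny shard written here is synthetic, purely to demonstrate the line format; real shard filenames differ):

```python
import gzip
import json
import os
import tempfile

def read_jsonl_gz(path):
    """Yield one record per line from a gzipped JSON-lines shard."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Demonstrate on a tiny synthetic shard with the same line format
tmp = tempfile.NamedTemporaryFile(suffix=".json.gz", delete=False)
tmp.close()
with gzip.open(tmp.name, "wt", encoding="utf-8") as f:
    f.write(json.dumps({"sentence_id": 1, "text": "Stortingets møte er lovlig satt"}) + "\n")

records = list(read_jsonl_gz(tmp.name))
os.unlink(tmp.name)
print(records[0]["text"])  # Stortingets møte er lovlig satt
```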
#### Initial Data Collection
The procedure for the dataset creation is described in detail in [our paper](https://arxiv.org/abs/2201.10881).
## Statistics
| Feature | Value |
|:---------|-----------:|
| Duration, pauses included | 140.3 hours |
| Duration, pauses not included | 125.7 hours |
| Word count | 1.2 million |
| Sentence count | 64,531 |
| Language distribution | Nynorsk: 12.8% |
| | Bokmål: 87.2% |
| Gender distribution | Female: 38.3% |
| | Male: 61.7% |
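If the language distribution above also holds for the audio duration (it may instead be computed over sentences or words, so treat this as a rough estimate), the per-language hours can be derived from the totals:

```python
# Totals from the statistics table
hours_no_pauses = 125.7

# Language distribution, assumed to apply to duration as well
nynorsk_share = 0.128
bokmaal_share = 0.872

nynorsk_hours = hours_no_pauses * nynorsk_share
bokmaal_hours = hours_no_pauses * bokmaal_share
print(round(nynorsk_hours, 1), round(bokmaal_hours, 1))  # 16.1 109.6
```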
## Considerations for Using the Data
This corpus contains speech data. All recordings are of members of parliament speaking in a public setting, and can be distributed without any restrictions.
### Dataset Creators and Curators
The original dataset was created and curated by Språkbanken at the National Library of Norway. [Javier de la Rosa](mailto:javier.rosa@nb.no), [Freddy Wetjen](mailto:Freddy.wetjen@nb.no), [Per Egil Kummervold](mailto:per.kummervold@nb.no), and [Andre Kaasen](mailto:andre.kasen@nb.no) all contributed to making this into a HuggingFace Dataset. Thanks to the HuggingFace team for assistance.
## License
The sound and the transcriptions are released under the [CC0 license](https://creativecommons.org/publicdomain/zero/1.0/). The curation of the HuggingFace Dataset is released under the [CC BY-SA 3.0 license](https://creativecommons.org/licenses/by-sa/3.0/).
### Citation Information
The following article gives detailed information about the corpus. Please cite the article as well as this page if you use this dataset:
```
@misc{solberg2022norwegian,
  title={The Norwegian Parliamentary Speech Corpus},
  author={Per Erik Solberg and Pablo Ortiz},
  year={2022},
  eprint={2201.10881},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```