versae committed on
Commit d0eabf1
1 Parent(s): 5b01d58

Adding data and docs

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. NPSC.py +4 -4
  2. README.md +180 -0
  3. data/eval/20170209.json +0 -0
  4. data/eval/20170209_48K_mp3.tar.gz +3 -0
  5. data/eval/20180109.json +0 -0
  6. data/eval/20180109_48K_mp3.tar.gz +3 -0
  7. data/eval/20180201.json +0 -0
  8. data/eval/20180201_48K_mp3.tar.gz +3 -0
  9. data/eval/20180307.json +0 -0
  10. data/eval/20180307_48K_mp3.tar.gz +3 -0
  11. data/eval/20180611.json +0 -0
  12. data/eval/20180611_48K_mp3.tar.gz +3 -0
  13. data/test/20170207.json +0 -0
  14. data/test/20170207_48K_mp3.tar.gz +3 -0
  15. data/test/20171122.json +0 -0
  16. data/test/20171122_48K_mp3.tar.gz +3 -0
  17. data/test/20171219.json +0 -0
  18. data/test/20171219_48K_mp3.tar.gz +3 -0
  19. data/test/20180530.json +0 -0
  20. data/test/20180530_48K_mp3.tar.gz +3 -0
  21. data/train/20170110.json +0 -0
  22. data/train/20170110_48K_mp3.tar.gz +3 -0
  23. data/train/20170208.json +0 -0
  24. data/train/20170208_48K_mp3.tar.gz +3 -0
  25. data/train/20170215.json +0 -0
  26. data/train/20170215_48K_mp3.tar.gz +3 -0
  27. data/train/20170216.json +0 -0
  28. data/train/20170216_48K_mp3.tar.gz +3 -0
  29. data/train/20170222.json +0 -0
  30. data/train/20170222_48K_mp3.tar.gz +3 -0
  31. data/train/20170314.json +0 -0
  32. data/train/20170314_48K_mp3.tar.gz +3 -0
  33. data/train/20170322.json +0 -0
  34. data/train/20170322_48K_mp3.tar.gz +3 -0
  35. data/train/20170323.json +0 -0
  36. data/train/20170323_48K_mp3.tar.gz +3 -0
  37. data/train/20170403.json +0 -0
  38. data/train/20170403_48K_mp3.tar.gz +3 -0
  39. data/train/20170405.json +0 -0
  40. data/train/20170405_48K_mp3.tar.gz +3 -0
  41. data/train/20170419.json +0 -0
  42. data/train/20170419_48K_mp3.tar.gz +3 -0
  43. data/train/20170426.json +0 -0
  44. data/train/20170426_48K_mp3.tar.gz +3 -0
  45. data/train/20170503.json +0 -0
  46. data/train/20170503_48K_mp3.tar.gz +3 -0
  47. data/train/20170510.json +0 -0
  48. data/train/20170510_48K_mp3.tar.gz +3 -0
  49. data/train/20170516.json +0 -0
  50. data/train/20170516_48K_mp3.tar.gz +3 -0
NPSC.py CHANGED
@@ -44,10 +44,10 @@ The corpus is in total sound recordings from 40 entire days of meetings. This am
 
 _HOMEPAGE = "https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-58/"
 
-# Example: https://huggingface.co/datasets/NB/NPSC/resolve/main/data/train/20170110_48K_mp3.tar.gz
-_DATA_URL = "https://huggingface.co/datasets/NB/NPSC/resolve/main/data/{split}/{shard}_{config}.tar.gz"
-# Example: https://huggingface.co/datasets/NB/NPSC/resolve/main/data/test/20170207.json
-_METADATA_URL = "https://huggingface.co/datasets/NB/NPSC/resolve/main/data/{split}/{shard}.json"
+# Example: https://huggingface.co/datasets/NbAiLab/NPSC/resolve/main/data/train/20170110_48K_mp3.tar.gz
+_DATA_URL = "https://huggingface.co/datasets/NbAiLab/NPSC/resolve/main/data/{split}/{shard}_{config}.tar.gz"
+# Example: https://huggingface.co/datasets/NbAiLab/NPSC/resolve/main/data/test/20170207.json
+_METADATA_URL = "https://huggingface.co/datasets/NbAiLab/NPSC/resolve/main/data/{split}/{shard}.json"
 
 _SHARDS = {
     "validation": ["20170209", "20180109", "20180201", "20180307", "20180611"],
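The change above only moves the dataset from the `NB` to the `NbAiLab` namespace; the URL templates themselves are unchanged. A minimal sketch of how they expand, using the template strings and example values from the diff:

```python
# URL templates as defined in NPSC.py after this commit.
_DATA_URL = "https://huggingface.co/datasets/NbAiLab/NPSC/resolve/main/data/{split}/{shard}_{config}.tar.gz"
_METADATA_URL = "https://huggingface.co/datasets/NbAiLab/NPSC/resolve/main/data/{split}/{shard}.json"

# Expanding with a train shard and the 48K_mp3 config reproduces the
# example URLs given in the comments above.
data_url = _DATA_URL.format(split="train", shard="20170110", config="48K_mp3")
meta_url = _METADATA_URL.format(split="test", shard="20170207")
print(data_url)
print(meta_url)
```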
README.md ADDED
@@ -0,0 +1,180 @@
+ ---
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - found
+ languages:
+ - nb
+ - no
+ - nn
+ licenses:
+ - CC-ZERO
+ multilinguality:
+ - monolingual
+ pretty_name: NPSC
+ size_categories:
+ - 2G<n<1B
+ source_datasets:
+ - original
+ task_categories:
+ - sequence-modeling
+ task_ids:
+ - speech-modeling
+ ---
+ # Dataset Card for NbAiLab/NPSC
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Data Fields](#data-fields)
+ - [Dataset Creation](#dataset-creation)
+ - [Statistics](#statistics)
+ - [Document Types](#document-types)
+ - [Languages](#languages)
+ - [Publish Period](#publish-period)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+
+ ## Dataset Description
+ - **Homepage:** https://www.nb.no/sprakbanken/
+ - **Repository:** https://www.nb.no/sprakbanken/ressurskatalog/oai-nb-no-sbr-58/
+ - **Paper:** https://www.nb.no/sprakbanken/
+ - **Point of Contact:** [Per Erik Solberg](mailto:per.solberg@nb.no)
+
+ The Norwegian Parliament Speech Corpus (NPSC) is a corpus for training Norwegian ASR (Automatic Speech Recognition) models.
+
+ ## How to Use
+ ```python
+ from datasets import load_dataset
+ data = load_dataset("NbAiLab/NPSC", streaming=True)
+ ```
+ ## Download Data
+ If you do not want to use the Hugging Face Datasets library for training, or if you want to do additional pre-processing, it is also possible to download the files locally.
+ ```bash
+ # Clone the dataset repository
+ git clone https://huggingface.co/datasets/NbAiLab/NPSC
+
+ # Concatenate the metadata of all training shards into one file
+ cat NPSC/data/train/*.json > train.json
+ ```
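After cloning, each shard's metadata can be read line by line with the standard `json` module. A minimal sketch using an in-memory stand-in for a shard file such as `NPSC/data/train/20170110.json` (the two records below are hypothetical illustrations; field names follow the example record in this card):

```python
import io
import json

# Stand-in for an open metadata shard (one JSON object per line).
# The records here are made-up examples, not actual corpus content.
shard = io.StringIO(
    '{"sentence_id": 1, "text": "Stortingets møte er lovlig satt"}\n'
    '{"sentence_id": 2, "text": "Møtet er hevet"}\n'
)
records = [json.loads(line) for line in shard if line.strip()]
print(len(records), records[0]["text"])
```

In practice you would replace the `io.StringIO` object with `open("NPSC/data/train/20170110.json", encoding="utf-8")`.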
+
+ <details>
+ <summary>List of all the files.</summary>
+
+ * [eval](https://huggingface.co/datasets/NbAiLab/NPSC/resolve/main/data/eval.json.gz)
+ * [test](https://huggingface.co/datasets/NbAiLab/NPSC/resolve/main/data/test.json.gz)
+ * [train](https://huggingface.co/datasets/NbAiLab/NPSC/resolve/main/data/train.json.gz)
+
+ </details>
+
+ ### Dataset Summary
+ The NPSC dataset contains JSON lines with speech training data. Here is an example JSON line:
+ ```json
+ {
+   "sentence_id": 49853,
+   "sentence_order": 0,
+   "speaker_id": 32,
+   "speaker_name": "Olemic Thommessen",
+   "sentence_text": "Stortingets møte er lovlig satt",
+   "sentence_language_code": "nb-NO",
+   "text": "Stortingets møte er lovlig satt",
+   "start_time": 320246,
+   "end_time": 323590,
+   "normsentence_text": "Stortingets møte er lovlig satt",
+   "transsentence_text": "Stortingets møte er lovleg sett",
+   "translated": 1,
+   "audio": {
+     "path": "audio/20170110-095504_320246_323590.wav",
+     "array": [.......]
+   }
+ }
+ ```
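Each line can be parsed with the standard `json` module. A minimal sketch using the example record above (the `audio` object is elided here since its `array` holds raw samples; treating `start_time`/`end_time` as milliseconds is an assumption based on the values shown):

```python
import json

# One JSON line from the dataset (subset of the example record above).
line = ('{"sentence_id": 49853, "speaker_name": "Olemic Thommessen", '
        '"sentence_text": "Stortingets møte er lovlig satt", '
        '"start_time": 320246, "end_time": 323590}')
record = json.loads(line)

# Assumption: times appear to be in milliseconds.
duration_ms = record["end_time"] - record["start_time"]
print(record["speaker_name"], duration_ms)
```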
+ ## Data Fields
+ | Field | Description |
+ |:-----------|:------------|
+ | **sentence_id** | Integer id of the sentence |
+ | **sentence_order** | Integer, order of the sentence within the recording |
+ | **speaker_id** | Integer id of the speaker |
+ | **speaker_name** | String, name of the speaker |
+ | **sentence_text** | String, sentence text |
+ | **sentence_language_code** | String, language code of the sentence (e.g. "nb-NO") |
+ | **text** | String, sentence text |
+ | **start_time** | Integer, start time of the segment |
+ | **end_time** | Integer, end time of the segment |
+ | **normsentence_text** | String, normalised sentence text |
+ | **transsentence_text** | String, translated sentence text |
+ | **translated** | Integer, flag indicating whether the sentence is translated |
+ | **audio** | Audio record with 'path' (mp3), 'array', and 'sampling_rate' (48000) |
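The audio path in the example record appears to encode the recording id and the segment's start and end times. A hypothetical sketch of parsing it back, assuming paths follow `audio/<recording>_<start>_<end>.wav` (an assumption based on the single example shown in this card):

```python
# Assumption: path pattern "audio/<recording>_<start>_<end>.wav",
# inferred from the one example path in this card.
path = "audio/20170110-095504_320246_323590.wav"
stem = path.rsplit("/", 1)[-1].rsplit(".", 1)[0]
recording, start_ms, end_ms = stem.rsplit("_", 2)
print(recording, int(start_ms), int(end_ms))
```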
+
+ ### Dataset Creation
+ The dataset is split into **train**, **eval**, and **test** sets. Each split is sharded by meeting date, with a JSON metadata file and a tar.gz archive of 48K mp3 audio per shard.
+
+ Build date: 22.01.2022
+
+ #### Initial Data Collection and Curation
+ The procedure for the dataset creation is described in detail in our paper.
+
+ ## Statistics
+ | Feature | Value |
+ |:---------|-----------:|
+ | Duration, pauses included | 140,3 hours |
+ | Duration, pauses not included | 125,7 hours |
+ | Word count | 1,2 million |
+ | Sentence count | 64 531 |
+ | Language distribution | Nynorsk: 12,8% |
+ | | Bokmål: 87,2% |
+ | Gender distribution | Female: 38,3% |
+ | | Male: 61,7% |
+
+ ## Considerations for Using the Data
+ This corpus contains speech data and may be used outside the National Library of Norway for speech recognition technology purposes.
+
+ ### Discussion of Biases
+ Please refer to our paper.
+
+ ### Dataset Curators
+ [Freddy Wetjen](mailto:Freddy.wetjen@nb.no) and [Andre Kaasen](mailto:andre.kasen@nb.no)
+
+ ### Licensing Information
+ Licensed for use outside the National Library of Norway.
+
+ ## License
+ [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/)
+
+ ### Citation Information
+ We are preparing an article with detailed information about this corpus. Until it is published, please cite our paper discussing the first version of this corpus:
+ ```
+ @inproceedings{kummervold-etal-2021-operationalizing,
+     title = "Operationalizing a National Digital Library: The Case for a {N}orwegian Transformer Model",
+     author = "Kummervold, Per E and
+       De la Rosa, Javier and
+       Wetjen, Freddy and
+       Brygfjeld, Svein Arne",
+     booktitle = "Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)",
+     year = "2021",
+     address = "Reykjavik, Iceland (Online)",
+     publisher = {Link{\"o}ping University Electronic Press, Sweden},
+     url = "https://aclanthology.org/2021.nodalida-main.3",
+     pages = "20--29",
+     abstract = "In this work, we show the process of building a large-scale training set from digital and digitized collections at a national library. The resulting Bidirectional Encoder Representations from Transformers (BERT)-based language model for Norwegian outperforms multilingual BERT (mBERT) models in several token and sequence classification tasks for both Norwegian Bokm{\aa}l and Norwegian Nynorsk. Our model also improves the mBERT performance for other languages present in the corpus such as English, Swedish, and Danish. For languages not included in the corpus, the weights degrade moderately while keeping strong multilingual properties. Therefore, we show that building high-quality models within a memory institution using somewhat noisy optical character recognition (OCR) content is feasible, and we hope to pave the way for other memory institutions to follow.",
+ }
+ ```
data/eval/20170209.json ADDED
The diff for this file is too large to render. See raw diff
data/eval/20170209_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ed1e606946794021528407e05b11a4d7af2ebbb597f7fea13ece4dfe3083b4d
+ size 73709123
data/eval/20180109.json ADDED
The diff for this file is too large to render. See raw diff
data/eval/20180109_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a7467f1e2d200c20c587624b152a6ed580f084f650927dbe9e166bb940935de6
+ size 82225926
data/eval/20180201.json ADDED
The diff for this file is too large to render. See raw diff
data/eval/20180201_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0b6e48833dbd46a30a38ab751da341739abfbcbe1a4f21db4678163efc4766be
+ size 67332012
data/eval/20180307.json ADDED
The diff for this file is too large to render. See raw diff
data/eval/20180307_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:07270d2b5e7f014acf776f339e7562e85cc5873003d9c13d6c6d488632438ce8
+ size 175623569
data/eval/20180611.json ADDED
The diff for this file is too large to render. See raw diff
data/eval/20180611_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:227cb6309e24c5752f2bcafbf7e508ecf527a66f6480dcad8f5d3713e4c93c19
+ size 318923983
data/test/20170207.json ADDED
The diff for this file is too large to render. See raw diff
data/test/20170207_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6074a745c657768e471adc27245bbdaf81d93c574ae31ffb0c1c6e1438e33e4e
+ size 59167326
data/test/20171122.json ADDED
The diff for this file is too large to render. See raw diff
data/test/20171122_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87ec8d8b33f7d61e171e22d59c9dc6364eaef5b045bb9af2a36963906189914d
+ size 155438682
data/test/20171219.json ADDED
The diff for this file is too large to render. See raw diff
data/test/20171219_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa99ef609b4ff2b8415f98f45f540b2b3cd528b595a3363abad77beeee3537ea
+ size 326640954
data/test/20180530.json ADDED
The diff for this file is too large to render. See raw diff
data/test/20180530_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a5200656274e752cc13a46bca697f660eb2804d77355be5f9369b69cb2c17100
+ size 138540914
data/train/20170110.json ADDED
The diff for this file is too large to render. See raw diff
data/train/20170110_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:595116e928622961bcdea6f9715b7197dd9c85b5f8da5ffc7823904aad93f89a
+ size 226819893
data/train/20170208.json ADDED
The diff for this file is too large to render. See raw diff
data/train/20170208_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0e4532d067561859194877fff5c354aef862ba58398fcfec0b05b3bc78350ec0
+ size 122588726
data/train/20170215.json ADDED
The diff for this file is too large to render. See raw diff
data/train/20170215_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dd62caf39f83e521800b033fdf87df45a7337dde874dc0ad97415e1cd8b7f1a4
+ size 183726650
data/train/20170216.json ADDED
The diff for this file is too large to render. See raw diff
data/train/20170216_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad5676c77a60481fc0be8804b9b0e6d3c329265e652a0ff74581caabfba1642f
+ size 76742902
data/train/20170222.json ADDED
The diff for this file is too large to render. See raw diff
data/train/20170222_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c839d30cc8fd7e27a3e2750777ddf1963df701afa2eaa4c4de68c3bfb8188371
+ size 253967991
data/train/20170314.json ADDED
The diff for this file is too large to render. See raw diff
data/train/20170314_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b73e90000080fadcab27a387702a86fd09f9db3de2c218dd60d09c35a237bdc8
+ size 155183579
data/train/20170322.json ADDED
The diff for this file is too large to render. See raw diff
data/train/20170322_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d57c2eee940534585db03006882fbf71001f8f58a56a5eb27ac84c844229dac7
+ size 191764343
data/train/20170323.json ADDED
The diff for this file is too large to render. See raw diff
data/train/20170323_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b982f2bca947358cf1fd1c3d21089025a314172e2b353e3b431f632d72319e75
+ size 158366544
data/train/20170403.json ADDED
The diff for this file is too large to render. See raw diff
data/train/20170403_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:108b75255137a873244038002d7fb03168e0c7ee65da239c4c2cad6aa4a33a01
+ size 168593674
data/train/20170405.json ADDED
The diff for this file is too large to render. See raw diff
data/train/20170405_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:951c3ddc070c70ccdeec8ff043dd5d9f2490769b1d53d68bf2c66b32380f5e3c
+ size 99553097
data/train/20170419.json ADDED
The diff for this file is too large to render. See raw diff
data/train/20170419_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1647416eeafb97b7c0a765472209c8b383fbb4b997934c4f11de452aceaa76e6
+ size 77494287
data/train/20170426.json ADDED
The diff for this file is too large to render. See raw diff
data/train/20170426_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f828efdd711ecb67668d8ff0e02a1b4ccd6a26edb78a9668b91d33d07c275fa6
+ size 157913808
data/train/20170503.json ADDED
The diff for this file is too large to render. See raw diff
data/train/20170503_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:040e5a66522164ae7af9a5c8efbb6254723b69f7228390dab221e434433b0b4b
+ size 161553714
data/train/20170510.json ADDED
The diff for this file is too large to render. See raw diff
data/train/20170510_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:05b92d6c7a2787cc1add35a5ff0122c3844ea888f4e1a390c98d9f16fe766287
+ size 216097718
data/train/20170516.json ADDED
The diff for this file is too large to render. See raw diff
data/train/20170516_48K_mp3.tar.gz ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d6c58e17d45a2e68c3330edacf497fd811c2c0d72d4d9abe7eafd2a88a7f2296
+ size 65259272