Rasmus Arpe Fogh Egebæk committed on
Commit 565f35e
1 Parent(s): 86ee95e

Initial commit of nota dataset

Files changed (2):
  1. README.md +108 -1
  2. nota_dataset.py +128 -0
README.md CHANGED
@@ -1,3 +1,110 @@
  ---
- license: cc0-1.0
+ pretty_name: Nota
+
+ license:
+ - cc0-1.0
+ language:
+ - da
+ multilinguality:
+ - monolingual
+ task_categories:
+ - automatic-speech-recognition
+
  ---
+ # Dataset Card for Nota Lyd- og tekstdata
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Disclaimer](#disclaimer)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+ ## Dataset Description
+ - **Homepage:** https://sprogteknologi.dk/dataset/notalyd-ogtekstdata
+ - **Data Storage URL:** https://sprogtek-ressources.digst.govcloud.dk/nota/
+ - **Point of Contact:** info@sprogteknologi.dk
+ ### Dataset Summary
+ This dataset was created by the public institution Nota (https://nota.dk/), which is part of the Danish Ministry of Culture. Nota maintains a library of audiobooks and audio magazines for people with reading or sight disabilities, and also produces a number of audiobooks and audio magazines itself.
+
+ The dataset consists of .wav and .txt files from Nota's audio magazines "Inspiration" and "Radio/TV".
+
+ The dataset has been published as part of the initiative sprogteknologi.dk, run by the Danish Agency for Digital Government (www.digst.dk).
+
+ In total, 336 GB of data are available, containing voice recordings and accompanying transcripts.
+
+ Each publication has been segmented into .wav files of 2 to 50 seconds, each with an accompanying transcription.
+
+ ### Supported Tasks and Leaderboards
+ [Needs More Information]
+ ### Languages
+ Danish
+ ## Dataset Structure
+ ### Data Instances
+ A typical data point comprises the path to the audio file (`path`) and its transcription (`sentence`).
+ ```python
+ {'path': '<path_to_clip>.wav', 'sentence': 'Dette er et eksempel', 'audio': {'path': '<path_to_clip>.wav', 'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32), 'sampling_rate': 44100}}
+ ```
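To make the layout concrete, here is a minimal, self-contained sketch of a record shaped like the example above. The values and file name are illustrative, not real dataset content; in practice the `array` field is a NumPy array produced by the `datasets` audio decoder.

```python
# Illustrative record shaped like a decoded Nota example (all values are made up).
example = {
    "path": "clip0001.wav",
    "sentence": "Dette er et eksempel",
    "audio": {
        "path": "clip0001.wav",
        "array": [-0.00048828, -0.00018311, -0.00137329],  # decoded float samples
        "sampling_rate": 44100,
    },
}

# Clip duration in seconds follows from the sample count and the sampling rate.
duration_s = len(example["audio"]["array"]) / example["audio"]["sampling_rate"]
```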
+ ### Data Fields
+ `path`: The path to the audio file.
+
+ `audio`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column, `dataset[0]["audio"]`, the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
+
+ `sentence`: The sentence that was read by the speaker.
+ ### Data Splits
+ For now, the material has only a train split. As the dataset is at a very early stage, additional splits may be introduced later.
+
+ ## Dataset Creation
+
+ ### Disclaimer
+ There may be minor discrepancies between the .wav and .txt files, so the alignment of timestamps, text, and sound files may occasionally be off.
+
+ There are no strict rules as to how readers read aloud non-letter characters (e.g. numbers, €, $, !, ?). These symbols may be read differently throughout the dataset.
+
+ ### Curation Rationale
+ [Needs More Information]
+ ### Source Data
+ #### Initial Data Collection and Normalization
+ [Needs More Information]
+ #### Who are the source language producers?
+ [Needs More Information]
+ ### Annotations
+ #### Annotation process
+ [Needs More Information]
+ #### Who are the annotators?
+ [Needs More Information]
+ ### Personal and Sensitive Information
+ The dataset is public and free to use. Recorded individuals have, by written contract, accepted and agreed to the publication of their recordings.
+ Other names appearing in the dataset belong to already publicly known individuals (e.g. TV or radio hosts). Their names are not to be treated as sensitive or personal data in the context of this dataset.
+ ## Considerations for Using the Data
+ ### Social Impact of Dataset
+ [More Information Needed]
+ ### Discussion of Biases
+ [More Information Needed]
+ ### Other Known Limitations
+ [More Information Needed]
+ ## Additional Information
+ ### Dataset Curators
+ https://sprogteknologi.dk/
+
+ Contact info@sprogteknologi.dk if you have questions regarding use of the data.
+ They gladly receive input and ideas on how to distribute the data.
+ ### Licensing Information
+ [CC0-1.0](https://creativecommons.org/publicdomain/zero/1.0/)
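The loading script that accompanies this card discovers the downloadable zip archives by scraping the storage server's directory listings with a regular expression. A minimal sketch of that extraction step, run on a synthetic listing (the HTML below is made up for illustration):

```python
import re

# Same pattern the loading script uses to pull href targets out of a listing page.
download_files_regex = re.compile(r"<a href=\"(.+?)\">")

# Synthetic directory listing, shaped like the server's index pages.
listing = (
    '<a href="/nota/">Parent</a>'
    '<a href="Readme.txt">Readme.txt</a>'
    '<a href="INSL20160001.zip">INSL20160001.zip</a>'
)

all_files = download_files_regex.findall(listing)
# Drop the parent-directory link and the Readme, keeping only data archives.
archives = [f for f in all_files if f not in ("Readme.txt", "/nota/")]
```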
nota_dataset.py ADDED
@@ -0,0 +1,128 @@
+ import os
+ import re
+
+ import datasets
+ import requests
+ from datasets.tasks import AutomaticSpeechRecognition
+
+ _DATA_URLS = [
+     "https://sprogtek-ressources.digst.govcloud.dk/nota/Inspiration%202016%20-%202021/",
+     "https://sprogtek-ressources.digst.govcloud.dk/nota/Inspiration%202008%20-%202016/",
+     "https://sprogtek-ressources.digst.govcloud.dk/nota/Radio-TV%20program%202007%20-%202012/",
+     "https://sprogtek-ressources.digst.govcloud.dk/nota/Radio-TV%20Program%202013%20-%202015/",
+     "https://sprogtek-ressources.digst.govcloud.dk/nota/Radio-TV%20Program%202016%20-%202018/",
+     "https://sprogtek-ressources.digst.govcloud.dk/nota/Radio-TV%20Program%202019%20-%202022/",
+ ]
+
+ _DESCRIPTION = """\
+ Nota lyd- og tekstdata
+ Datasættet indeholder både tekst- og taledata fra udvalgte dele af Notas lydbogsbibliotek. Datasættet består af
+ over 500 timers oplæsninger og medfølgende transkriptioner på dansk. Al lyddata er i .wav-format, mens tekstdata
+ er i .txt-format.
+
+ I data indgår indlæsninger af Notas egne blade "Inspiration" og "Radio/TV", som er udgivet i perioden 2007 til 2022.
+ Nota krediteres for arbejdet med at strukturere data, således at tekst og lyd stemmer overens.
+
+ Nota er en institution under Kulturministeriet, der gør trykte tekster tilgængelige i digitale formater til personer
+ med synshandicap og læsevanskeligheder, fx via produktion af lydbøger og oplæsning af aviser, magasiner, mv.
+ """
+
+ _HOMEPAGE = "https://sprogteknologi.dk/dataset/notalyd-ogtekstdata"
+
+ _LICENSE = "https://creativecommons.org/publicdomain/zero/1.0/"
+
+ def extract_file_links():
+     """
+     Extracts the web locations of the zip files containing the data.
+     :return: List of download urls
+     """
+     download_paths = []
+
+     download_files_regex = re.compile(r"<a href=\"(.+?)\">")
+
+     for download_root in _DATA_URLS:
+         r = requests.get(download_root)
+         all_files = download_files_regex.findall(str(r.content))
+
+         # Ignore the parent-directory link and Readme files
+         all_files_filtered = filter(lambda x: x != "Readme.txt" and x != "/nota/", all_files)
+
+         for download_file in all_files_filtered:
+             # Skip a known empty archive
+             if "INSL20210003.zip" in download_file:
+                 continue
+
+             # Because of wget behaviour on the server, the URL-encoded %20 must be replaced with a space
+             full_download_path = download_root + download_file
+             full_download_path = full_download_path.replace("%20", " ")
+             download_paths.append(full_download_path)
+
+     return download_paths
+
+
+ class NotaDanishSoundAndTextDataset(datasets.GeneratorBasedBuilder):
+     DEFAULT_CONFIG_NAME = "all"
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "audio": datasets.Audio(sampling_rate=44_100),
+                 "sentence": datasets.Value("string"),
+             }
+         )
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             task_templates=[AutomaticSpeechRecognition(audio_column="audio", transcription_column="sentence")],
+         )
+
+     def _split_generators(self, dl_manager):
+         download_urls = extract_file_links()
+         dl_path = dl_manager.download_and_extract(download_urls)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={
+                     "dl_path": dl_path,
+                 },
+             )
+         ]
+
+     @staticmethod
+     def _extract_transcript(file_path):
+         with open(file_path, "r", encoding="utf-8") as f:
+             data = f.read()
+         return data
+
+     def _generate_examples(self, dl_path):
+         key = 0
+         transcripts = {}
+
+         # dl_path is the list of extracted archive directories, one per downloaded zip
+         for parent_directory in dl_path:
+             for sub_directory in os.listdir(parent_directory):
+                 data_directory_path = os.path.join(parent_directory, sub_directory)
+                 data_files = os.listdir(data_directory_path)
+                 for data_file in data_files:
+                     file_type = data_file[-3:]
+                     file_id = data_file[:-4]
+                     if file_id not in transcripts:
+                         transcripts[file_id] = {}
+
+                     if file_type == "wav":
+                         transcripts[file_id]["audio_path"] = os.path.join(data_directory_path, data_file)
+                     elif file_type == "txt":
+                         transcripts[file_id]["sentence"] = self._extract_transcript(
+                             os.path.join(data_directory_path, data_file))
+
+             for sample_id, info in transcripts.items():
+                 # Skip ids where either the audio or the transcript half of the pair is missing
+                 if "audio_path" not in info or "sentence" not in info:
+                     continue
+                 audio = {"path": info["audio_path"]}
+                 yield key, {"audio": audio, "sentence": info["sentence"]}
+                 key += 1
+
+             transcripts = {}
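`_generate_examples` pairs each `.wav` clip with the `.txt` transcript that shares its filename stem. A standalone sketch of that pairing step on made-up file names (`pair_files` is a hypothetical helper, not part of the script, and uses `os.path.splitext` instead of the script's fixed-width slicing):

```python
import os

def pair_files(filenames):
    # Group files by stem so each .wav clip is matched with its .txt transcript.
    pairs = {}
    for name in filenames:
        stem, ext = os.path.splitext(name)
        if ext == ".wav":
            pairs.setdefault(stem, {})["audio_path"] = name
        elif ext == ".txt":
            pairs.setdefault(stem, {})["transcript_path"] = name
    # Keep only complete pairs, mirroring how unpaired files would be skipped.
    return {s: p for s, p in pairs.items() if len(p) == 2}

# clip002 has no transcript, so only clip001 survives the pairing.
pairs = pair_files(["clip001.wav", "clip001.txt", "clip002.wav"])
```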