Jaegeon committed
Commit d7f5033
1 parent: 962c8bd

First version of KsponSpeech

Files changed (2):
  1. README.md +162 -0
  2. ksponspeech.py +277 -0

README.md ADDED
---
annotations_creators:
- expert-generated
language:
- ko
language_creators:
- crowdsourced
license: []
multilinguality:
- monolingual
pretty_name: KsponSpeech
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
---

# Dataset Card for KsponSpeech

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [AIHub](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=123)
- **Repository:**
- **Paper:** [KsponSpeech](https://www.mdpi.com/2076-3417/10/19/6936)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

KsponSpeech is a large-scale spontaneous speech corpus of Korean containing 969 hours of general open-domain dialog utterances, spoken by about 2,000 native Korean speakers in a clean environment. All data were constructed by recording the dialogue of two people freely conversing on a variety of topics and manually transcribing the utterances. The transcription provides a dual transcription consisting of orthography and pronunciation, and disfluency tags for spontaneity of speech, such as filler words, repeated words, and word fragments. The accompanying paper also presents the baseline performance of an end-to-end speech recognition model trained with KsponSpeech, investigates the performance of standard end-to-end architectures and the number of sub-word units suitable for Korean, and discusses issues that should be considered in spontaneous speech recognition in Korean. KsponSpeech is publicly available on an open data hub site of the Korean government.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

Korean

## Dataset Structure

### Data Instances

```python
{
    'id': 'KsponSpeech_E00001',
    'audio': {
        'path': None,
        'array': array([0.0010376 , 0.00085449, 0.00097656, ..., 0.00250244, 0.0022583 , 0.00253296]),
        'sampling_rate': 16000,
    },
    'text': '어 일단은 억지로 과장해서 이렇게 하는 것보다 진실된 마음으로 이걸 어떻게 전달할 수 있을까 공감을 시킬 수 있을까 해서 좀'
}
```

### Data Fields

- `audio`: a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`) the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `text`: the transcription of the audio file.
- `id`: the unique id of the data sample.

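To make the field layout concrete, the sketch below computes a clip's duration from a row shaped like the instance above (a hypothetical, shortened example, not real corpus data):

```python
# Hypothetical row mirroring the instance shown above; the array is shortened.
row = {
    "id": "KsponSpeech_E00001",
    "audio": {
        "path": None,
        "array": [0.0010376, 0.00085449, 0.00097656, 0.00250244],
        "sampling_rate": 16000,
    },
    "text": "어 일단은 억지로 과장해서 ...",
}

# Duration in seconds = number of samples / sampling rate.
duration = len(row["audio"]["array"]) / row["audio"]["sampling_rate"]
print(f"{row['id']}: {duration} s")  # KsponSpeech_E00001: 0.00025 s
```
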
### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

@Article{app10196936,
  AUTHOR = {Bang, Jeong-Uk and Yun, Seung and Kim, Seung-Hi and Choi, Mu-Yeol and Lee, Min-Kyu and Kim, Yeo-Jeong and Kim, Dong-Hyun and Park, Jun and Lee, Young-Jik and Kim, Sang-Hun},
  TITLE = {KsponSpeech: Korean Spontaneous Speech Corpus for Automatic Speech Recognition},
  JOURNAL = {Applied Sciences},
  VOLUME = {10},
  YEAR = {2020},
  NUMBER = {19},
  ARTICLE-NUMBER = {6936},
  URL = {https://www.mdpi.com/2076-3417/10/19/6936},
  ISSN = {2076-3417},
  ABSTRACT = {This paper introduces a large-scale spontaneous speech corpus of Korean, named KsponSpeech. This corpus contains 969 h of general open-domain dialog utterances, spoken by about 2000 native Korean speakers in a clean environment. All data were constructed by recording the dialogue of two people freely conversing on a variety of topics and manually transcribing the utterances. The transcription provides a dual transcription consisting of orthography and pronunciation, and disfluency tags for spontaneity of speech, such as filler words, repeated words, and word fragments. This paper also presents the baseline performance of an end-to-end speech recognition model trained with KsponSpeech. In addition, we investigated the performance of standard end-to-end architectures and the number of sub-word units suitable for Korean. We investigated issues that should be considered in spontaneous speech recognition in Korean. KsponSpeech is publicly available on an open data hub site of the Korea government.},
  DOI = {10.3390/app10196936}
}
ksponspeech.py ADDED
# coding=utf-8
# Copyright 2021 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Lint as: python3
"""KsponSpeech: Korean Spontaneous Speech Corpus for Automatic Speech Recognition."""


import os
import re

import datasets
from datasets.tasks import AutomaticSpeechRecognition


_CITATION = """\
@Article{app10196936,
  AUTHOR = {Bang, Jeong-Uk and Yun, Seung and Kim, Seung-Hi and Choi, Mu-Yeol and Lee, Min-Kyu and Kim, Yeo-Jeong and Kim, Dong-Hyun and Park, Jun and Lee, Young-Jik and Kim, Sang-Hun},
  TITLE = {KsponSpeech: Korean Spontaneous Speech Corpus for Automatic Speech Recognition},
  JOURNAL = {Applied Sciences},
  VOLUME = {10},
  YEAR = {2020},
  NUMBER = {19},
  ARTICLE-NUMBER = {6936},
  URL = {https://www.mdpi.com/2076-3417/10/19/6936},
  ISSN = {2076-3417},
  DOI = {10.3390/app10196936}
}
"""

_DESCRIPTION = """\
This paper introduces a large-scale spontaneous speech corpus of Korean, named KsponSpeech. This corpus contains 969 h of general open-domain dialog utterances, spoken by about 2000 native Korean speakers in a clean environment. All data were constructed by recording the dialogue of two people freely conversing on a variety of topics and manually transcribing the utterances. The transcription provides a dual transcription consisting of orthography and pronunciation, and disfluency tags for spontaneity of speech, such as filler words, repeated words, and word fragments. This paper also presents the baseline performance of an end-to-end speech recognition model trained with KsponSpeech. In addition, we investigated the performance of standard end-to-end architectures and the number of sub-word units suitable for Korean. We investigated issues that should be considered in spontaneous speech recognition in Korean. KsponSpeech is publicly available on an open data hub site of the Korea government.

More information about the KsponSpeech dataset is available at:
https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=123
"""

_HOMEPAGE = "https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=123"

_ROOT_DIRNAME = "ksponspeech"
_SCRIPT_DIRNAME = "KsponSpeech_scripts"

_SCRIPT_SPLITS = {
    "train": "train.trn",
    "dev": "dev.trn",
    "eval_clean": "eval_clean.trn",
    "eval_other": "eval_other.trn",
}

class KsponSpeechConfig(datasets.BuilderConfig):
    """BuilderConfig for KsponSpeech."""

    def __init__(self, **kwargs):
        """
        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        # Version history:
        # 0.1.0: First release.
        super(KsponSpeechConfig, self).__init__(version=datasets.Version("0.1.0", ""), **kwargs)


class KsponSpeech(datasets.GeneratorBasedBuilder):
    """KsponSpeech dataset."""

    @property
    def manual_download_instructions(self):
        return (
            "To use KsponSpeech you have to download it manually. "
            "Please create an account and download the dataset from "
            "https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&dataSetSn=123 \n"
            "Then load the dataset with: "
            "`datasets.load_dataset('ksponspeech', data_dir='path/to/folder/folder_name')`"
        )

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "audio": datasets.Audio(sampling_rate=16_000),
                    "text": datasets.Value("string"),
                }
            ),
            # supervised_keys must name existing feature columns; there is no "file" column.
            supervised_keys=("audio", "text"),
            homepage=_HOMEPAGE,
            citation=_CITATION,
            task_templates=[AutomaticSpeechRecognition(audio_column="audio", transcription_column="text")],
        )

    def _split_generators(self, dl_manager):
        # Step 1: extract all zip files.
        # Step 2: read the transcript scripts.
        # Step 3: generate samples.
        data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
        data_dir = os.path.join(data_dir, _ROOT_DIRNAME)
        if not os.path.exists(data_dir):
            raise FileNotFoundError(
                f"{data_dir} does not exist. Make sure you insert a manual dir via "
                "`datasets.load_dataset('ksponspeech', data_dir=...)` "
                "that includes the files. Manual download instructions: "
                f"{self.manual_download_instructions}"
            )
        archive_paths = {}
        for fname in os.listdir(data_dir):
            if '.lock' not in fname:
                fname_no_ext = os.path.splitext(fname)[0]
                archive_paths[fname_no_ext] = os.path.join(data_dir, fname)
        local_extracted_archives = dl_manager.extract(archive_paths)
        script_archive_path = local_extracted_archives[_SCRIPT_DIRNAME]
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "script": os.path.join(script_archive_path, _SCRIPT_SPLITS['train']),
                    "local_extracted_archives": local_extracted_archives,
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "script": os.path.join(script_archive_path, _SCRIPT_SPLITS['dev']),
                    "local_extracted_archives": local_extracted_archives,
                },
            ),
            datasets.SplitGenerator(
                name="eval.clean",
                gen_kwargs={
                    "script": os.path.join(script_archive_path, _SCRIPT_SPLITS['eval_clean']),
                    "local_extracted_archives": local_extracted_archives,
                },
            ),
            datasets.SplitGenerator(
                name="eval.other",
                gen_kwargs={
                    "script": os.path.join(script_archive_path, _SCRIPT_SPLITS['eval_other']),
                    "local_extracted_archives": local_extracted_archives,
                },
            ),
        ]

    def _generate_examples(self, script, local_extracted_archives):
        """Generate examples from a KsponSpeech .trn script file (train/dev/eval)."""
        # Iterate over the script to extract audio paths and transcriptions.
        with open(script) as f:
            for key, line in enumerate(f):
                audio_path, text = line.split(' :: ')
                audio_subdir = audio_path.split('/')[0]
                if os.path.basename(audio_path)[12:18] in PERCENT_FILES.keys():
                    replace = PERCENT_FILES[os.path.basename(audio_path)[12:18]]
                else:
                    replace = None
                text = sentence_filter(text, replace=replace).strip()
                if 'KsponSpeech_eval/' in audio_path:
                    audio_path = audio_path.replace('KsponSpeech_eval/', '')
                audio_path = os.path.join(local_extracted_archives[audio_subdir], audio_path)
                if os.path.exists(audio_path):
                    with open(audio_path, 'rb') as audio_file:
                        audio_data = audio_file.read()
                    if len(audio_data) % 2 != 0:
                        # Remove an unknown extra trailing byte found in some
                        # KsponSpeech_eval files (16-bit PCM needs an even length).
                        audio_data = audio_data[:-1]
                    audio = {
                        "path": audio_path,
                        "bytes": audio_data,
                        "sampling_rate": 16_000,
                    }
                    yield key, {
                        "id": os.path.splitext(os.path.basename(audio_path))[0],
                        "audio": audio,
                        "text": text,
                    }


# ------------------------------------------------------------------------
# The following code is copied from https://github.com/sooftware/ksponspeech

PERCENT_FILES = {
    '087797': '퍼센트',
    '215401': '퍼센트',
    '284574': '퍼센트',
    '397184': '퍼센트',
    '501006': '프로',
    '502173': '프로',
    '542363': '프로',
    '581483': '퍼센트',
}

def bracket_filter(sentence, mode='phonetic'):
    new_sentence = str()

    if mode == 'phonetic':
        flag = False

        for ch in sentence:
            if ch == '(' and flag is False:
                flag = True
                continue
            if ch == '(' and flag is True:
                flag = False
                continue
            if ch != ')' and flag is False:
                new_sentence += ch

    elif mode == 'spelling':
        flag = True

        for ch in sentence:
            if ch == '(':
                continue
            if ch == ')':
                if flag is True:
                    flag = False
                    continue
                else:
                    flag = True
                    continue
            if ch != ')' and flag is True:
                new_sentence += ch

    else:
        raise ValueError("Unsupported mode : {0}".format(mode))

    return new_sentence


def special_filter(sentence, mode='phonetic', replace=None):
    SENTENCE_MARK = ['?', '!', '.']
    NOISE = ['o', 'n', 'u', 'b', 'l']
    EXCEPT = ['/', '+', '*', '-', '@', '$', '^', '&', '[', ']', '=', ':', ';', ',']

    new_sentence = str()
    for idx, ch in enumerate(sentence):
        if ch not in SENTENCE_MARK:
            if idx + 1 < len(sentence) and ch in NOISE and sentence[idx + 1] == '/':
                continue

        if ch == '#':
            new_sentence += '샾'

        elif ch == '%':
            if mode == 'phonetic':
                new_sentence += replace
            elif mode == 'spelling':
                new_sentence += '%'

        elif ch not in EXCEPT:
            new_sentence += ch

    pattern = re.compile(r'\s\s+')
    new_sentence = re.sub(pattern, ' ', new_sentence.strip())
    return new_sentence


def sentence_filter(raw_sentence, mode='phonetic', replace=None):
    return special_filter(bracket_filter(raw_sentence, mode), mode, replace)
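The transcripts use a dual-transcription notation in which an ambiguous span is written as `(orthographic form)/(phonetic form)`; `bracket_filter` above keeps one side depending on `mode`. As a rough, self-contained illustration of the notation (not the loader's code), a regex can pick the phonetic side; `pick_phonetic` is a hypothetical helper:

```python
import re

def pick_phonetic(sentence: str) -> str:
    """Keep the phonetic side of each (orthography)/(phonetic) pair.

    Hypothetical sketch of the dual-transcription notation; the loader's
    bracket_filter walks the string character by character instead.
    """
    out = re.sub(r'\(([^)]*)\)/\(([^)]*)\)', r'\2', sentence)
    # Collapse any doubled whitespace left behind.
    return re.sub(r'\s{2,}', ' ', out).strip()

print(pick_phonetic('몸무게가 (70kg)/(칠십 킬로) 정도 나가요.'))
# 몸무게가 칠십 킬로 정도 나가요.
```
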
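`_generate_examples` above drops a stray trailing byte from some eval files because 16-bit PCM audio must contain a whole number of 2-byte samples. A small sketch of that invariant, with a hypothetical helper name:

```python
def trim_to_sample_boundary(audio_bytes: bytes, sample_width: int = 2) -> bytes:
    """Drop trailing bytes that do not fill a whole sample (2 bytes for 16-bit PCM)."""
    remainder = len(audio_bytes) % sample_width
    return audio_bytes[:-remainder] if remainder else audio_bytes

print(len(trim_to_sample_boundary(b'\x00' * 7)))  # 6
```
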