
Dataset Card for KsponSpeech

Dataset Summary

This corpus contains 969 hours of general open-domain dialog utterances, spoken by about 2,000 native Korean speakers in a clean environment. All data were constructed by recording the dialogue of two people freely conversing on a variety of topics and manually transcribing the utterances. Each utterance has a dual transcription consisting of orthography and pronunciation, together with disfluency tags for spontaneous-speech phenomena such as filler words, repeated words, and word fragments. The accompanying paper also presents the baseline performance of an end-to-end speech recognition model trained on KsponSpeech, investigates standard end-to-end architectures and the number of sub-word units suitable for Korean, and discusses issues specific to spontaneous speech recognition in Korean. KsponSpeech is publicly available on an open data hub site of the Korean government.

Supported Tasks and Leaderboards

[More Information Needed]

Languages

Korean

Dataset Structure

Data Instances

{
  'id': 'KsponSpeech_E00001',
  'audio': {
    'path': None,
    'array': array([0.0010376 , 0.00085449, 0.00097656, ..., 0.00250244, 0.0022583 , 0.00253296]),
    'sampling_rate': 16000
  },
  'text': '어 일단은 억지로 과장해서 이렇게 하는 것보다 진실된 마음으로 이걸 어떻게 전달할 수 있을까 공감을 시킬 수 있을까 해서 좀'
}
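
A minimal sketch of how an instance like the one above can be retrieved with the datasets library. The repository id Murple/ksponspeech is taken from this card; the split name "train", the use of data_dir for the manually downloaded files, and the local path are assumptions to verify against the loading script.

from datasets import load_dataset

# The authors require the audio files to be downloaded manually, so a local
# data_dir is passed here (assumed argument; check the loading script).
ds = load_dataset("Murple/ksponspeech", data_dir="path/to/KsponSpeech")

sample = ds["train"][0]
print(sample["id"])                      # e.g. 'KsponSpeech_E00001'
print(sample["text"])                    # orthographic transcription
print(sample["audio"]["sampling_rate"])  # 16000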

Data Fields

  • audio: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (dataset[0]["audio"]), the audio file is automatically decoded and resampled to dataset.features["audio"].sampling_rate. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the "audio" column: dataset[0]["audio"] should always be preferred over dataset["audio"][0] (see the sketch after this list).
  • text: the transcription of the audio file.
  • id: unique id of the data sample.
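
A short sketch of the access pattern described above, plus resampling the audio column with the datasets library; ds is assumed to be the dataset loaded as in the earlier example.

from datasets import Audio

# Preferred: index the sample first, then the "audio" column, so only this
# one file is decoded and resampled.
waveform = ds["train"][0]["audio"]["array"]

# ds["train"]["audio"][0] would instead decode every audio file in the split
# before returning the first one.

# To resample the audio column, cast it to a new sampling rate; decoding and
# resampling then happen on access.
ds = ds.cast_column("audio", Audio(sampling_rate=8000))
print(ds["train"][0]["audio"]["sampling_rate"])  # 8000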

Data Splits

            Train    Valid   eval.clean   eval.other
#samples    620000   2545    3000         3000
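
A quick way to check the split sizes after loading, assuming ds was loaded as in the earlier sketch; the exact split names exposed by the loader are not restated here, so the splits are simply iterated.

# Print the number of samples in each split; counts should match the table above.
for split_name, split in ds.items():
    print(split_name, len(split))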

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

[More Information Needed]

Citation Information

@Article{app10196936,
AUTHOR = {Bang, Jeong-Uk and Yun, Seung and Kim, Seung-Hi and Choi, Mu-Yeol and Lee, Min-Kyu and Kim, Yeo-Jeong and Kim, Dong-Hyun and Park, Jun and Lee, Young-Jik and Kim, Sang-Hun},
TITLE = {KsponSpeech: Korean Spontaneous Speech Corpus for Automatic Speech Recognition},
JOURNAL = {Applied Sciences},
VOLUME = {10},
YEAR = {2020},
NUMBER = {19},
ARTICLE-NUMBER = {6936},
URL = {https://www.mdpi.com/2076-3417/10/19/6936},
ISSN = {2076-3417},
ABSTRACT = {This paper introduces a large-scale spontaneous speech corpus of Korean, named KsponSpeech. This corpus contains 969 h of general open-domain dialog utterances, spoken by about 2000 native Korean speakers in a clean environment. All data were constructed by recording the dialogue of two people freely conversing on a variety of topics and manually transcribing the utterances. The transcription provides a dual transcription consisting of orthography and pronunciation, and disfluency tags for spontaneity of speech, such as filler words, repeated words, and word fragments. This paper also presents the baseline performance of an end-to-end speech recognition model trained with KsponSpeech. In addition, we investigated the performance of standard end-to-end architectures and the number of sub-word units suitable for Korean. We investigated issues that should be considered in spontaneous speech recognition in Korean. KsponSpeech is publicly available on an open data hub site of the Korea government.},
DOI = {10.3390/app10196936}
}