---
license: cc-by-nc-4.0
language:
- yue
pretty_name: SBS Cantonese Speech Corpus
size_categories:
- 100K<n<1M
---
# SBS Cantonese Speech Corpus

This speech corpus contains 435 hours of SBS Cantonese podcasts from August 2022 to October 2023. There are 2,519 episodes, and each episode is split into segments that are at most 10 seconds long. In total, there are 189,216 segments in this corpus. Here is a breakdown of the categories of episodes present in this dataset:
Category | SBS Channels | Episodes |
---|---|---|
news | 中文新聞, 新聞簡報 | 622 |
business | 寰宇金融 | 148 |
vaccine | 疫苗快報 | 71 |
gardening | 園藝趣談 | 58 |
tech | 科技世界 | 56 |
health | 健康快樂人 | 53 |
culture | 文化360 | 49 |
english | 學英語 | 41 |
expert | 專家話你知 | 37 |
interview | 我不是名人 | 20 |
career | 澳洲招職 | 18 |
food | 美食速遞 | 18 |
uncategorized | n/a | 1328 |
- Uncategorized episodes are mostly news but also include episodes from the other categories listed above.
## Dataset Details

### Dataset Description
- Curated by: Kevin Li
- Language(s): Cantonese, English (only in podcasts categorized as "english")
- License: Creative Commons Attribution Non-Commercial 4.0
### Scraper
- Repository: https://github.com/AlienKevin/sbs_cantonese
## Uses

Each episode is split into segments using silero-vad. Since silero-vad is not trained on Cantonese data, the segmentation is not ideal and often breaks sentences in the middle. Hence, this dataset is not intended to be used for supervised ASR. Instead, it is intended to be used for self-supervised speech pretraining, like training WavLM, HuBERT, and Wav2Vec.
## Format

Each segment is stored as a mono-channel FLAC file with a sample rate of 16 kHz. You can find the segments under the `audio/` folder, where groups of segments are bundled into `.tar.gz` files for ease of distribution.
The filename of a segment indicates which episode it belongs to and its position within that episode. For example, here's a filename:

```
0061gy0w8_0000_5664_81376
```

where

- `0061gy0w8` is the episode id
- `0000` means that it is the first segment of that episode
- `5664` is the starting sample of this segment. Remember that all episodes are sampled at 16 kHz, so the total number of samples in an episode is (the duration in seconds × 16,000)
- `81376` is the ending (exclusive) sample of this segment
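A small hypothetical helper (not part of the dataset's tooling) that parses a segment filename into these four fields and derives the segment duration from the 16 kHz sample rate:

```python
SAMPLE_RATE = 16_000  # all episodes are sampled at 16 kHz

def parse_segment_name(name: str) -> dict:
    """Split a segment filename like '0061gy0w8_0000_5664_81376'
    into episode id, segment index, and sample range."""
    episode_id, index, start, end = name.rsplit("_", 3)
    start, end = int(start), int(end)
    return {
        "episode_id": episode_id,
        "index": int(index),       # 0-based position within the episode
        "start_sample": start,     # inclusive
        "end_sample": end,         # exclusive
        "duration_s": (end - start) / SAMPLE_RATE,
    }
```

For the example above, `parse_segment_name("0061gy0w8_0000_5664_81376")` yields a duration of (81376 − 5664) / 16000 ≈ 4.73 seconds, consistent with the at-most-10-second segment length.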
## Metadata

Metadata for each episode is stored in the `metadata.jsonl` file, where each line stores the metadata for one episode.
Here's the metadata for one of the episodes (split into multiple lines for clarity):
```json
{
  "title": "SBS 中文新聞 (7月5日)",
  "date": "05/07/2023",
  "view_more_link": "https://www.sbs.com.au/language/chinese/zh-hant/podcast-episode/chinese-news-5-7-2023/tl6s68rdk",
  "download_link": "https://sbs-podcast.streamguys1.com/sbs-cantonese/20230705105920-cantonese-0288b7c2-cb6d-4e0e-aec2-2680dd8738e0.mp3?awCollectionId=sbs-cantonese&awGenre=News&awEpisodeId=20230705105920-cantonese-0288b7c2-cb6d-4e0e-aec2-2680dd8738e0"
}
```
where

- `title` is the title of the episode
- `date` is the date when the episode was published
- `view_more_link` is a link to the associated article/description for this episode. Many news episodes have extremely detailed manuscripts written in Traditional Chinese, while others have briefer summaries or key points available.
- `download_link` is the link to download the audio for this episode. It is usually hosted on streamguys, but some earlier episodes are stored on SBS's own server at https://images.sbs.com.au.
The id of each episode appears at the end of its `view_more_link`. It appears to be a precomputed hash that is unique to each episode:

```python
id = view_more_link.split("/")[-1]
```