AlienKevin committed
Commit 64ef3ed
1 Parent(s): fc83ce0

Add Format and Metadata to Use section

Files changed (1): README.md +42 -3
README.md CHANGED
@@ -8,8 +8,8 @@ size_categories:
  ---
  # SBS Cantonese Speech Corpus

- This speech corpus contains 435 hours of [SBS Cantonese](https://www.sbs.com.au/language/chinese/zh-hant/podcast/sbs-cantonese) podcasts from Auguest 2022 to October 2023.
- There are 2,519 episodes and each episode is split into segments that are at most 10 seconds long. In total, there are 189,216 segments in this corpus.
+ This speech corpus contains **435 hours** of [SBS Cantonese](https://www.sbs.com.au/language/chinese/zh-hant/podcast/sbs-cantonese) podcasts from August 2022 to October 2023.
+ There are **2,519 episodes** and each episode is split into segments that are at most 10 seconds long. In total, there are **189,216 segments** in this corpus.
  Here is a breakdown on the categories of episodes present in this dataset:

  <style>
@@ -66,5 +66,44 @@ Since silero-vad is not trained on Cantonese data, the segmentation is not ideal
  Hence, this dataset is not intended to be used for supervised ASR. Instead, it is intended to be used for self-supervised
  speech pretraining, like training WavLM, HuBERT, and Wav2Vec.

+ ### Format
  Each segment is stored as a monochannel FLAC file with a sample rate of 16k Hz. You can find the segments under the `audio/` folder,
- where groups of segments are bundled into a .tar.gz file for ease of distribution.
+ where groups of segments are bundled into a .tar.gz file for ease of distribution.
+
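+ As a minimal sketch of how one bundle could be read (the archive name below is a placeholder for an actual file in `audio/`, and `soundfile` is an assumed extra dependency, not something this README requires):
+ ```python
+ import tarfile
+
+ import soundfile as sf  # third-party: pip install soundfile
+
+ # Stream FLAC segments straight out of one .tar.gz bundle without unpacking it to disk.
+ with tarfile.open("audio/example_bundle.tar.gz", "r:gz") as tar:
+     for member in tar.getmembers():
+         if not member.name.endswith(".flac"):
+             continue
+         with tar.extractfile(member) as f:
+             audio, sample_rate = sf.read(f)  # audio: NumPy array, sample_rate: 16000
+         print(member.name, audio.shape, sample_rate)
+         break  # drop this to iterate over the whole bundle
+ ```
+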
+ The filename of each segment shows which episode it belongs to and its place within that episode;
+ a small parsing sketch follows the list below. For example, here's a filename:
+ ```
+ 0061gy0w8_0000_5664_81376
+ ```
+ where
+ * `0061gy0w8` is the episode id
+ * `0000` means that it is the first segment of that episode
+ * `5664` is the starting sample of this segment. Remember that all episodes are sampled at 16k Hz, so the total number of samples
+ in an episode is (the duration in seconds * 16,000).
+ * `81376` is the ending (exclusive) sample of this segment.
+
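+ As a minimal sketch (the helper name `parse_segment_name` is illustrative, not part of the dataset, and it assumes episode ids contain no underscore), the four fields can be recovered and the sample offsets converted back to seconds like this:
+ ```python
+ SAMPLE_RATE = 16_000  # every segment is sampled at 16k Hz
+
+ def parse_segment_name(name: str) -> dict:
+     """Split a segment filename of the form <episode_id>_<index>_<start>_<end>."""
+     episode_id, index, start, end = name.split("_")
+     return {
+         "episode_id": episode_id,
+         "segment_index": int(index),
+         "start_sec": int(start) / SAMPLE_RATE,
+         "end_sec": int(end) / SAMPLE_RATE,
+     }
+
+ print(parse_segment_name("0061gy0w8_0000_5664_81376"))
+ # {'episode_id': '0061gy0w8', 'segment_index': 0, 'start_sec': 0.354, 'end_sec': 5.086}
+ ```
+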
+ ### Metadata
+
+ Metadata for each episode is stored in the `metadata.jsonl` file, where each line stores the metadata for one episode
+ (a minimal loading sketch follows the field descriptions below).
+ Here's the metadata for one of the episodes (split into multiple lines for clarity):
+
+ ```json
+ {
+     "title": "SBS 中文新聞 (7月5日)",
+     "date": "05/07/2023",
+     "view_more_link": "https://www.sbs.com.au/language/chinese/zh-hant/podcast-episode/chinese-news-5-7-2023/tl6s68rdk",
+     "download_link": "https://sbs-podcast.streamguys1.com/sbs-cantonese/20230705105920-cantonese-0288b7c2-cb6d-4e0e-aec2-2680dd8738e0.mp3?awCollectionId=sbs-cantonese&awGenre=News&awEpisodeId=20230705105920-cantonese-0288b7c2-cb6d-4e0e-aec2-2680dd8738e0"
+ }
+ ```
+ where
+ * `title` is the title of the episode
+ * `date` is the date when the episode was published (DD/MM/YYYY in the example above)
+ * `view_more_link` is a link to the associated article/description for this episode.
+ Many news episodes come with extremely detailed manuscripts written in Traditional Chinese, while others only have briefer summaries or key points available.
+ * `download_link` is the link to download the audio for this episode. It is usually hosted on [streamguys](https://www.streamguys.com/), but some earlier episodes
+ are stored on SBS's own server at https://images.sbs.com.au.
+
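+ As a minimal loading sketch (assuming `metadata.jsonl` sits at the root of the dataset, and interpreting `date` as DD/MM/YYYY based on the example above):
+ ```python
+ import json
+ from datetime import datetime
+
+ # metadata.jsonl holds one JSON object per line, one line per episode.
+ episodes = []
+ with open("metadata.jsonl", encoding="utf-8") as f:
+     for line in f:
+         episode = json.loads(line)
+         # "05/07/2023" in the example above corresponds to 5 July 2023.
+         episode["date"] = datetime.strptime(episode["date"], "%d/%m/%Y").date()
+         episodes.append(episode)
+
+ print(len(episodes), episodes[0]["title"])
+ ```
+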
+ The id of each episode is the final path component of its `view_more_link`; it appears to be a precomputed hash that is unique to each episode:
+ ```python
+ # e.g. "tl6s68rdk" for the episode shown above
+ episode_id = view_more_link.split("/")[-1]
+ ```