---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: channel
    dtype: string
  - name: transcript_whisper
    dtype: string
  - name: title
    dtype: string
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: transcript_sensevoice
    dtype: string
  - name: emotion_sensevoice
    sequence: string
  - name: event_sensevoice
    sequence: string
  - name: c50
    dtype: string
  - name: snr
    dtype: string
  - name: speech_duration
    dtype: string
  - name: emotion_emotion2vec
    dtype: string
  splits:
  - name: train
    num_bytes: 544878197109.913
    num_examples: 1478337
  download_size: 527078747956
  dataset_size: 544878197109.913
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
An earlier version of the dataset contains some duplicated data; a fix is on the way. Sorry for the inconvenience.
# Cantonese YouTube Pseudo-Transcription Dataset
- Contains approximately 10k hours of audio sourced from YouTube
- Videos are chosen at random and scraped on a per-channel basis
- Includes news, vlogs, entertainment, stories, and health content
- Columns
  - `transcript_whisper`: Transcribed using `Scrya/whisper-large-v2-cantonese` with `alvanlii/whisper-small-cantonese` for speculative decoding (sketch below)
  - `transcript_sensevoice`: Transcribed using `FunAudioLLM/SenseVoiceSmall` (sketch below)
    - used OpenCC to convert to Traditional Chinese
    - isolated event tags into `event_sensevoice`
    - isolated emotion tags into `emotion_sensevoice`
  - `snr`: Signal-to-noise ratio, extracted from `ylacombe/brouhaha-best` (sketch below)
  - `c50`: Speech clarity, extracted from `ylacombe/brouhaha-best`
  - `emotion_emotion2vec`: Emotion, extracted from `emotion2vec/emotion2vec_plus_large` (sketch below)
- Processing
  - The full audio is split into segments using WhisperX with `Scrya/whisper-large-v2-cantonese`
  - Preliminary filtering removes segments containing phrases such as (sketch below):
    - requests to like/subscribe to the YouTube channel
    - "subtitles by [xxxx]" credits
  - Additional filtering is recommended for your own use
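
The `transcript_whisper` column pairs a large and a small Cantonese Whisper checkpoint for speculative decoding. The exact script is not published here; the sketch below shows one way to set that up with Hugging Face Transformers assisted generation, assuming 16 kHz mono input and treating the `language`/`task` arguments as reasonable defaults rather than the values actually used.

```python
# Sketch: Whisper speculative decoding with Transformers assisted generation.
# The small Cantonese model drafts tokens that the large model verifies.
import torch
from transformers import WhisperForConditionalGeneration, WhisperProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

processor = WhisperProcessor.from_pretrained("Scrya/whisper-large-v2-cantonese")
model = WhisperForConditionalGeneration.from_pretrained(
    "Scrya/whisper-large-v2-cantonese"
).to(device)
assistant = WhisperForConditionalGeneration.from_pretrained(
    "alvanlii/whisper-small-cantonese"
).to(device)

def transcribe(waveform, sampling_rate=16000):
    """Transcribe one 16 kHz mono clip (float array, up to ~30 s)."""
    inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    generated = model.generate(
        inputs.input_features.to(device),
        assistant_model=assistant,  # enables speculative decoding
        language="zh",              # assumption: Cantonese written as "zh"
        task="transcribe",
    )
    return processor.batch_decode(generated, skip_special_tokens=True)[0]
```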
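For `transcript_sensevoice`, the raw SenseVoice output is converted to Traditional Chinese and its inline tags are split out into `emotion_sensevoice` and `event_sensevoice`. A minimal sketch of that post-processing follows; the `<|...|>` marker format and the tag vocabularies are assumptions based on SenseVoice's rich-transcription tokens, not the exact lists used for this dataset.

```python
# Sketch: convert SenseVoice output to Traditional Chinese and pull the inline
# emotion/event tags into separate fields. Tag vocabularies are assumptions.
import re
from opencc import OpenCC

cc = OpenCC("s2t")  # Simplified -> Traditional Chinese

EMOTION_TAGS = {"HAPPY", "SAD", "ANGRY", "NEUTRAL", "FEARFUL", "DISGUSTED", "SURPRISED"}
EVENT_TAGS = {"BGM", "Speech", "Applause", "Laughter", "Cry", "Sneeze", "Breath", "Cough"}

TAG_RE = re.compile(r"<\|([^|]+)\|>")

def postprocess(raw: str) -> dict:
    emotions, events = [], []
    for tag in TAG_RE.findall(raw):
        if tag in EMOTION_TAGS:
            emotions.append(tag)
        elif tag in EVENT_TAGS:
            events.append(tag)
    text = TAG_RE.sub("", raw).strip()  # drop all <|...|> markers
    return {
        "transcript_sensevoice": cc.convert(text),
        "emotion_sensevoice": emotions,
        "event_sensevoice": events,
    }

print(postprocess("<|zh|><|HAPPY|><|BGM|>今天天气真好"))
```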
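`snr` and `c50` are produced by `ylacombe/brouhaha-best`, a Brouhaha model that predicts frame-level voice activity, SNR, and C50 through pyannote.audio. The sketch below follows the upstream Brouhaha usage; the speech threshold and the per-clip averaging are assumptions, and the released columns may have been aggregated differently.

```python
# Sketch: frame-level SNR / C50 with Brouhaha via pyannote.audio, averaged
# over one clip. Requires pyannote.audio plus the brouhaha-vad package.
import numpy as np
from pyannote.audio import Inference, Model

model = Model.from_pretrained("ylacombe/brouhaha-best")
inference = Inference(model)

output = inference("clip.wav")  # iterates as (frame, (vad, snr, c50))
frames = np.array([[vad, snr, c50] for _, (vad, snr, c50) in output])

speech = frames[frames[:, 0] > 0.5]  # assumption: 0.5 VAD threshold
print("mean SNR:", speech[:, 1].mean())
print("mean C50:", speech[:, 2].mean())
```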
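`emotion_emotion2vec` comes from `emotion2vec/emotion2vec_plus_large`. The sketch below runs utterance-level emotion recognition through FunASR, following the model card usage; loading the checkpoint by its ModelScope id `iic/emotion2vec_plus_large` and keeping only the top-scoring label are assumptions about how the column was produced.

```python
# Sketch: utterance-level emotion with emotion2vec via FunASR.
from funasr import AutoModel

model = AutoModel(model="iic/emotion2vec_plus_large")

res = model.generate("clip.wav", granularity="utterance", extract_embedding=False)
labels, scores = res[0]["labels"], res[0]["scores"]
best = max(range(len(scores)), key=lambda i: scores[i])
print(labels[best])  # highest-probability emotion label
```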
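The preliminary filtering drops segments that are mostly channel boilerplate rather than content. The actual phrase lists are not published, so the patterns below are illustrative assumptions of that kind of filter.

```python
# Sketch: phrase-based filtering of boilerplate segments. The patterns are
# illustrative assumptions, not the lists used to build the dataset.
import re

BLOCK_PATTERNS = [
    re.compile(r"訂閱|订阅"),                  # "subscribe"
    re.compile(r"畀個讚|點讚|点赞"),            # "give a like / thumbs up"
    re.compile(r"字幕.{0,4}(提供|製作|制作)"),   # "subtitles provided/made by ..."
]

def keep_example(example: dict) -> bool:
    text = example.get("transcript_whisper") or ""
    return not any(p.search(text) for p in BLOCK_PATTERNS)

# With datasets: ds = ds.filter(keep_example)
```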
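To read the data with the `datasets` library, something like the following should work; the repository id is a placeholder for this dataset's actual Hub id, and streaming is optional.

```python
# Sketch: stream a couple of rows and inspect the columns described above.
from datasets import load_dataset

ds = load_dataset("<repo-id-of-this-dataset>", split="train", streaming=True)

for row in ds.take(2):
    audio = row["audio"]  # {"array": ..., "sampling_rate": 16000, ...}
    print(row["title"], row["channel"], row["speech_duration"])
    print(row["transcript_whisper"])
    print(row["transcript_sensevoice"], row["emotion_sensevoice"], row["event_sensevoice"])
    print(row["snr"], row["c50"], row["emotion_emotion2vec"])
```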