---
language:
- yue
license: cc0-1.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
tags:
- cantonese
- audio
dataset_info:
  features:
  - name: file_name
    dtype: audio
  - name: speaker
    dtype: string
  - name: language
    dtype: string
  - name: transcription
    dtype: string
  splits:
  - name: train
    num_bytes: 143668702
    num_examples: 307
  download_size: 131698525
  dataset_size: 143668702
---
# 張悦楷三國演義
Forked from laubonghaudoi/zoengjyutgaai_saamgwokjinji, Zoeng Jyut Gaai's Cantonese narration of *Romance of the Three Kingdoms*.

We found that the original WAV files were not split correctly, so we asked the author to provide the SRT file and the un-split WAV files. We then re-split the WAV files, aligned the SRT subtitles to the new segments, and filtered out samples that were too short. The re-splitting script is roughly as follows:
```python
import os

import librosa
import pandas as pd
import pysrt
import soundfile as sf

# NOTE: the input paths are illustrative; load the un-split episode audio at its
# native sample rate and the matching SRT file (pysrt is assumed from .to_time()).
audio, sr = librosa.load("001.wav", sr=None)
subs = pysrt.open("001.srt")

subtitles = []
splits = librosa.effects.split(audio)  # non-silent intervals in samples, shape: (682, 2)

os.makedirs("dataset/zoengjyutgaai_saamgwokjinji/wavs", exist_ok=True)

# Split the audio according to the SRT timestamps.
for i, sub in enumerate(subs):
    chunk_start = sub.start.to_time()
    chunk_end = sub.end.to_time()
    # Convert the subtitle times to sample indices.
    chunk_start = (chunk_start.hour * 3600 + chunk_start.minute * 60 + chunk_start.second) * sr
    chunk_end = (chunk_end.hour * 3600 + chunk_end.minute * 60 + chunk_end.second) * sr
    # Snap each boundary to the closest non-silent interval.
    chunk_start = min(splits, key=lambda x: abs(x[0] - chunk_start))[0]
    chunk_end = min(splits, key=lambda x: abs(x[1] - chunk_end))[1]
    chunk = audio[chunk_start:chunk_end]
    wav_file = f"001_{i:03}.wav"
    # Resample to 44.1 kHz, since Bert-VITS2 training only supports 44.1 kHz.
    try:
        chunk = librosa.resample(chunk, orig_sr=sr, target_sr=44100)
    except Exception:
        print(f"Error resampling {wav_file}")
        continue
    subtitles.append({'path': wav_file, 'speaker': 'zoengjyutgaai', 'language': 'YUE', 'text': sub.text})
    # Export the chunk as 16-bit PCM WAV.
    sf.write(f"dataset/zoengjyutgaai_saamgwokjinji/wavs/{wav_file}", chunk, 44100, subtype='PCM_16')

# Write a pipe-separated filelist: path|speaker|language|text.
df = pd.DataFrame(subtitles)
df.to_csv("dataset/zoengjyutgaai_saamgwokjinji/001.csv", index=False, sep='|', header=False)
```
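The processed dataset can then be loaded with the 🤗 `datasets` library. This is a minimal sketch assuming a placeholder repo id (`<user>/zoengjyutgaai_saamgwokjinji`); replace it with this dataset's actual id on the Hub.

```python
from datasets import load_dataset

# NOTE: "<user>/zoengjyutgaai_saamgwokjinji" is a placeholder repo id;
# replace it with this dataset's actual path on the Hugging Face Hub.
ds = load_dataset("<user>/zoengjyutgaai_saamgwokjinji", split="train")

sample = ds[0]
print(sample["speaker"], sample["language"])  # "zoengjyutgaai", "YUE"
print(sample["transcription"])                # Cantonese transcription of the clip
audio = sample["file_name"]                   # decoded audio column (waveform array + sampling rate)
```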