---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: dev
path: data/dev-*
- split: test
path: data/test-*
dataset_info:
features:
- name: correct_audio
dtype:
audio:
sampling_rate: 16000
- name: correct_transcription
dtype: string
- name: correct_file
dtype: string
- name: wrong_audio
dtype:
audio:
sampling_rate: 16000
- name: wrong_transcription
dtype: string
- name: wrong_file
dtype: string
splits:
- name: train
num_bytes: 7561544424.98
num_examples: 23549
- name: dev
num_bytes: 2665949331.86
num_examples: 8505
- name: test
num_bytes: 929488114.48
num_examples: 3176
download_size: 10860817060
dataset_size: 11156981871.32
license: mit
language:
- zh
- en
---
This dataset contains the Mandarin-English track of the benchmark introduced in the ICASSP 2024 paper *Zero Resource Code-Switched Speech Benchmark Using Speech Utterance Pairs for Multiple Spoken Languages*.
Although the benchmark was originally designed to assess the semantic and syntactic abilities of speech foundation models, the dataset can also be used for code-switching ASR.
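Below is a minimal sketch of loading the dataset with the Hugging Face `datasets` library. The repository path (`<user>/cszs_zh_en`) is a placeholder assumption; substitute the actual dataset id on the Hub.

```python
# Minimal loading sketch; the repo id below is a placeholder, not the confirmed path.
from datasets import load_dataset

ds = load_dataset("<user>/cszs_zh_en", split="test")  # splits: train / dev / test

# Each example pairs a correct utterance with its code-switched "wrong" counterpart.
sample = ds[0]
print(sample["correct_transcription"])
print(sample["wrong_transcription"])

# Audio columns are decoded to 16 kHz waveforms on access.
waveform = sample["correct_audio"]["array"]
sampling_rate = sample["correct_audio"]["sampling_rate"]
```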
If you find this dataset helpful, please consider citing the following paper:
```
@INPROCEEDINGS{10446737,
author={Huang, Kuan-Po and Yang, Chih-Kai and Fu, Yu-Kuan and Dunbar, Ewan and Lee, Hung-Yi},
booktitle={ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
title={Zero Resource Code-Switched Speech Benchmark Using Speech Utterance Pairs for Multiple Spoken Languages},
year={2024},
volume={},
number={},
pages={10006-10010},
keywords={Speech coding;Benchmark testing;Signal processing;Linguistics;Acoustics;Speech processing;Task analysis;Code-switch;Multilingual;Discrete unit;Zero resource;Self-supervised},
doi={10.1109/ICASSP48485.2024.10446737}}
```