---
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: gender
    dtype: string
  - name: speaker_id
    dtype: string
  - name: transcription
    dtype: string
  splits:
  - name: train
    num_bytes: 422057259.808
    num_examples: 3792
  download_size: 436738370
  dataset_size: 422057259.808
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Corpus
This dataset is built from Magicdata [ASR-CZDIACSC: A CHINESE SHANGHAI DIALECT CONVERSATIONAL SPEECH CORPUS](https://magichub.com/datasets/shanghai-dialect-conversational-speech-corpus/).

The original corpus is licensed under a [Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License](http://creativecommons.org/licenses/by-nc-nd/4.0/). Please refer to the license for further information.

Modifications: the audio is split into sentences based on the time spans in the transcription file. Sentences spanning less than 1 second are discarded, and the conversation-topic labels are removed.
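
For illustration, here is a minimal sketch of that kind of segmentation, assuming a transcript that lists start/end times in seconds for a 16 kHz mono recording. The file names and the `segments` list are hypothetical; the actual preprocessing script is not included in this repository.

```python
import soundfile as sf

MIN_DURATION = 1.0  # sentences spanning less than 1 second are discarded

# hypothetical transcript entries: (start_sec, end_sec, sentence_text)
segments = [(0.0, 2.4, "..."), (2.4, 3.1, "..."), (3.1, 5.8, "...")]

# hypothetical source recording, 16 kHz mono
audio, sr = sf.read("A0001_S001.wav")

for i, (start, end, text) in enumerate(segments):
    if end - start < MIN_DURATION:
        continue  # drop segments shorter than 1 second
    clip = audio[int(start * sr):int(end * sr)]
    sf.write(f"A0001_S001_{i}.wav", clip, sr)
```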
# Usage
To load this dataset, use:
```python
from datasets import load_dataset
dialect_corpus = load_dataset("TingChen-ppmc/Shanghai_Dialect_Conversational_Speech_Corpus")
```
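
Without a `split` argument, `load_dataset` returns a `DatasetDict` keyed by split name, and the audio is decoded when an example is accessed. A quick way to inspect the first record:

```python
sample = dialect_corpus["train"][0]
print(sample["transcription"])
print(sample["speaker_id"], sample["gender"])
print(sample["audio"]["sampling_rate"])  # 16000
```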
This dataset only has a train split. To create a test split, use:
```python
from datasets import load_dataset
train_split = load_dataset("TingChen-ppmc/Shanghai_Dialect_Conversational_Speech_Corpus", split="train")
# test_size=0.5 means half of the examples are held out for the test split
corpus = train_split.train_test_split(test_size=0.5)
```
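
`train_test_split` returns a `DatasetDict` with `train` and `test` keys; passing a `seed` makes the split reproducible:

```python
corpus = train_split.train_test_split(test_size=0.5, seed=42)
print(corpus["train"].num_rows, corpus["test"].num_rows)
```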
A sample record looks like this:
```python
# note: this sample is taken from the Nanchang Dialect corpus; the data format is the same
{'audio': {'path': 'A0001_S001_0_G0001_0.WAV',
           'array': array([-0.00030518, -0.00039673, -0.00036621, ...,
                           -0.00064087, -0.00015259, -0.00042725]),
           'sampling_rate': 16000},
 'gender': '女',
 'speaker_id': 'G0001',
 'transcription': '北京爱数智慧语音采集'}
```
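
The `audio` column is stored at 16 kHz. If a model expects a different rate, the column can be re-cast so that resampling happens on the fly when examples are read; this is standard `datasets` usage, shown here with a hypothetical 8 kHz target:

```python
from datasets import Audio

# resampling is applied lazily, when an example's audio column is accessed
dialect_corpus = dialect_corpus.cast_column("audio", Audio(sampling_rate=8000))
print(dialect_corpus["train"][0]["audio"]["sampling_rate"])  # 8000
```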
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)