TingChen-ppmc committed
Commit 4feb0c4
1 parent: 7342dfd

Update README.md

Files changed (1): README.md (+43 -1)
README.md CHANGED
@@ -21,6 +21,48 @@ configs:
  - split: train
    path: data/train-*
  ---
- # Dataset Card for "Shanghai_Dialect_Conversational_Speech_Corpus"
+ # Corpus
+
+ This dataset is built from the Magicdata corpus [ASR-CZDIACSC: A CHINESE SHANGHAI DIALECT CONVERSATIONAL SPEECH CORPUS](https://magichub.com/datasets/shanghai-dialect-conversational-speech-corpus/).
+
+ This corpus is licensed under a [Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License](http://creativecommons.org/licenses/by-nc-nd/4.0/). Please refer to the license for further information.
+
+ Modifications: the audio is split into sentences based on the time spans in the transcription file; sentences spanning less than 1 second are discarded; conversation topics are removed. A sketch of this preprocessing step follows.
+
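+ A minimal sketch of the splitting step, assuming the transcription file yields per-utterance (start, end, text) spans in seconds (the `split_audio` helper and the segment naming below are hypothetical, not the actual preparation script):
+
+ ```python
+ import soundfile as sf
+
+ MIN_DURATION = 1.0  # seconds; shorter sentences are discarded
+
+ def split_audio(wav_path, spans, out_prefix):
+     """Cut one conversation WAV into per-sentence clips.
+
+     spans: list of (start_sec, end_sec, text) read from the transcription file.
+     """
+     audio, sr = sf.read(wav_path)
+     kept = []
+     for i, (start, end, text) in enumerate(spans):
+         if end - start < MIN_DURATION:
+             continue  # sentence spans less than 1 second, drop it
+         clip = audio[int(start * sr):int(end * sr)]
+         out_path = f"{out_prefix}_{i}.wav"
+         sf.write(out_path, clip, sr)
+         kept.append((out_path, text))
+     return kept
+ ```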
+
+ # Usage
+
+ To load this dataset, use:
+
+ ```python
+ from datasets import load_dataset
+
+ dialect_corpus = load_dataset("TingChen-ppmc/Shanghai_Dialect_Conversational_Speech_Corpus")
+ ```
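+
+ The `audio` column is decoded when an example is accessed; as a quick check (an illustrative snippet, assuming the default train split):
+
+ ```python
+ sample = dialect_corpus["train"][0]
+ print(sample["transcription"])
+ print(sample["audio"]["sampling_rate"])  # 16000
+ ```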
+
+ This dataset only has a train split. To carve out a test split, use:
+
+ ```python
+ from datasets import load_dataset
+
+ train_split = load_dataset("TingChen-ppmc/Shanghai_Dialect_Conversational_Speech_Corpus", split="train")
+ # test_size=0.5 sends half of the examples to the new test split
+ corpus = train_split.train_test_split(test_size=0.5)
+ ```
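+
+ For a reproducible split, `train_test_split` also accepts a `seed` argument:
+
+ ```python
+ corpus = train_split.train_test_split(test_size=0.5, seed=42)
+ train_data, test_data = corpus["train"], corpus["test"]
+ ```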
+
+ A sample record looks like this:
+
+ ```python
+ # Note: this sample is from the Nanchang Dialect corpus; the data format is shared.
+ {'audio':
+     {'path': 'A0001_S001_0_G0001_0.WAV',
+      'array': array([-0.00030518, -0.00039673,
+                      -0.00036621, ..., -0.00064087,
+                      -0.00015259, -0.00042725]),
+      'sampling_rate': 16000},
+  'gender': '女',  # Chinese for "female"
+  'speaker_id': 'G0001',
+  'transcription': '北京爱数智慧语音采集'  # roughly "Beijing Magic Data speech collection"
+ }
+ ```
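+
+ Clip duration follows from the array length and the sampling rate; continuing from the `sample` loaded above (illustrative):
+
+ ```python
+ duration_sec = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
+ print(f"{duration_sec:.2f} s")  # kept sentences should be at least 1 s
+ ```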
+
  [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)