Jaegeon committed
Commit 3d3615f
1 Parent(s): e2c213e

Initial commit

Files changed (2):
  1. README.md +168 -0
  2. mmcrsc.py +157 -0
README.md ADDED
@@ -0,0 +1,168 @@
---
annotations_creators:
- expert-generated
language:
- zh
language_creators:
- crowdsourced
license:
- cc-by-nc-nd-4.0
multilinguality:
- monolingual
pretty_name: MAGICDATA_Mandarin_Chinese_Read_Speech_Corpus
size_categories:
- 10K<n<100K
source_datasets:
- original
tags: []
task_categories:
- automatic-speech-recognition
task_ids: []
---

# Dataset Card for MMCRSC

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [MAGICDATA Mandarin Chinese Read Speech Corpus](https://openslr.org/68/)
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

The MAGICDATA Mandarin Chinese Read Speech Corpus was developed by MAGIC DATA Technology Co., Ltd. and is freely published for non-commercial use.

The corpus contains 755 hours of speech data, most of it recorded on mobile phones. 1,080 speakers from different accent areas in China were invited to participate in the recording. The sentence transcription accuracy is higher than 98%. Recordings were conducted in a quiet indoor environment. The corpus is divided into training, validation, and test sets in a ratio of 51:1:2. Detailed information such as the speech data coding and speaker information is preserved in the metadata file. The domains of the recording texts are diverse, including interactive Q&A, music search, SNS messages, home command and control, and more. Segmented transcripts are also provided.

The corpus aims to support researchers in speech recognition, machine translation, speaker recognition, and other speech-related fields, and is therefore completely free for academic use.

The corpus is a subset of a much larger dataset (a 10,566.9-hour Mandarin Chinese speech corpus) recorded in the same environment. Please feel free to contact us via business@magicdatatech.com for more details.

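Taken at face value, the 51:1:2 split ratio over 755 hours corresponds to roughly 713 hours of training, 14 hours of validation, and 28 hours of test audio (755 × 51/54, 755 × 1/54, and 755 × 2/54, respectively). These figures are approximations derived from the stated ratio, not officially published split sizes.
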
### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The corpus is in Mandarin Chinese as spoken in mainland China (zh-CN).

## Dataset Structure

### Data Instances

```python
{
    'file': '14_3466_20170826171404.wav',
    'audio': {
        'path': '14_3466_20170826171404.wav',
        'array': array([0., 0., 0., ..., 0., 0., 0.]),
        'sampling_rate': 16000
    },
    'text': '请搜索我附近的超市',
    'speaker_id': 143466,
    'id': '14_3466_20170826171404.wav'
}
```

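The example below is a minimal loading sketch for inspecting an instance like the one above. The dataset id `mmcrsc` is a placeholder and should be replaced with this repository's actual Hub id or a local path to `mmcrsc.py`.

```python
from datasets import load_dataset

# "mmcrsc" is a placeholder; point this at the dataset repository or the loading script.
mmcrsc = load_dataset("mmcrsc", split="train")

sample = mmcrsc[0]                       # the audio file is decoded on access
print(sample["text"])                    # e.g. '请搜索我附近的超市'
print(sample["audio"]["sampling_rate"])  # 16000
print(sample["speaker_id"])
```
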
### Data Fields

- `file`: a path to the downloaded audio file in .wav format.
- `audio`: a dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column (`dataset[0]["audio"]`), the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling a large number of audio files can take a significant amount of time, so it is important to query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
- `text`: the transcription of the audio file.
- `id`: unique id of the data sample.
- `speaker_id`: unique id of the speaker. The same speaker id can appear in multiple data samples.

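A short illustration of the access pattern described above, using the same placeholder dataset id; the 8 kHz target rate is only an example:

```python
from datasets import load_dataset, Audio

mmcrsc = load_dataset("mmcrsc", split="train")  # placeholder id, as above

# Preferred: index the row first, then the "audio" column, so only this one file is decoded.
audio = mmcrsc[0]["audio"]

# Resample on the fly by casting the column (8 kHz is just an example target rate).
mmcrsc = mmcrsc.cast_column("audio", Audio(sampling_rate=8_000))
print(mmcrsc[0]["audio"]["sampling_rate"])  # 8000
```
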
### Data Splits

The corpus is divided into training, validation, and test sets in a ratio of 51:1:2 (see the Dataset Summary above); the loading script below exposes these as the `train`, `validation`, and `test` splits.

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The corpus is released for non-commercial use under the CC BY-NC-ND 4.0 license (see the `license` field in the YAML metadata above).

### Citation Information

Please cite the corpus as: Magic Data Technology Co., Ltd., http://www.imagicdatatech.com/index.php/home/dataopensource/data_info/id/101, 05/2019. A BibTeX entry is also provided in the `_CITATION` string of the loading script (`mmcrsc.py`) below.
mmcrsc.py ADDED
@@ -0,0 +1,157 @@
# coding=utf-8
# Copyright 2021 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Lint as: python3
"""MAGICDATA Mandarin Chinese Read Speech Corpus."""


import os

import datasets
from datasets.tasks import AutomaticSpeechRecognition


_CITATION = """\
@misc{magicdata_2019,
    title={MAGICDATA Mandarin Chinese Read Speech Corpus},
    url={https://openslr.org/68/},
    publisher={Magic Data Technology Co., Ltd.},
    year={2019},
    month={May}}
"""

_DESCRIPTION = """\
A corpus by Magic Data Technology Co., Ltd., containing 755 hours of scripted read speech data
from 1080 native speakers of the Mandarin Chinese spoken in mainland China.
The sentence transcription accuracy is higher than 98%.
"""

_URL = "https://openslr.org/68/"
_DL_URL = "http://www.openslr.org/resources/68/"


_DL_URLS = {
    "train": _DL_URL + "train_set.tar.gz",
    "dev": _DL_URL + "dev_set.tar.gz",
    "test": _DL_URL + "test_set.tar.gz",
}


class MMCRSCConfig(datasets.BuilderConfig):
    """BuilderConfig for MMCRSC."""

    def __init__(self, **kwargs):
        """
        Args:
          data_dir: `string`, the path to the folder containing the files in the
            downloaded .tar
          citation: `string`, citation for the data set
          url: `string`, url for information about the data set
          **kwargs: keyword arguments forwarded to super.
        """
        # version history
        # 0.1.0: First release on Huggingface
        super(MMCRSCConfig, self).__init__(version=datasets.Version("0.1.0", ""), **kwargs)


class MMCRSC(datasets.GeneratorBasedBuilder):
    """MMCRSC dataset."""

    DEFAULT_WRITER_BATCH_SIZE = 256
    DEFAULT_CONFIG_NAME = "all"

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "file": datasets.Value("string"),
                    "audio": datasets.Audio(sampling_rate=16_000),
                    "text": datasets.Value("string"),
                    "speaker_id": datasets.Value("int64"),
                    "id": datasets.Value("string"),
                }
            ),
            supervised_keys=("file", "text"),
            homepage=_URL,
            citation=_CITATION,
            task_templates=[AutomaticSpeechRecognition(audio_column="audio", transcription_column="text")],
        )

    def _split_generators(self, dl_manager):
        archive_path = dl_manager.download(_DL_URLS)
        # (Optional) In non-streaming mode, we can extract the archive locally to have actual local audio files:
        local_extracted_archive = dl_manager.extract(archive_path) if not dl_manager.is_streaming else {}

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "local_extracted_archive": local_extracted_archive.get("train"),
                    "files": dl_manager.iter_archive(archive_path["train"]),
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "local_extracted_archive": local_extracted_archive.get("dev"),
                    "files": dl_manager.iter_archive(archive_path["dev"]),
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "local_extracted_archive": local_extracted_archive.get("test"),
                    "files": dl_manager.iter_archive(archive_path["test"]),
                },
            ),
        ]

    def _generate_examples(self, files, local_extracted_archive):
        """Generate examples from a MMCRSC archive_path."""
        audio_data = {}
        transcripts = []
        for path, f in files:
            if path.endswith(".wav"):
                id_ = path.split("/")[-1]
                audio_data[id_] = f.read()
            elif path.endswith("TRANS.txt"):
                for line in f:
                    if line and (b".wav" in line):
                        line = line.decode("utf-8").strip()
                        id_, speaker_id, transcript = line.split("\t")
                        audio_file = id_
                        audio_file = (
                            os.path.join(local_extracted_archive, audio_file)
                            if local_extracted_archive
                            else audio_file
                        )
                        transcripts.append(
                            {
                                "id": id_,
                                "speaker_id": speaker_id,
                                "file": audio_file,
                                "text": transcript,
                            }
                        )
                if audio_data:
                    for key, transcript in enumerate(transcripts):
                        if transcript["id"] in audio_data:
                            audio = {"path": transcript["file"], "bytes": audio_data[transcript["id"]]}
                            yield key, {"audio": audio, **transcript}
                    audio_data = {}
                    transcripts = []

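A minimal usage sketch for the loading script above, assuming a local checkout of `mmcrsc.py`; the local path is an assumption, and the tab-separated TRANS.txt layout noted in the comments reflects how `_generate_examples` splits each transcript line.

```python
from datasets import load_dataset

# _generate_examples expects each TRANS.txt line in the archives to be tab-separated:
#   <wav file name>\t<speaker id>\t<transcript>
# and matches the wav files in the same archive against those file names.

# Non-streaming: downloads and extracts the train/dev/test tarballs locally.
mmcrsc = load_dataset("./mmcrsc.py")  # assumed local path to the script above
print(mmcrsc)

# Streaming iterates over the archives without extracting them.
stream = load_dataset("./mmcrsc.py", split="train", streaming=True)
print(next(iter(stream))["text"])
```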