pain committed on
Commit
99bfbcb
1 Parent(s): cff6e0f

Create README.md

Files changed (1)
  1. README.md +166 -0
README.md ADDED
---
license:
- cc-by-4.0
size_categories:
  ar:
  - n==1k
task_categories:
- automatic-speech-recognition
task_ids: []
pretty_name: MASC dataset
extra_gated_prompt: >-
  By clicking on “Access repository” below, you also agree to not attempt to
  determine the identity of speakers in the MASC dataset.
language:
- ar
---

# Dataset Card for MASC: Massive Arabic Speech Corpus

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Languages](#languages)
- [How to use](#how-to-use)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Additional Information](#additional-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** https://ieee-dataport.org/open-access/masc-massive-arabic-speech-corpus
- **Paper:** https://ieeexplore.ieee.org/document/10022652

### Dataset Summary

MASC is a dataset that contains 1,000 hours of speech sampled at 16 kHz and crawled from over 700 YouTube channels.
The dataset is multi-regional, multi-genre, and multi-dialect, and is intended to advance the research and development of Arabic speech technology, with a special emphasis on Arabic speech recognition.

### Supported Tasks

- Automatic Speech Recognition

### Languages

```
Arabic
```

## How to use

The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.

```python
from datasets import load_dataset

masc = load_dataset("pain/MASC", split="train")
```

Using the `datasets` library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.

```python
from datasets import load_dataset

masc = load_dataset("pain/MASC", split="train", streaming=True)

print(next(iter(masc)))
```

*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).

### Local

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

masc = load_dataset("pain/MASC", split="train")
# Sample random batches of 32 examples from the locally prepared dataset.
batch_sampler = BatchSampler(RandomSampler(masc), batch_size=32, drop_last=False)
dataloader = DataLoader(masc, batch_sampler=batch_sampler)
```

### Streaming

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

masc = load_dataset("pain/MASC", split="train", streaming=True)
dataloader = DataLoader(masc, batch_size=32)
```

To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).

### Example scripts

Train your own CTC or Seq2Seq Automatic Speech Recognition models on MASC with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).

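For orientation only, here is a minimal, hedged sketch of the preprocessing such a CTC setup typically needs: decode the `audio`, turn the waveform into model inputs, and tokenize `text` into labels. The `jonatasgrosman/wav2vec2-large-xlsr-53-arabic` checkpoint is just an assumed example of an Arabic wav2vec2 processor, not something mandated by MASC.

```python
# Sketch of CTC preprocessing for MASC; the checkpoint below is an assumption,
# substitute whichever Arabic wav2vec2 processor/model you actually fine-tune.
from datasets import load_dataset, Audio
from transformers import Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("jonatasgrosman/wav2vec2-large-xlsr-53-arabic")

masc = load_dataset("pain/MASC", split="train", streaming=True)
masc = masc.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(example):
    audio = example["audio"]
    # Waveform -> input_values for the acoustic model.
    example["input_values"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_values[0]
    # Transcription -> label ids for the CTC loss.
    example["labels"] = processor(text=example["text"]).input_ids
    return example

masc = masc.map(prepare)
print(next(iter(masc)).keys())
```

The official example scripts linked above handle padding, evaluation, and the training loop on top of a mapping like this.
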
## Dataset Structure

### Data Instances

A typical data point comprises the decoded `audio` (with the corresponding `file_path`) and its transcription in the `text` field, together with timing metadata for the chunk.

```python
{'video_id': 'OGqz9G-JO0E', 'start': 770.6, 'end': 781.835, 'duration': 11.24,
 'text': 'اللهم من ارادنا وبلادنا وبلاد المسلمين بسوء اللهم فاشغله في نفسه ورد كيده في نحره واجعل تدبيره تدميره يا رب العالمين',
 'type': 'c', 'file_path': '87edeceb-5349-4210-89ad-8c3e91e54062_OGqz9G-JO0E.wav',
 'audio': {'path': None,
           'array': array([0.05938721, 0.0539856, 0.03460693, ..., 0.00393677, 0.01745605, 0.03045654]),
           'sampling_rate': 16000}}
```

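As a quick, optional sanity check (a sketch, not part of the original card), the length of the decoded waveform divided by the sampling rate should roughly match the `duration` field:

```python
from datasets import load_dataset

# Stream a single example to avoid downloading the full split.
masc = load_dataset("pain/MASC", split="train", streaming=True)
sample = next(iter(masc))

seconds = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
print(round(seconds, 2), "~", sample["duration"])
```
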
### Data Fields

`video_id` (`string`): The YouTube id of the video that the audio chunk was extracted from

`start` (`float64`): The start time of the chunk within the source video, in seconds

`end` (`float64`): The end time of the chunk within the source video, in seconds

`duration` (`float64`): The duration of the chunk in seconds

`text` (`string`): The transcription of the chunk

`type` (`string`): The subset the chunk belongs to: `c` for clean and `n` for noisy

`file_path` (`string`): The path of the audio file for the chunk

`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`. A minimal decoding sketch follows this list.

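The following is a minimal sketch of that decoding advice, using `datasets.Audio` to pin the decoded sampling rate (16 kHz here simply mirrors the corpus' native rate) and indexing the example before the `audio` column so only one file is decoded:

```python
from datasets import load_dataset, Audio

masc = load_dataset("pain/MASC", split="train")

# Decode audio at 16 kHz; change the rate here if a model expects something else.
masc = masc.cast_column("audio", Audio(sampling_rate=16_000))

# Query the sample index first, then "audio", so only this one file is decoded.
sample = masc[0]["audio"]
print(sample["sampling_rate"], sample["array"].shape)
```
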
### Data Splits

The speech material has been subdivided into train, dev, and test portions.

Each split contains both clean and noisy data, which can be told apart via the `type` field, as in the sketch below.

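A minimal sketch of selecting only the clean material, assuming the `c`/`n` convention of the `type` field described above:

```python
from datasets import load_dataset

masc = load_dataset("pain/MASC", split="train")

# Keep only clean segments; use "n" instead to select the noisy ones.
clean = masc.filter(lambda example: example["type"] == "c")
print(len(clean), "clean segments out of", len(masc))
```
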
### Citation Information

```
@INPROCEEDINGS{10022652,
  author={Al-Fetyani, Mohammad and Al-Barham, Muhammad and Abandah, Gheith and Alsharkawi, Adham and Dawas, Maha},
  booktitle={2022 IEEE Spoken Language Technology Workshop (SLT)},
  title={MASC: Massive Arabic Speech Corpus},
  year={2023},
  volume={},
  number={},
  pages={1006-1013},
  doi={10.1109/SLT54892.2023.10022652}
}
```