tobiolatunji committed · Commit 75fceb6 · Parent: 27520da

add configs for smaller datasets

Files changed:
- README.md (+69, -0)
- afrispeech-200.py (+64, -7)
README.md
CHANGED
@@ -50,6 +50,7 @@ dataset_info:
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [How to use](#how-to-use)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)

@@ -88,6 +89,72 @@ dataset_info:
AFRISPEECH-200 is a 200hr Pan-African speech corpus for clinical and general domain English accented ASR; a dataset with 120 African accents from 13 countries and 2,463 unique African speakers.
Our goal is to raise awareness for and advance Pan-African English ASR research, especially for the clinical domain.
## How to use

The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.

For example, to download the isizulu config, simply specify the corresponding accent config name (the full list of supported accents is given in the accent list section below):
```python
from datasets import load_dataset

afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train")
```
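The returned object is a regular `datasets.Dataset`, so the usual inspection helpers work; a quick look at what you get (the column names follow the examples later in this card):

```python
print(afrispeech)           # split summary: features and number of rows
print(afrispeech.features)  # feature types, including the audio column
print(afrispeech[0]["transcript"])  # first transcription in the split
```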
Using the `datasets` library, you can also stream the dataset on the fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.

```python
from datasets import load_dataset

afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train", streaming=True)

print(next(iter(afrispeech)))
```
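If your model expects a specific sampling rate, the audio column can be re-decoded on the fly with `cast_column`; a minimal sketch, assuming a 16 kHz target (common for ASR checkpoints, but not mandated by this dataset):

```python
from datasets import load_dataset, Audio

afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train")

# decode the audio column at 16 kHz instead of its native rate
afrispeech = afrispeech.cast_column("audio", Audio(sampling_rate=16000))

sample = afrispeech[0]["audio"]
print(sample["sampling_rate"])  # 16000
```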
### Local

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train")
batch_sampler = BatchSampler(RandomSampler(afrispeech), batch_size=32, drop_last=False)
dataloader = DataLoader(afrispeech, batch_sampler=batch_sampler)
```
### Streaming

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train", streaming=True)
dataloader = DataLoader(afrispeech, batch_size=32)
```
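In streaming mode `load_dataset` returns an `IterableDataset`, so shuffling is buffer-based rather than a global permutation; a minimal sketch (the buffer size and seed here are arbitrary choices):

```python
from datasets import load_dataset

afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train", streaming=True)

# approximate shuffle: samples are drawn from a rolling buffer
shuffled = afrispeech.shuffle(seed=42, buffer_size=500)
print(next(iter(shuffled))["transcript"])
```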
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).

### Example scripts

Train your own CTC or Seq2Seq Automatic Speech Recognition models on AfriSpeech-200 with `transformers`: see the [official speech recognition examples](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).

AfriSpeech-200 can be downloaded and used as follows:
```py
from datasets import load_dataset

afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu")  # for isizulu
# to download all data for multi-accent fine-tuning, uncomment the following line
# afrispeech = load_dataset("tobiolatunji/afrispeech-200", "all")

# see structure
print(afrispeech)

# load audio sample on the fly
audio_input = afrispeech["train"][0]["audio"]  # decoded audio sample
transcript = afrispeech["train"][0]["transcript"]  # transcript

# use audio_input and the text transcript to fine-tune your model for speech recognition
```
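As a concrete bridge to fine-tuning, the decoded audio and its transcript can be run through a `transformers` processor; a minimal sketch, assuming the `facebook/wav2vec2-base-960h` checkpoint as an arbitrary example (the audio column is resampled first to match the checkpoint's expected rate, as shown earlier):

```python
from datasets import Audio, load_dataset
from transformers import Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")

afrispeech = load_dataset("tobiolatunji/afrispeech-200", "isizulu", split="train")
afrispeech = afrispeech.cast_column("audio", Audio(sampling_rate=16000))

sample = afrispeech[0]
# turn the waveform into model inputs
inputs = processor(
    sample["audio"]["array"],
    sampling_rate=sample["audio"]["sampling_rate"],
    return_tensors="pt",
)
# tokenize the transcript as CTC labels
labels = processor(text=sample["transcript"], return_tensors="pt").input_ids
```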
### Supported Tasks and Leaderboards

- Automatic Speech Recognition

@@ -112,6 +179,8 @@ A typical data point comprises the path to the audio file, called `path` and its
```
 ...
 'transcript': 'The patient took the correct medication'}
```

### Data Fields

- speaker_id: An id for which speaker (voice) made the recording
afrispeech-200.py
CHANGED
@@ -35,6 +35,30 @@ Our goal is to raise awareness for and advance Pan-African English ASR research,
especially for the clinical domain.
"""

_ALL_CONFIGS = [
    'yoruba', 'igbo', 'swahili', 'ijaw', 'xhosa', 'twi', 'luhya',
    'igala', 'urhobo', 'hausa', 'kiswahili', 'zulu', 'isizulu',
    'venda and xitsonga', 'borana', 'afrikaans', 'setswana', 'idoma',
    'izon', 'chichewa', 'ebira', 'tshivenda', 'isixhosa',
    'kinyarwanda', 'tswana', 'luganda', 'luo', 'venda', 'dholuo',
    'akan (fante)', 'sepedi', 'kikuyu', 'isindebele',
    'luganda and kiswahili', 'akan', 'sotho', 'south african english',
    'sesotho', 'swahili ,luganda ,arabic', 'shona', 'damara',
    'southern sotho', 'luo, swahili', 'ateso', 'meru', 'siswati',
    'portuguese', 'esan', 'nasarawa eggon', 'ibibio', 'isoko',
    'pidgin', 'alago', 'nembe', 'ngas', 'kagoma', 'ikwere', 'fulani',
    'bette', 'efik', 'edo', 'hausa/fulani', 'bekwarra', 'epie',
    'afemai', 'benin', 'nupe', 'tiv', 'okrika', 'etsako', 'ogoni',
    'kubi', 'gbagyi', 'brass', 'oklo', 'ekene', 'ika', 'berom', 'jaba',
    'itsekiri', 'ukwuani', 'yala mbembe', 'afo', 'english', 'ebiobo',
    'igbo and yoruba', 'okirika', 'kalabari', 'ijaw(nembe)', 'anaang',
    'eggon', 'bini', 'yoruba, hausa', 'ekpeye', 'bajju', 'kanuri',
    'delta', 'khana', 'ogbia', 'mada', 'mwaghavul', 'angas', 'ikulu',
    'eleme', 'igarra', 'etche', 'agatu', 'bassa', 'jukun', 'urobo',
    'ibani', 'obolo', 'idah', 'eket', 'nyandang', 'estako', 'ishan',
    'bassa-nge/nupe', 'bagi', 'gerawa'
]

_HOMEPAGE = "https://github.com/intron-innovation/AfriSpeech-Dataset-Paper"

_LICENSE = "http://creativecommons.org/licenses/by-nc-sa/4.0/"
|
75 |
'test': 4
|
76 |
}
|
77 |
|
78 |
+
class AfriSpeechConfig(datasets.BuilderConfig):
|
79 |
+
"""BuilderConfig for afrispeech"""
|
80 |
+
|
81 |
+
def __init__(
|
82 |
+
self, name, description, homepage, data_url
|
83 |
+
):
|
84 |
+
super(AfriSpeechConfig, self).__init__(
|
85 |
+
name=self.name,
|
86 |
+
version=datasets.Version("1.0.0", ""),
|
87 |
+
description=self.description,
|
88 |
+
)
|
89 |
+
self.name = name
|
90 |
+
self.description = description
|
91 |
+
self.homepage = homepage
|
92 |
+
self.data_url = data_url
|
93 |
+
|
94 |
+
|
95 |
+
def _build_config(name):
|
96 |
+
return AfriSpeechConfig(
|
97 |
+
name=name,
|
98 |
+
description=_DESCRIPTION,
|
99 |
+
homepage=_HOMEPAGE_URL,
|
100 |
+
data_url=_DATA_URL,
|
101 |
+
)
|
102 |
|
103 |
class AfriSpeech(datasets.GeneratorBasedBuilder):
|
104 |
DEFAULT_WRITER_BATCH_SIZE = 1000
|
105 |
+
VERSION = datasets.Version("1.0.0")
|
106 |
+
BUILDER_CONFIGS = [_build_config(name) for name in _ALL_CONFIGS + ["all"]]
|
107 |
|
108 |
def _info(self):
|
109 |
description = _DESCRIPTION
|
|
|
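With this change, every accent name in `_ALL_CONFIGS`, plus the aggregate "all" config, becomes a loadable config name, which is exactly the interface the README examples above rely on; a quick illustration (the accent chosen here is arbitrary):

```python
from datasets import load_dataset

# any entry from _ALL_CONFIGS is a valid config name; "all" loads every accent
hausa = load_dataset("tobiolatunji/afrispeech-200", "hausa", split="train")
everything = load_dataset("tobiolatunji/afrispeech-200", "all", split="train")
```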
@@ -93,7 +142,13 @@ class AfriSpeech(datasets.GeneratorBasedBuilder):
        # with the url replaced with path to local files.
        # By default the archives will be extracted and a path to a cached folder
        # where they are extracted is returned instead of the archive

        # download only the selected accent config, or every accent for "all"
        langs = (
            _ALL_CONFIGS
            if self.config.name == "all"
            else [self.config.name]
        )

        n_shards = _SHARDS

        audio_urls = {}
@@ -139,14 +194,16 @@ class AfriSpeech(datasets.GeneratorBasedBuilder):
        with open(meta_path, "r", encoding="utf-8") as f:
            reader = csv.DictReader(f)
            for row in tqdm(reader, desc="Reading metadata..."):
                # keep only rows whose accent matches the selected config
                if (row['accent'] == self.config.name) or (self.config.name == 'all'):
                    row["speaker_id"] = row["user_ids"]
                    # key metadata by the last two components of the audio path
                    audio_id = "/".join(row["audio_paths"].split("/")[-2:])
                    # if data is incomplete, fill with empty values
                    metadata[audio_id] = {field: row.get(field, "") for field in data_fields}

        for i, audio_archive in enumerate(archives):
            for filename, file in audio_archive:
                # _, filename = os.path.split(filename)
                filename = "/".join(filename.split("/")[-2:])
                if filename in metadata:
                    result = dict(metadata[filename])
                    # set the audio feature and the path to the extracted file
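Both sides of this join reduce paths to their last two components, so metadata keys and archive member names line up regardless of leading directories; a tiny self-contained illustration (the paths below are hypothetical):

```python
# hypothetical paths: a metadata "audio_paths" value and an archive member name
meta_path = "data/train/spk1/clip_001.wav"
member_name = "spk1/clip_001.wav"

def key(path):
    # keep only the last two path components
    return "/".join(path.split("/")[-2:])

assert key(meta_path) == key(member_name) == "spk1/clip_001.wav"
```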