mnazari committed
Commit 24c1474
Parent: 9536d66
Files changed (1): README.md (+63 -90)
README.md CHANGED
@@ -34,61 +34,27 @@ extra_gated_prompt: >-
 
 ## Dataset Description
 
- The Northeastern Neo-Aramaic (NENA) dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities in northern Iraq, northwestern Iran, and southeastern Türkiye.
 
- NENA Speech is a multimodal dataset of audio, transcription, and translation.
 
- <!-- The [Northeastern Neo-Aramaic Database Project](https://nena.ames.cam.ac.uk/), lead by Geoffrey Khan, has been creating language documentation materials for the NENA dialects. These materials include corpora of oral literatures. These oral literatures are being parsed [crowdsource.nenadb.dev](https://crowdsource.nenadb.dev/) allows the community to directly engage with these parsed examples and contribute their own voices to the database. -->
 
 ## How to use
 
 The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
 
- For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):
- ```python
- from datasets import load_dataset
-
- cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
- ```
-
- Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
- ```python
- from datasets import load_dataset
-
- cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train", streaming=True)
-
- print(next(iter(cv_13)))
- ```
-
- *Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
-
- ### Local
-
- ```python
- from datasets import load_dataset
- from torch.utils.data.sampler import BatchSampler, RandomSampler
-
- cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
- batch_sampler = BatchSampler(RandomSampler(cv_13), batch_size=32, drop_last=False)
- dataloader = DataLoader(cv_13, batch_sampler=batch_sampler)
- ```
-
- ### Streaming
 
 ```python
 from datasets import load_dataset
- from torch.utils.data import DataLoader
 
- cv_13 = load_dataset("mozilla-foundation/common_voice_13_0", "hi", split="train")
- dataloader = DataLoader(cv_13, batch_size=32)
 ```
 
 To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
 
- ### Example scripts
-
- Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 13 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
-
 ## Dataset Structure
 
 ### Data Instances
@@ -155,33 +121,37 @@ The other data is data that has not yet been reviewed.
 
 The dev, test, and train splits all contain data that has been reviewed and deemed of high quality.
 
- ## Data Preprocessing Recommended by Hugging Face
 
- The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice.
 
- Many examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
 
 In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
 
 ```python
 from datasets import load_dataset
 
- ds = load_dataset("mozilla-foundation/common_voice_13_0", "en", use_auth_token=True)
 
 def prepare_dataset(batch):
-     """Function to preprocess the dataset with the .map method"""
-     transcription = batch["sentence"]
-
-     if transcription.startswith('"') and transcription.endswith('"'):
-         # we can remove trailing quotation marks as they do not affect the transcription
-         transcription = transcription[1:-1]
-
-     if transcription[-1] not in [".", "?", "!"]:
-         # append a full-stop to sentences that do not end in punctuation
-         transcription = transcription + "."
-
-     batch["sentence"] = transcription
-
     return batch
 
 ds = ds.map(prepare_dataset, desc="preprocess dataset")
 ```
@@ -201,7 +171,22 @@ ds = ds.map(prepare_dataset, desc="preprocess dataset")
 
 #### Who are the source language producers?
 
- [Needs More Information]
 
 ### Annotations
 
@@ -215,21 +200,33 @@ ds = ds.map(prepare_dataset, desc="preprocess dataset")
 
 ### Personal and Sensitive Information
 
- The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
 
 ## Considerations for Using the Data
 
 ### Social Impact of Dataset
 
- The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
 
 ### Discussion of Biases
 
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
 
 ## Additional Information
 
@@ -239,28 +236,4 @@ Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/
 
 ### Citation Information
 
- ```
- @inproceedings{commonvoice:2020,
-   author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
-   title = {Common Voice: A Massively-Multilingual Speech Corpus},
-   booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
-   pages = {4211--4215},
-   year = 2020
- }
- ```
-
- ## Development
-
- ### Building the dataset
-
- Install the required packages.
-
- ```
- pip install -r requirements.txt
- ```
-
- Build the dataset.
-
- ```
- python build.py --build
- ```
 
 
 ## Dataset Description
 
+ NENA Speech is a multimodal dataset to help teach machines how real people speak the Northeastern Neo-Aramaic (NENA) dialects.
+
+ The NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northern Iraq, northwestern Iran, and southeastern Türkiye.
+
+ NENA Speech consists of multimodal examples of speech in the NENA dialects. While every documented NENA dialect is included, not all of them have data yet, and some never will, owing to the recent loss of their last speakers.
 
 ## How to use
 
 The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
 
+ For example, simply specify the corresponding language config name (e.g., "urmi (christian)" for the dialect of the Assyrian Christians of Urmi):
 
 ```python
 from datasets import load_dataset
 
+ nena_speech = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")
 ```
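
Using the `datasets` library, you can also stream the dataset on the fly by adding a `streaming=True` argument to the `load_dataset` call, so that examples are loaded one at a time rather than downloaded in full. A minimal sketch, reusing the config name above:

```python
from datasets import load_dataset

# stream the train split instead of downloading it to disk
nena_speech = load_dataset(
    "mnazari/nena_speech_1_0_test",
    "urmi (christian)",
    split="train",
    streaming=True,
)

# inspect the first example
print(next(iter(nena_speech)))
```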
 
  To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
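
For instance, many speech models expect 16 kHz mono input, and `datasets` can resample audio on the fly when you cast the audio column. A minimal sketch, assuming the audio column is named `audio` (the conventional name for Hub audio datasets; not confirmed in this card):

```python
from datasets import Audio, load_dataset

nena_speech = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")

# decode and resample audio to 16 kHz lazily, on access
nena_speech = nena_speech.cast_column("audio", Audio(sampling_rate=16_000))
```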
 
  ## Dataset Structure
 
 ### Data Instances
 
 The dev, test, and train splits all contain data that has been reviewed and deemed of high quality.
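
To work with a single split, pass it to `load_dataset`; a minimal sketch, assuming the splits are registered under the names above:

```python
from datasets import load_dataset

# load only the held-out test split
test_split = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="test")
```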
 
+ ## Data Preprocessing and a Note on Orthography
 
+ The data is multimodal, which means examples fall into three kinds:
+ 1. Unlabelled speech: these examples contain audio but no accompanying transcription or translation. They are useful for machine learning tasks like representation learning.
+ 2. Transcribed speech: these examples contain audio and a transcription. They are useful for machine learning tasks like automatic speech recognition and speech synthesis.
+ 3. Transcribed and translated speech: these examples contain audio, a transcription, and a translation. They are useful for tasks like multimodal translation.
 
+ If you want to keep, say, only transcribed examples whose recording was not interrupted, you can filter the dataset on those fields:
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")
+
+ def filter_labelled_uninterrupted(example):
+     # keep examples that have a transcription and were not interrupted
+     return not example['interrupted'] and example['transcription']
+
+ ds = ds.filter(filter_labelled_uninterrupted, desc="filter dataset")
+ ```
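
Likewise, to keep only examples that also carry a translation, you can filter in the same way; a sketch, assuming the field is named `translation`, parallel to `transcription`:

```python
from datasets import load_dataset

ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")

def filter_translated(example):
    # keep examples that have both a transcription and a translation
    return example['transcription'] and example['translation']

ds = ds.filter(filter_translated, desc="filter dataset")
```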
 
  In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
 
 ```python
 from datasets import load_dataset
 
+ ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")
 
 def prepare_dataset(batch):
+     # strip prosodic marks, diacritics, and punctuation from the transcription
+     chars_to_remove = ['ˈ', '̀', '́', '̄', '̆', '.', ',', '?', '!']
+     for char in chars_to_remove:
+         batch["transcription"] = batch["transcription"].replace(char, "")
     return batch
 
 ds = ds.map(prepare_dataset, desc="preprocess dataset")
 ```
 
 #### Who are the source language producers?
 
+
+ - Yulia Davudi of the village ⁺Hassar ⁺Baba-čanga (N)
+ - Nancy George of the village Babari (S)
+ - Yosəp bet Yosəp of the village Zumallan (N)
+ - Yonan Petrus of the village Mushawa (N)
+ - Frederic Ayyubkhan of the village ⁺Spurǧān (N)
+ - Manya Givoyev of Guylasar, Armenia
+ - Nadia Aloverdova of Guylasar, Armenia
+ - Arsen Mikhaylov of Arzni, Armenia
+ - Sophia Danielova of Arzni, Armenia
+ - Maryam Gwirgis of Canda, Georgia
+ - Natan Khoshaba of the village Zumallan (N)
+ - Alice bet-Yosəp of the village Zumallan (N)
+ - Victor Orshan of the village Zumallan (N)
+ - Jacob Petrus of the village Gulpashan (S)
+ - Merab Badalov of Canda, Georgia
 
 ### Annotations
 
 
 ### Personal and Sensitive Information
 
+ The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the NENA Speech dataset.
+
+ ### Building the Dataset
+
+ The NENA Speech dataset is built using `build.py`.
+
+ First, install the necessary requirements.
+
+ ```
+ pip install -r requirements.txt
+ ```
+
+ Next, build the dataset.
+
+ ```
+ python build.py --build
+ ```
 
 ## Considerations for Using the Data
 
 ### Social Impact of Dataset
 
+ The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the NENA Speech dataset.
 
 ### Discussion of Biases
 
+ Given that there are communities with only a handful of remaining speakers, the amount of available data varies widely across dialects, and the larger, better-documented communities are overrepresented.
 
 ## Additional Information
 
 ### Citation Information
 
+ This work has not yet been published.