---
pretty_name: NENA Speech Dataset 1.0 (test)
annotations_creators:
  - crowdsourced
  - Geoffrey Khan
language_creators:
  - crowdsourced
language:
  - aii
  - cld
  - huy
  - lsd
  - trg
  - aij
  - bhn
  - hrt
  - kqd
  - syn
license:
  - cc0-1.0
multilinguality:
  - multilingual
task_categories:
  - automatic-speech-recognition
  - text-to-speech
  - translation
size_categories:
  - 10K<n<100K
  - 1K<n<10K
  - n<1K
---

# Dataset Card for NENA Speech Dataset 1.0 (test)

## Table of Contents

- [Dataset Summary](#dataset-summary)
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [How to Use](#how-to-use)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  <!-- - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations) -->
  - [Building the Dataset](#building-the-dataset)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
<!-- - [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations) -->
- [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## ⚠️ This is a temporary repository that will be replaced by the end of 2023

## Dataset Summary

NENA Speech is a multimodal dataset to help teach machines how real people speak the Northeastern Neo-Aramaic (NENA) dialects.

The NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.

NENA Speech consists of multimodal examples of speech in the NENA dialects. While all documented NENA dialects are included, not all have data yet, and some never will, owing to the recent loss of their final speakers.

## Dataset Description

- **Homepage**: https://crowdsource.nenadb.dev/
- **Point of Contact:** [Matthew Nazari](mailto:matthewnazari@college.harvard.edu)

## Languages

The NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.

Speakers of the Christian dialects call their language Assyrian or Chaldean in English. In their own language these speakers use several different terms (e.g. suráy, sureth, ḥadiṯan, senaya). Speakers of the Jewish dialects call their language lišana deni, lišanət noshan, lišana nosha, or lišana didan, all meaning "our language". Some names reflect an awareness of it being a specifically Jewish language (e.g. lišan hozaye, hulaula).

NENA Speech has a subset for each of the more than 150 documented NENA dialects. Not all dialects have examples available yet, and some never will, due to the loss of their final speakers in recent years.

## How to Use

The `datasets` library allows you to load and pre-process the dataset in pure Python, at scale. The dataset can be downloaded and prepared on your local drive with a single call to the `load_dataset` function.

For example, simply specify the corresponding language config name (e.g., "urmi (christian)" for the dialect of the Assyrian Christians of Urmi):

```python
from datasets import load_dataset

nena_speech = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")
```
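
If you prefer not to download the whole dataset up front, `datasets` can also stream examples on demand. A minimal sketch using the library's standard streaming mode:

```python
from datasets import load_dataset

# streaming=True yields examples lazily instead of materializing the dataset on disk.
nena_speech = load_dataset(
    "mnazari/nena_speech_1_0_test", "urmi (christian)", split="train", streaming=True
)

print(next(iter(nena_speech)))
```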

To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).

## Dataset Structure

### Data Instances

The NENA Speech dataset is a multimodal dataset that consists of three different kinds of examples:

1. **Unlabeled speech examples:** these contain audio of speech (`audio`) but no accompanying transcription (`transcription`) or translation (`translation`). They are useful for representation learning.
2. **Transcribed speech examples:** these contain both audio and a transcription of the speech. They are useful for tasks like automatic speech recognition and speech synthesis.
3. **Transcribed and translated speech examples:** these contain audio, a transcription, and a translation of the speech. They are useful for tasks like multimodal translation.

Make sure to filter for the kinds of examples you need for your task before using the dataset (a sketch for counting each kind follows the example below). An instance looks like this:

```json
{
  "transcription": "gu-mdìta.ˈ",
  "translation": "in the town.",
  "audio": {
    "path": "et/train/nena_speech_0uk14ofpom196aj.mp3",
    "array": array([-0.00048828, -0.00018311, -0.00137329, ...,  0.00079346, 0.00091553,  0.00085449], dtype=float32),
    "sampling_rate": 48000
  },
  "locale": "IRN",
  "proficiency": "proficient as mom",
  "age": "70's",
  "crowdsourced": true,
  "unlabeled": true,
  "interrupted": true,
  "client_id": "gwurt1g1ln"	,
  "path": "et/train/nena_speech_0uk14ofpom196aj.mp3",
}
```
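
As a rough sketch of how to separate the three kinds, assuming `transcription` and `translation` are empty or missing when unavailable, you can tally how many examples of each kind a split contains. Reading whole columns avoids decoding any audio:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")

def kind(transcription, translation):
    # Classify an example by which annotations it carries.
    if transcription and translation:
        return "transcribed and translated"
    if transcription:
        return "transcribed"
    return "unlabeled"

# Column access (ds["transcription"]) skips audio decoding entirely.
print(Counter(kind(ts, tl) for ts, tl in zip(ds["transcription"], ds["translation"])))
```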

### Data Fields

- `transcription (string)`: The transcription of what was spoken (e.g. `"beta"`)
- `translation (string)`: The translation of what was spoken in English (e.g. `"house"`)
- `audio (dict)`: A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. When you access the audio column, the file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Because decoding and resampling a large number of audio files can take significant time, always index the sample before the `"audio"` column, i.e. prefer `dataset[0]["audio"]` over `dataset["audio"][0]` (see the resampling sketch after this list).
- `locale (string)`: The locale of the speaker
- `proficiency (string)`: The proficiency of the speaker
- `age (string)`: The age of the speaker (e.g. `"20's"`, `"50's"`, `"100+"`)
- `crowdsourced (bool)`: Indicates whether the example was crowdsourced as opposed to collected from existing language documentation resources
- `interrupted (bool)`: Indicates whether the example was interrupted with the speaker making sound effects or switching into another language
- `client_id (string)`: An id for which client (voice) made the recording
- `path (string)`: The path to the audio file
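
Audio is decoded at the dataset's native sampling rate (48 kHz in the instance above). If your model expects a different rate, `datasets` can resample on the fly when you cast the column; a minimal sketch, with 16 kHz as an assumed target:

```python
from datasets import Audio, load_dataset

ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")

# Re-cast the audio column so decoding resamples to 16 kHz on the fly.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

sample = ds[0]["audio"]         # decoded and resampled at access time
print(sample["sampling_rate"])  # 16000
```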

### Data Splits

The examples have been subdivided into three portions:

1. **train:** the train split (80%)
2. **dev:** the validation split (10%)
3. **test:** the test split (10%)

All three splits contain only examples that have been reviewed and deemed of high quality.
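
Loading without a `split` argument returns a `DatasetDict` keyed by split name, which makes it easy to inspect the splits (a minimal sketch):

```python
from datasets import load_dataset

splits = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)")

for name, split in splits.items():
    print(f"{name}: {len(split)} examples")
```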

## Dataset Creation

<!-- ### Curation Rationale

[Needs More Information]

### Source Data

#### Language Documentation Resources

[Needs More Information]

#### Webscraping Facebook

[Needs More Information]

#### Crowdsourcing

[Needs More Information]

### Annotations

[Needs More Information] -->

### Building the Dataset

The NENA Speech dataset itself is built using `build.py`.

First, install the necessary requirements.

```bash
pip install -r requirements.txt
```

Next, build the dataset.

```bash
python build.py --build
```

Finally, push to the HuggingFace dataset repository.
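
The exact upload command is not pinned down here; since the dataset lives in a Hugging Face dataset repository, one possibility (an assumption, sketched with the ordinary git flow) is:

```bash
# Assumes your working directory is a git clone of the HuggingFace dataset repo.
git add .
git commit -m "rebuild dataset"
git push
```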

## Personal and Sensitive Information

The dataset consists of recordings from people who have donated their voices online. You agree not to attempt to determine the identity of speakers in the NENA Speech dataset.

## Data Preprocessing

The dataset consists of three different kinds of examples (see [Data Instances](#data-instances)).

Make sure to filter for the kinds of examples you need for your task before using the dataset. For example, for automatic speech recognition you will want to filter for examples with transcriptions.

In most tasks, you will also want to filter out examples that are interrupted (e.g. by the speaker making sound effects or switching into another language).

```python
from datasets import load_dataset

ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")

def filter_for_asr(example):
    return example['transcription'] and not example['interrupted']

ds = ds.filter(filter_for_asr, desc="filter dataset")
```

Transcriptions include markers of linguistic and acoustic features which may be removed in certain tasks (e.g. word stress, nuclear stress, intonation group markers, vowel length).

```python
from datasets import load_dataset

ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")

def prepare_dataset(batch):
    # Remove stress marks, intonation group boundaries, and punctuation.
    chars_to_remove = ['ˈ', '̀', '́', '̄', '̆', '.', ',', '?', '!']
    for char in chars_to_remove:
        batch["transcription"] = batch["transcription"].replace(char, "")
    return batch

ds = ds.map(prepare_dataset, desc="preprocess dataset")
```

<!-- ## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information] -->

## Additional Information

### Licensing Information

Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/).

### Citation Information

This work has not yet been published.