Dataset Card for NENA Speech Dataset 1.0 (test)

⚠️ This is a temporary repository that will be replaced by the end of 2023

Dataset Summary

NENA Speech is a multimodal dataset to help teach machines how real people speak the Northeastern Neo-Aramaic (NENA) dialects.

The NENA dialects form a very diverse group of Aramaic dialects spoken by Christian and Jewish communities indigenous to northwestern Iran, northern Iraq, and southeastern Türkiye.

NENA Speech consists of multimodal examples of speech in the NENA dialects. While all documented NENA dialects are included, not all have data yet, and some never will, owing to the recent loss of their last speakers.

Languages

In English, speakers of the Christian dialects call their language Assyrian or Chaldean. In their own language these speakers use several different terms (e.g. suráy, sureth, ḥadiṯan, senaya). Speakers of the Jewish dialects call their language lišana deni, lišanət noshan, lišana nosha, or lišana didan, all meaning "our language". Some names reflect an awareness of it being a specifically Jewish language (e.g. lišan hozaye, hulaula).

NENA Speech defines a subset for each of the more than 150 NENA dialects. Not all dialects have examples available yet, and some never will, owing to the loss of their last speakers in recent years.

How to Use

The datasets library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared on your local drive in a single call to the load_dataset function.

For example, simply specify the corresponding language config name (e.g., "urmi (christian)" for the dialect of the Assyrian Christians of Urmi):

from datasets import load_dataset

nena_speech = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")
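
The datasets library also supports streaming, so you can iterate over examples without first downloading the whole dataset to disk (whether streaming works here depends on the loading script):

from datasets import load_dataset

nena_speech = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train", streaming=True)

# fetch the first example lazily
print(next(iter(nena_speech)))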

To find out more about loading and preparing audio datasets, head over to hf.co/blog/audio-datasets.

Dataset Structure

Data Instances

The NENA Speech dataset is a multimodal dataset that consists of three different kinds of examples:

  1. Unlabeled speech examples: these contain audio of speech (audio) but no accompanying transcription (transcription) or translation (translation). This is useful for representation learning.
  2. Transcribed speech examples: these contain both audio and transcription of speech. These are useful for machine learning tasks like automatic speech recognition and speech synthesis.
  3. Transcribed and translated speech examples: these contain audio, transcription, and translation of speech. These are useful for tasks like multimodal translation.

Make sure to filter for the kinds of examples you need for your task before using the dataset (see Data Preprocessing below). A typical instance looks like this:

{
  "transcription": "gu-mdìta.ˈ",
  "translation": "in the town.",
  "audio": {
    "path": "et/train/nena_speech_0uk14ofpom196aj.mp3",
    "array": array([-0.00048828, -0.00018311, -0.00137329, ...,  0.00079346, 0.00091553,  0.00085449], dtype=float32),
    "sampling_rate": 48000
  },
  "locale": "IRN",
  "proficiency": "proficient as mom",
  "age": "70's",
  "crowdsourced": true,
  "unlabeled": true,
  "interrupted": true,
  "client_id": "gwurt1g1ln"	,
  "path": "et/train/nena_speech_0uk14ofpom196aj.mp3",
}

Data Fields

  • transcription (string): The transcription of what was spoken (e.g. "beta")
  • translation (string): The translation of what was spoken in English (e.g. "house")
  • audio (dict): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column with dataset[0]["audio"], the audio file is automatically decoded and resampled to dataset.features["audio"].sampling_rate. Decoding and resampling a large number of audio files can take a significant amount of time, so always query the sample index before the "audio" column, i.e. prefer dataset[0]["audio"] over dataset["audio"][0] (see the example after this list).
  • locale (string): The locale of the speaker (e.g. "IRN")
  • proficiency (string): The proficiency of the speaker
  • age (string): The age of the speaker (e.g. "20's", "50's", "100+")
  • crowdsourced (bool): Indicates whether the example was crowdsourced as opposed to collected from existing language documentation resources
  • interrupted (bool): Indicates whether the example was interrupted with the speaker making sound effects or switching into another language
  • client_id (string): An ID for the client (voice) that made the recording
  • path (string): The path to the audio file
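
For example, the following sketch decodes one example and resamples the audio to 16 kHz (the target rate here is only an illustration, not a property of the dataset):

from datasets import load_dataset, Audio

ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")

# resample lazily to 16 kHz; decoding happens when a row is accessed
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

# index the row first so only this one file is decoded and resampled
sample = ds[0]["audio"]
print(sample["sampling_rate"], sample["array"].shape)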

Data Splits

The examples are divided into three splits:

  1. dev: the validation split (10%)
  2. test: the test split (10%)
  3. train: the train split (80%)

All three splits contain only data that has been reviewed and deemed of high quality.
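
Each split can be loaded by name; a minimal sketch, assuming the split names match the list above:

from datasets import load_dataset

# load each split of a dialect config separately
train = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")
dev = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="dev")
test = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="test")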

Dataset Creation

Building the Dataset

The NENA Speech dataset itself is built using build.py.

First, install the necessary requirements.

pip install -r requirements.txt

Next, build the dataset.

python build.py --build

Finally, push to the HuggingFace dataset repository.
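
The exact push command is not specified here; one possible approach is the huggingface_hub client (a sketch with a hypothetical output folder, adjust folder_path to wherever build.py writes its output):

from huggingface_hub import HfApi

api = HfApi()

# upload the built files to the dataset repository; "./build" is a hypothetical path
api.upload_folder(
    repo_id="mnazari/nena_speech_1_0_test",
    repo_type="dataset",
    folder_path="./build",
)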

Personal and Sensitive Information

The dataset consists of recordings from people who have donated their voices online. You agree not to attempt to determine the identity of speakers in the NENA Speech dataset.

Data Preprocessing

The dataset consists of three different kinds of examples (see Data Instances).

Make sure to filter for the kinds of examples you need for your task before using the dataset. For example, for automatic speech recognition you will want to filter for examples with transcriptions.

For most tasks, you will also want to filter out examples that are interrupted (e.g. by the speaker making sound effects or switching into another language).

from datasets import load_dataset

ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")

def filter_for_asr(example):
    # keep only transcribed, uninterrupted examples
    return example['transcription'] and not example['interrupted']

ds = ds.filter(filter_for_asr, desc="filter dataset")
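
You can then check how many examples remain:

print(f"{len(ds)} examples remain after filtering")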

Transcriptions include markers of linguistic and acoustic features (e.g. word stress, nuclear stress, intonation group boundaries, vowel length) which you may want to remove for certain tasks.

from datasets import load_dataset

ds = load_dataset("mnazari/nena_speech_1_0_test", "urmi (christian)", split="train")

def prepare_dataset(batch):
    # remove stress, intonation, and length markers along with punctuation
    chars_to_remove = ['ˈ', '̀', '́', '̄', '̆', '.', ',', '?', '!']
    for char in chars_to_remove:
        batch["transcription"] = batch["transcription"].replace(char, "")
    return batch

ds = ds.map(prepare_dataset, desc="preprocess dataset")
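
Since several of these markers are combining diacritics, an alternative (a sketch, not part of this card) is to strip all combining marks with Python's unicodedata module. Note that this also removes diacritics that belong to the orthography, so the explicit character list above is safer when in doubt:

import unicodedata

def strip_combining_marks(text):
    # decompose, drop combining marks (Unicode category Mn), then recompose
    decomposed = unicodedata.normalize("NFD", text)
    stripped = "".join(ch for ch in decomposed if unicodedata.category(ch) != "Mn")
    return unicodedata.normalize("NFC", stripped)

ds = ds.map(lambda example: {"transcription": strip_combining_marks(example["transcription"])})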

Additional Information

Licensing Information

Public domain (CC0).

Citation Information

This work has not yet been published.
