jeli-data-manifest / README.md
---
language:
  - bm
  - fr
pretty_name: Jeli-ASR Audio Dataset
tags:
  - audio
  - transcription
  - multilingual
  - Bambara
  - French
license: cc-by-4.0
task_categories:
  - automatic-speech-recognition
  - translation
task_ids:
  - audio-language-identification
  - keyword-spotting
annotations_creators:
  - semi-expert
language_creators:
  - crowdsourced
source_datasets:
  - jeli-asr
size_categories:
  - 10K<n<100K
dataset_info:
  audio_format: wav
  total_audio_files: 11533
  total_duration_hours: ~30
description: >
  The **Jeli Audio Dataset** is a multilingual audio dataset containing audio
  samples in Bambara and French. Each audio file is paired with its
  transcription in Bambara or its translation in French (available in the
  manifest files). The dataset is designed for tasks such as automatic speech
  recognition (ASR) and translation.

  The data was recorded in an organized setup in Mali with griots, then
  semi-professionally transcribed and translated into French.
---

# jeli-asr-data-manifest

This repository contains a resampled version of the jeli-asr dataset with corresponding NeMo data manifests.

## Directory Structure

The directory structure is as follows:

```
jeli-data-manifest/
│
├── audios/
│   ├── train/
│   └── test/
│
├── french-manifests/
│   ├── train_french_manifest.json
│   └── test_french_manifest.json
│
├── manifests/
│   ├── train_manifest.json
│   └── test_manifest.json
│
└── scripts/
    ├── create_manifest.py
    └── clean_tsv.py
```

### 1. audios/

This directory contains the audio files (.wav format) of every example in the dataset. The audio files are split into two subdirectories:

  • train/: Contains audio files used for training.
  • test/: Contains audio files used for testing.

The audio files vary in length and correspond to each entry in the manifest files. They are referenced by file paths in the manifest files.

### 2. manifests/

This directory contains the manifest files used for training speech recognition (ASR) models. There are two JSON files:

  • train_manifest.json: Contains file paths, durations, and transcriptions for the training set.
  • test_manifest.json: Contains file paths, durations, and transcriptions for the test set.

Each line in the manifest files is a JSON object with the following structure:

```json
{
  "audio_filepath": "jeli-data-manifest/audios/train/griots_r19-1609461-1627744.wav",
  "duration": 18.283,
  "text": "I kun tɛ kɔrɔta maa min si kakɔrɔ n'ita ye, i ŋɛ t'a ŋɛ ye..."
}
```
  • audio_filepath: The relative path to the corresponding audio file.
  • duration: The duration of the audio file in seconds.
  • text: The transcription of the audio in Bambara.
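Because each line is a standalone JSON object, a manifest can be read with the standard `json` module, one line at a time. The snippet below is an illustrative sketch: it parses a made-up two-line manifest (standing in for the real files in `manifests/`) and sums the durations:

```python
import json
from io import StringIO

# An illustrative two-line manifest in the format described above
# (real entries live in manifests/train_manifest.json).
sample_manifest = StringIO(
    '{"audio_filepath": "audios/train/a.wav", "duration": 18.283, "text": "..."}\n'
    '{"audio_filepath": "audios/train/b.wav", "duration": 4.2, "text": "..."}\n'
)

# Parse one JSON object per non-empty line.
entries = [json.loads(line) for line in sample_manifest if line.strip()]
total_seconds = sum(entry["duration"] for entry in entries)

print(len(entries))            # number of manifest entries
print(round(total_seconds, 3))
```

The same loop works unchanged on the real manifest files, opened with `open(path, encoding="utf-8")`.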

### 3. french-manifests/

This directory contains the French equivalents of the manifest files. The structure is the same as in the manifests/ directory, but with French translations:

  • train_french_manifest.json: Contains the French translations for the training set.
  • test_french_manifest.json: Contains the French translations for the test set.
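Assuming the Bambara and French manifests cover the same audio files, entries from the two can be paired by their `audio_filepath` key. The sketch below uses invented one-line manifests to illustrate the join; the texts and paths are placeholders, not real dataset entries:

```python
import json

# Invented example lines standing in for the real manifest files.
bambara_lines = [
    '{"audio_filepath": "audios/train/a.wav", "duration": 18.283, "text": "I kun ..."}',
]
french_lines = [
    '{"audio_filepath": "audios/train/a.wav", "duration": 18.283, "text": "Tu ne ..."}',
]

# Index the French entries by audio path, then join on that path.
french_by_path = {e["audio_filepath"]: e for e in map(json.loads, french_lines)}
pairs = [
    (e["text"], french_by_path[e["audio_filepath"]]["text"])
    for e in map(json.loads, bambara_lines)
    if e["audio_filepath"] in french_by_path
]
print(pairs[0])  # (Bambara text, French translation)
```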

### 4. scripts/

This directory contains scripts used to process the data and create manifest files:

  • create_manifest.py: A script used to create the training and test manifest files. It resamples the audio files published in the first version of the Jeli-ASR dataset and generates the corresponding JSON manifest files.
  • clean_tsv.py: A script that removes some of the most common issues in the .tsv transcription files created during the last revision of the dataset in January 2023, such as unwanted characters (", <>), consecutive tabs (which made some rows inconsistent), and spacing errors.
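The kind of cleanup clean_tsv.py performs might look like the sketch below. This is not the actual script; the regular expressions and the sample row are illustrative, based on the issues listed above:

```python
import re

def clean_tsv_line(line: str) -> str:
    """Illustrative cleanup of one TSV row: drop unwanted characters,
    collapse consecutive tabs, and normalize spacing."""
    line = re.sub(r'["<>]', "", line)   # unwanted characters: ", <, >
    line = re.sub(r"\t+", "\t", line)   # consecutive tabs -> single tab
    line = re.sub(r" {2,}", " ", line)  # runs of spaces -> single space
    return line.strip()

# A made-up messy row: stray quotes, a double tab, and a double space.
messy = 'griots_r19.wav\t\t"I kun  tɛ kɔrɔta"\n'
print(clean_tsv_line(messy))
```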

## Dataset Overview

The dataset consists of 11,533 audio-transcription pairs:

  • Training set: 9,803 examples (85%)
  • Test set: 1,730 examples (15%)

Each audio file is paired with a Bambara transcription in the manifest files, and the corresponding French translations are available in the french-manifests/ directory.

## Usage

The manifest files are specifically created for training automatic speech recognition (ASR) models in the NVIDIA NeMo framework, but they can be used with any other framework that supports manifest-based input formats, or reformatted for other uses.

To use the dataset, simply load the manifest files (train_manifest.json and test_manifest.json) in your training script. The file paths for the audio files and the corresponding transcriptions are already provided in these manifest files.

Downloading the dataset:

```python
from datasets import load_dataset

# Clone the dataset repository to keep the directory structure intact
# (in a notebook: !git clone https://huggingface.co/datasets/RobotsMali/jeli-data-manifest)

# Load the line-delimited JSON manifest into a Hugging Face Dataset object
dataset = load_dataset("json", data_files="jeli-data-manifest/manifests/train_manifest.json")

# The directory structure remains intact for additional file access
```

### Example NeMo Usage

Fine-tuning with NeMo:

```python
from nemo.collections.asr.models import ASRModel

train_manifest = 'jeli-data-manifest/manifests/train_manifest.json'
test_manifest = 'jeli-data-manifest/manifests/test_manifest.json'

# Load a pretrained English model as the starting checkpoint
asr_model = ASRModel.from_pretrained("QuartzNet15x5Base-En")

# Adapt the model's vocabulary to Bambara before training,
# e.g. with asr_model.change_vocabulary(new_vocabulary=[...])

asr_model.setup_training_data(train_data_config={'manifest_filepath': train_manifest})
asr_model.setup_validation_data(val_data_config={'manifest_filepath': test_manifest})
```

## Issues

This version was created after shallow cleaning of the transcriptions and resampling work. It retains most of the issues of the original dataset, such as:

  • Misaligned or invalid segmentation
  • Wrong-language or otherwise incorrect transcriptions
  • Non-standardized naming conventions

## Citation

If you use this dataset in your research or project, please give credit to the creators of the original Jeli-ASR dataset.