
Overview

Project

This dataset is part of a larger initiative aimed at empowering Bambara speakers to access global knowledge without language barriers. Our goal is to eliminate the need for Bambara speakers to learn a secondary language before they can acquire new information or skills. By providing a robust dataset for Text-to-Speech (TTS) applications, we aim to support the creation of such tools for the Bambara language, thus democratizing access to knowledge.

Bambara Language

Bambara, also known as Bamanankan, is a Mande language spoken primarily in Mali by millions of people as a mother tongue or second language. It serves as a lingua franca in Mali and is also spoken in neighboring countries such as Burkina Faso and Ivory Coast. Bambara is written in both the Latin and N'Ko scripts, and it has a rich oral tradition that is integral to Malian culture.

Dataset

Source

The dataset was meticulously compiled with a focus on quality and utility. The source materials were drawn from the rich collection of Bambara content available at Mali Pense. Audio recordings were carefully processed to improve clarity and usability.

Processing

Noise reduction was a critical step in preparing the audio data to ensure high-quality samples. This was achieved using DeepFilterNet, an advanced noise-suppression model available on GitHub (Rikorose/DeepFilterNet). The resulting clean audio provides clear and usable samples for TTS development.

To enhance the dataset's applicability in personalized TTS systems, speaker embeddings were generated using the pyannote/embedding model from Hugging Face. These embeddings capture each speaker's unique voice characteristics, enabling speaker identification and differentiation in TTS applications.
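
Since the embeddings are plain numeric vectors, utterances from the same voice tend to lie close together. A minimal sketch of comparing two utterances with cosine similarity (the short vectors below are made up for illustration; real ones come from the speaker_embeddings field and are much longer):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings for two utterances.
emb_a = np.array([-2.56, -20.93, 4.10])
emb_b = np.array([-2.40, -19.80, 3.95])

# Values near 1.0 suggest the two utterances share a speaker.
score = cosine_similarity(emb_a, emb_b)
```

A threshold on this score is one simple way to decide whether two clips belong to the same voice, though the clustering described below is more robust.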

Clustering

Speaker embeddings were clustered using the HDBSCAN algorithm (via the hdbscan pip package) to infer speaker identities within the dataset. While this clustering offers a basis for differentiating speakers, it is not infallible. Users are encouraged to refine the clusters, or derive their own speaker labels from the provided embeddings, as needed for their specific applications.

Dataset Structure

Data Fields

The dataset includes the following fields:

  • audio: The audio recording of spoken Bambara text, loaded from its file path by the Hugging Face datasets library. Each audio file corresponds to a single utterance.
  • bambara: A string field that contains the transcription of the spoken text in the Bambara language. This transcription corresponds to the content of the audio file.
  • french: A string field with the French translation of the Bambara text. This provides a parallel corpus for those interested in bilingual applications.
  • duration: A float64 field that represents the duration of the audio clip in seconds. It gives an indication of the length of the spoken utterance.
  • speaker_embeddings: A sequence field that holds the numerical vector representing the speaker's voice characteristics. This embedding can be used for speaker identification or distinguishing between different speakers in the dataset.
  • speaker_id: An int32 field that indicates the cluster ID assigned to the speaker based on the HDBSCAN algorithm. This ID helps to identify all utterances from the same speaker across the dataset.

Data Instances

An example from the dataset looks like this:

{
  "audio": Audio({"array": [-2.5, 35...], "path": "path/to/audio.wav", "sampling_rate": 48000}),
  "bambara": "Jigi, i bolo degunnen don wa ?",
  "french": "Jigi, es-tu occupé ?",
  "duration": 2.646,
  "speaker_embeddings": [-2.564516305923462, -20.928389595581055, ...],
  "speaker_id": 5
}
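
As a sanity check on the fields above, the duration value should equal the number of audio samples divided by the sampling rate. A self-contained sketch (the record below is synthetic, shaped like the example instance; real records come from loading oza75/bambara-tts with the datasets library):

```python
import numpy as np

sampling_rate = 48_000
# Silent placeholder audio sized to match the example's 2.646 s duration.
audio_array = np.zeros(int(2.646 * sampling_rate), dtype=np.float32)

record = {
    "audio": {"array": audio_array, "sampling_rate": sampling_rate},
    "bambara": "Jigi, i bolo degunnen don wa ?",
    "french": "Jigi, es-tu occupé ?",
    "duration": 2.646,
    "speaker_id": 5,
}

# duration == samples / sampling_rate
computed = len(record["audio"]["array"]) / record["audio"]["sampling_rate"]
```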

Usage

The dataset is designed for a variety of uses in the field of speech technology, including:

  • Text-to-Speech Synthesis: Researchers and developers can utilize this dataset to train and fine-tune TTS models capable of converting Bambara text into natural-sounding speech.
  • Speech Recognition: The audio samples can aid in the development of Automatic Speech Recognition (ASR) systems that transcribe Bambara speech.
  • Linguistic Research: Linguists can explore the phonetic and prosodic features of Bambara speech.
  • Educational Content Creation: Educators and content creators can develop voice-enabled educational resources in Bambara.
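
For example, assembling a single-voice subset for TTS fine-tuning amounts to filtering on speaker_id and duration. A minimal sketch over plain dicts (the records are synthetic; with the datasets library one would express the same predicate via dataset.filter):

```python
# Keep utterances from one inferred speaker, capped at a maximum clip length,
# as one might when training a single-voice TTS model.
records = [
    {"bambara": "...", "duration": 2.6, "speaker_id": 5},
    {"bambara": "...", "duration": 9.1, "speaker_id": 5},
    {"bambara": "...", "duration": 3.2, "speaker_id": 2},
]

subset = [r for r in records if r["speaker_id"] == 5 and r["duration"] <= 8.0]
```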

Acknowledgements

This project was made possible through the contributions of various individuals and organizations dedicated to preserving and promoting the Bambara language and culture. We extend our gratitude to Mali Pense for providing the text sources, Rikorose/DeepFilterNet for the noise reduction technology, and Pyannote for the speaker embedding model.
