|
--- |
|
license: cc-by-4.0 |
|
language: |
|
- bn |
|
- gu |
|
- hi |
|
- kn |
|
- ml |
|
- mr |
|
- ta |
|
- te |
|
- en |
|
- pa |
|
- ur

- or
|
pretty_name: WordProject |
|
extra_gated_fields: |
|
Name: text |
|
Email: text |
|
Affiliation: text |
|
Position: text |
|
size_categories: |
|
- 100K<n<1M
|
multilinguality: |
|
- multilingual |
|
dataset_info: |
|
- config_name: en2indic |
|
features: |
|
- name: chunked_audio_filepath |
|
dtype: audio |
|
- name: text |
|
dtype: string |
|
- name: pred_text |
|
dtype: string |
|
- name: alignment_score |
|
dtype: float64 |
|
- name: audio_filepath |
|
dtype: string |
|
- name: start_time |
|
dtype: float64 |
|
- name: duration |
|
dtype: float64 |
|
- name: hi_text |
|
dtype: string |
|
- name: hi_mining_score |
|
dtype: float64 |
|
- name: gu_text |
|
dtype: string |
|
- name: gu_mining_score |
|
dtype: float64 |
|
- name: mr_text |
|
dtype: string |
|
- name: mr_mining_score |
|
dtype: float64 |
|
- name: ta_text |
|
dtype: string |
|
- name: ta_mining_score |
|
dtype: float64 |
|
- name: kn_text |
|
dtype: string |
|
- name: kn_mining_score |
|
dtype: float64 |
|
- name: te_text |
|
dtype: string |
|
- name: te_mining_score |
|
dtype: float64 |
|
- name: ml_text |
|
dtype: string |
|
- name: ml_mining_score |
|
dtype: float64 |
|
- name: pa_text |
|
dtype: string |
|
- name: pa_mining_score |
|
dtype: float64 |
|
- name: or_text |
|
dtype: string |
|
- name: or_mining_score |
|
dtype: float64 |
|
- name: bn_text |
|
dtype: string |
|
- name: bn_mining_score |
|
dtype: float64 |
|
- name: ur_text |
|
dtype: string |
|
- name: ur_mining_score |
|
dtype: float64 |
|
splits: |
|
- name: en2indic |
|
num_bytes: 459022313 |
|
num_examples: 31396 |
|
download_size: 885392207 |
|
dataset_size: 459022313 |
|
- config_name: indic2en |
|
features: |
|
- name: chunked_audio_filepath |
|
dtype: audio |
|
- name: text |
|
dtype: string |
|
- name: pred_text |
|
dtype: string |
|
- name: audio_filepath |
|
dtype: string |
|
- name: start_time |
|
dtype: float64 |
|
- name: duration |
|
dtype: float64 |
|
- name: alignment_score |
|
dtype: float64 |
|
- name: en_text |
|
dtype: string |
|
- name: en_mining_score |
|
dtype: float64 |
|
splits: |
|
- name: bengali |
|
num_bytes: 88064466 |
|
num_examples: 86763 |
|
- name: gujarati |
|
num_bytes: 97086465 |
|
num_examples: 86907 |
|
- name: hindi |
|
num_bytes: 92964858 |
|
num_examples: 88566 |
|
- name: kannada |
|
num_bytes: 100042338 |
|
num_examples: 88566 |
|
- name: malayalam |
|
num_bytes: 104408985 |
|
num_examples: 88482 |
|
- name: marathi |
|
num_bytes: 33469422 |
|
num_examples: 29154 |
|
- name: odia |
|
num_bytes: 66077856 |
|
num_examples: 56968 |
|
- name: punjabi |
|
num_bytes: 38503306 |
|
num_examples: 2226 |
|
- name: tamil |
|
num_bytes: 47710294 |
|
num_examples: 37752 |
|
- name: telugu |
|
num_bytes: 56516434 |
|
num_examples: 58344 |
|
- name: urdu |
|
num_bytes: 50942154 |
|
num_examples: 58864 |
|
download_size: 1353556517 |
|
dataset_size: 775786578 |
|
configs: |
|
- config_name: en2indic |
|
data_files: |
|
- split: en2indic |
|
path: En-Indic/train-* |
|
- config_name: indic2en |
|
data_files: |
|
- split: bengali |
|
path: Indic-En/ben/train-* |
|
- split: gujarati |
|
path: Indic-En/guj/train-* |
|
- split: hindi |
|
path: Indic-En/hin/train-* |
|
- split: kannada |
|
path: Indic-En/kan/train-* |
|
- split: malayalam |
|
path: Indic-En/mal/train-* |
|
- split: marathi |
|
path: Indic-En/mar/train-* |
|
- split: odia |
|
path: Indic-En/ory/train-* |
|
- split: punjabi |
|
path: Indic-En/pan/train-* |
|
- split: tamil |
|
path: Indic-En/tam/train-* |
|
- split: telugu |
|
path: Indic-En/tel/train-* |
|
- split: urdu |
|
path: Indic-En/urd/train-* |
|
--- |
|
|
|
# BhasaAnuvaad: A Speech Translation Dataset for 13 Indian Languages |
|
|
|
<div style="display: flex; gap: 5px;"> |
|
<a href="https://github.com/AI4Bharat/BhasaAnuvaad"><img src="https://img.shields.io/badge/GITHUB-black?style=flat&logo=github&logoColor=white" alt="GitHub"></a> |
|
<a href="http://arxiv.org/abs/2411.04699"><img src="https://img.shields.io/badge/arXiv-2411.02538-red?style=flat" alt="ArXiv"></a> |
|
<a href="https://creativecommons.org/licenses/by/4.0/"><img src="https://img.shields.io/badge/License-CC%20BY%204.0-lightgrey.svg" alt="CC BY 4.0"></a> |
|
</div> |
|
|
|
## Dataset Description |
|
|
|
- **Homepage:** [Bhasaanuvaad Collection](https://huggingface.co/collections/ai4bharat/bhasaanuvaad-672b3790b6470eab68b1cb87) |
|
- **Repository:** [Github](https://github.com/AI4Bharat/BhasaAnuvaad) |
|
- **Paper:** [BhasaAnuvaad: A Speech Translation Dataset for 13 Indian Languages |
|
](https://arxiv.org/abs/2411.04699) |
|
|
|
|
|
## Overview |
|
|
|
BhasaAnuvaad is the largest Indic-language Automatic Speech Translation (AST) dataset, spanning over 44,400 hours of speech and 17M text segments across 13 of the 22 scheduled Indian languages and English.
|
|
|
This repository contains the [WordProject](https://www.wordproject.org/) subset of BhasaAnuvaad, i.e., parallel speech translation data sourced from WordProject.
|
|
|
### How to use |
|
|
|
The `datasets` library allows you to load and pre-process the dataset in pure Python, at scale. A single call to the `load_dataset` function downloads and prepares the dataset on your local drive.
|
|
|
|
|
- Before downloading, first complete the following steps:
|
|
|
1. Request access to the dataset and create an HF access token at [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens).
|
2. Install the dependencies and log in to HF:
|
- Install Python |
|
- Run `pip install librosa soundfile datasets huggingface_hub[cli]` |
|
- Log in with `huggingface-cli login` and paste the HF access token when prompted. See [here](https://huggingface.co/docs/huggingface_hub/guides/cli#huggingface-cli-login) for details. A programmatic alternative is sketched below.
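
If you prefer to authenticate from Python rather than the shell, a minimal sketch using `huggingface_hub.login` is shown below; reading the token from an `HF_TOKEN` environment variable is only an illustrative choice, not something the dataset requires.

```python
import os

from huggingface_hub import login

# Log in programmatically; assumes the token was exported as HF_TOKEN beforehand.
login(token=os.environ["HF_TOKEN"])
```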
|
|
|
For example, to download a config (`indic2en` or `en2indic`), specify the corresponding config name and the split you want (e.g., the `indic2en` config with its `hindi` split):
|
```python |
|
from datasets import load_dataset |
|
bhasaanuvaad = load_dataset("ai4bharat/WordProject", "indic2en", split="hindi") |
|
``` |
|
or |
|
|
|
```python |
|
from datasets import load_dataset |
|
bhasaanuvaad = load_dataset("ai4bharat/WordProject", "en2indic", split="en2indic")
|
``` |
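
In the `en2indic` config, each English audio chunk carries mined translations for several Indic languages as separate columns (`hi_text`, `ta_text`, and so on, each with a matching `*_mining_score`, as listed in `dataset_info` above). The sketch below pulls out English-Hindi pairs; the 0.8 mining-score threshold is an illustrative choice, not an official cutoff.

```python
from datasets import load_dataset

en2indic = load_dataset("ai4bharat/WordProject", "en2indic", split="en2indic")

# Keep rows that have a Hindi translation with a reasonably high mining score.
en_hi = en2indic.filter(
    lambda ex: ex["hi_text"] is not None
    and ex["hi_mining_score"] is not None
    and ex["hi_mining_score"] >= 0.8
)

row = en_hi[0]
print(row["text"])     # English transcript of the audio chunk
print(row["hi_text"])  # mined Hindi translation
```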
|
|
|
Using the `datasets` library, you can also stream the dataset on the fly by adding a `streaming=True` argument to the `load_dataset` call. In streaming mode, samples are loaded one at a time as you iterate, instead of the entire dataset being downloaded to disk.
|
```python |
|
from datasets import load_dataset |
|
bhasaanuvaad = load_dataset("ai4bharat/WordProject", "indic2en", split="hindi", streaming=True) |
|
print(next(iter(bhasaanuvaad))) |
|
``` |
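
To see what a streamed example contains, the sketch below inspects one Hindi sample from the `indic2en` config. The field names follow `dataset_info` above; the nested `array` and `sampling_rate` keys are the standard `datasets` Audio format.

```python
from datasets import load_dataset

stream = load_dataset("ai4bharat/WordProject", "indic2en", split="hindi", streaming=True)

sample = next(iter(stream))
audio = sample["chunked_audio_filepath"]             # decoded Audio feature
print(audio["sampling_rate"], len(audio["array"]))   # sampling rate and number of samples
print(sample["text"])                                # Hindi transcript of the chunk
print(sample["en_text"], sample["en_mining_score"])  # mined English translation and its score
```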
|
|
|
|
|
## Citation |
|
|
|
If you use BhasaAnuvaad in your work, please cite us: |
|
|
|
```bibtex |
|
@article{jain2024bhasaanuvaad, |
|
title = {BhasaAnuvaad: A Speech Translation Dataset for 13 Indian Languages},
|
author = {Sparsh Jain and Ashwin Sankar and Devilal Choudhary and Dhairya Suman and Nikhil Narasimhan and Mohammed Safi Ur Rahman Khan and Anoop Kunchukuttan and Mitesh M Khapra and Raj Dabre}, |
|
year = {2024}, |
|
journal = {arXiv preprint arXiv:2411.04699}
|
} |
|
``` |
|
|
|
## License |
|
|
|
This dataset is released under the [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) license.
|
|
|
## Contact |
|
|
|
For any questions or feedback, please contact: |
|
- Raj Dabre (raj.dabre@cse.iitm.ac.in) |
|
- Sparsh Jain (sjshiva8287@gmail.com) |
|
- Ashwin Sankar (ashwins1211@gmail.com) |
|
- Nikhil Narasimhan (nikhil.narasimhan99@gmail.com) |
|
- Mohammed Safi Ur Rahman Khan (safikhan2000@gmail.com) |
|
|
|
Please contact us for any copyright concerns. |
|
|
|
## Links |
|
|
|
- [GitHub Repository 💻](https://github.com/AI4Bharat/BhasaAnuvaad.git) |
|
- [Paper 📄](http://arxiv.org/abs/2411.04699) |
|
- [Hugging Face Dataset 🤗](https://huggingface.co/collections/ai4bharat/bhasaanuvaad-672b3790b6470eab68b1cb87) |