---
language:
- bn
license: cc-by-nc-4.0
task_categories:
- automatic-speech-recognition
dataset_info:
  features:
  - name: audio
    dtype: audio
  - name: text
    dtype: string
  - name: duration
    dtype: float64
  - name: category
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 219091915.875
    num_examples: 1753
  download_size: 214321460
  dataset_size: 219091915.875
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# MegaBNSpeech

This dataset is based on a study aimed at tackling one of the primary challenges in developing Automatic Speech Recognition (ASR) for a low-resource language, Bangla: limited access to domain-specific labeled data. To address this, the study introduces a pseudo-labeling approach to build a domain-agnostic ASR dataset.

The methodology led to the creation of a robust labeled Bangla speech dataset of more than 20,000 hours, covering a wide variety of topics, speaking styles, dialects, noisy environments, and conversational scenarios. Using this data, a Conformer-based ASR system was designed. The effectiveness of the model, especially when trained on pseudo-labeled data, was benchmarked on publicly available datasets and compared against other models. The experimental resources from this study are planned to be made publicly available.

## How to use:

The datasets library lets you load and process this dataset efficiently with just a few lines of Python. You can download and prepare it on your local drive with a single call to the *load_dataset* function.
```python
from datasets import load_dataset
dataset = load_dataset("hishab/MegaBNSpeech", split="train")
```

With the datasets library, you can also stream the dataset in real time by passing the `streaming=True` parameter to *load_dataset*. In streaming mode, samples are loaded one at a time instead of the whole dataset being downloaded to disk.
```python
from datasets import load_dataset
dataset = load_dataset("hishab/MegaBNSpeech", split="train", streaming=True)
print(next(iter(dataset)))
```
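Because streaming returns an `IterableDataset`, you can also preview a handful of examples without materializing anything on disk. A minimal sketch (the `take` method is available in recent versions of the datasets library; the field names follow the dataset metadata at the top of this card):
```python
from datasets import load_dataset

# stream the train split and peek at the first few examples
streamed = load_dataset("hishab/MegaBNSpeech", split="train", streaming=True)

for sample in streamed.take(3):  # keep only the first 3 streamed examples
    print(sample["text"], sample["duration"])
```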
## Speech Recognition (ASR)

```python
from datasets import load_dataset

mega_bn_asr = load_dataset("hishab/MegaBNSpeech")

# see structure
print(mega_bn_asr)

# load audio sample on the fly
audio_input = mega_bn_asr["train"][0]["audio"]  # first decoded audio sample
transcription = mega_bn_asr["train"][0]["text"]  # first transcription (the field is named "text" in this dataset)
# use `audio_input` and `transcription` to fine-tune your model for ASR
```
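Most widely used ASR checkpoints (Wav2Vec2- or Whisper-style models, for example) expect 16 kHz input. The card does not state the native sampling rate of the audio, so treat the target rate below as an assumption about your downstream model rather than a property of the dataset; the datasets library can resample the audio column on the fly:
```python
from datasets import load_dataset, Audio

mega_bn_asr = load_dataset("hishab/MegaBNSpeech", split="train")

# re-decode the audio at 16 kHz on access (16 kHz is an assumption tied to the
# downstream model, not something documented by this dataset card)
mega_bn_asr = mega_bn_asr.cast_column("audio", Audio(sampling_rate=16000))

sample = mega_bn_asr[0]
print(sample["audio"]["sampling_rate"])  # 16000
print(sample["text"])                    # the transcription lives in the "text" field
```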
## Data Structure
- The dataset was developed using a pseudo-labeling approach.
- The largest collection of Bangla audio-video data was curated and cleaned from various Bangla TV channels on YouTube. This data covers varying domains, speaking styles, dialects, and communication channels.
- Alignments from two ASR systems were leveraged to segment and automatically annotate the audio segments (an illustrative sketch of agreement-based filtering appears after this list).
- The created dataset was used to design an end-to-end state-of-the-art Bangla ASR system.
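The card does not describe exactly how the two systems' outputs were combined, so the sketch below only illustrates one common agreement-based filtering idea: keep a segment's pseudo-label when the two hypotheses are close in character error rate. The threshold, function names, and the choice of system A's hypothesis as the label are all illustrative assumptions, not the authors' published pipeline.
```python
from typing import List, Tuple

def char_error_rate(ref: str, hyp: str) -> float:
    """Character-level Levenshtein distance normalized by the reference length."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1] / max(len(ref), 1)

def filter_by_agreement(segments: List[Tuple[str, str]], threshold: float = 0.1) -> List[str]:
    """Keep a pseudo-label only when two ASR systems roughly agree on a segment.

    `segments` holds (hypothesis_a, hypothesis_b) pairs; the 0.1 threshold is an
    illustrative choice, not a value taken from the paper.
    """
    kept = []
    for hyp_a, hyp_b in segments:
        if char_error_rate(hyp_a, hyp_b) <= threshold:
            kept.append(hyp_a)  # treat system A's hypothesis as the pseudo-label
    return kept

# toy usage: identical hypotheses pass, divergent ones are dropped
print(filter_by_agreement([("hello world", "hello world"), ("hello world", "goodbye")]))
```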
 
### Data Instances
- Size of downloaded dataset files: ___ GB
- Size of the generated dataset: ___ MB
- Total amount of disk used: ___ GB
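The exact figures above are left blank in the card, but the numbers recorded in the dataset metadata can be read programmatically. A minimal sketch using the standard datasets builder API (sizes are reported in bytes):
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("hishab/MegaBNSpeech")
info = builder.info

print("download size (bytes):", info.download_size)
print("dataset size (bytes): ", info.dataset_size)
print("train examples:       ", info.splits["train"].num_examples)
```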

An example of a data instance looks as follows:
```
 {
  "id": 0,
  "audio_path": "data/train/wav/UCPREnbhKQP-hsVfsfKP-mCw_id_2kux6rFXMeM_85.wav",
  "transcription": "পরীক্ষার মূল্য তালিকা উন্মুক্ত স্থানে প্রদর্শনের আদেশ দেন এই আদেশ পাওয়ার",
  "duration": 5.055
 }
```
### Data Fields
The data fields are described below.
- **id** (int): ID of audio sample
- **audio_path** (str): Path to the audio file
- **transcription** (str): Transcription of the audio file
- **duration** (float): Duration of the audio in seconds
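
Note that the split published on the Hub exposes the columns listed in the metadata at the top of this card (`audio`, `text`, `duration`, `category`, `source`), so the loaded columns can differ slightly from the field names in the instance example above. A quick way to check, plus a small aggregate over the `duration` field:
```python
from datasets import load_dataset

ds = load_dataset("hishab/MegaBNSpeech", split="train")

# inspect the actual column names and feature types
print(ds.column_names)  # expected: ['audio', 'text', 'duration', 'category', 'source']
print(ds.features)

# total duration of the released split, in hours
total_hours = sum(ds["duration"]) / 3600
print(f"{total_hours:.2f} hours")
```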

### Dataset Creation
The dataset was developed using a pseudo-labeling approach, yielding a large-scale, high-quality speech corpus of approximately 20,000 hours for domain-agnostic Bangla ASR.

## Social Impact of Dataset

## Limitations

## Citation Information
You can access the MegaBNSpeech paper at _________________. Please cite the paper when referencing the MegaBNSpeech corpus:
```
@article{_______________,
  title = {_______________________________},
  author = {___,___,___,___,___,___,___,___},
  journal={_______________________________},
  url = {_________________________________},
  year = {2023},
}
```