sagorhishab committed Update README.md (commit de1c6e4, parent: 2a87aff)
    path: data/train-*
---
# MegaBNSpeech

To evaluate the performance of the models, we used four test sets. Two were developed as part of the MegaBNSpeech corpus, while the other two (Fleurs and Common Voice) are standard test sets widely recognized by the speech community.

## How to use

The `datasets` library lets you load and process this dataset efficiently in Python. You can download and set it up on your local drive with a single call to the `load_dataset` function:

```python
from datasets import load_dataset

dataset = load_dataset("hishab/MegaBNSpeech", split="train")
```

With the `datasets` library, you can also stream the dataset in real time by passing `streaming=True` to `load_dataset`. In streaming mode, samples are loaded one at a time instead of storing the whole dataset on disk:

```python
from datasets import load_dataset

dataset = load_dataset("hishab/MegaBNSpeech", split="train", streaming=True)
print(next(iter(dataset)))
```
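When you only need a handful of samples from a streamed split, the usual pattern is `itertools.islice` over the iterable. A minimal stdlib-only sketch of that access pattern, using a stand-in generator in place of the real streamed dataset so it runs without downloading anything:

```python
from itertools import islice

# Stand-in for a streamed split: any iterable of sample dicts behaves the
# same way as the iterable returned with streaming=True.
def fake_stream():
    i = 0
    while True:
        yield {"id": i, "duration": 5.0}
        i += 1

# Take the first 3 samples without materializing the whole dataset.
first_three = list(islice(fake_stream(), 3))
print([s["id"] for s in first_three])  # → [0, 1, 2]
```

The same `islice` call works unchanged on the object returned by `load_dataset(..., streaming=True)`.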
## Speech Recognition (ASR)

```python
from datasets import load_dataset

mega_bn_asr = load_dataset("hishab/MegaBNSpeech")

# Inspect the dataset structure
print(mega_bn_asr)

# Load an audio sample on the fly
audio_input = mega_bn_asr["train"][0]["audio"]  # first decoded audio sample
transcription = mega_bn_asr["train"][0]["transcription"]  # first transcription

# Use `audio_input` and `transcription` to fine-tune your model for ASR
```

## Data Structure

- The dataset was developed using a pseudo-labeling approach.
- The largest collection of Bangla audio-video data was curated and cleaned from various Bangla TV channels on YouTube, covering varying domains, speaking styles, dialects, and communication channels.
- Alignments from two ASR systems were leveraged to segment and automatically annotate the audio.
- The resulting dataset was used to build an end-to-end, state-of-the-art Bangla ASR system.

### Data Instances

- Size of downloaded dataset files: ___ GB
- Size of the generated dataset: ___ MB
- Total amount of disk used: ___ GB

An example of a data instance looks as follows:

```json
{
  "id": 0,
  "audio_path": "data/train/wav/UCPREnbhKQP-hsVfsfKP-mCw_id_2kux6rFXMeM_85.wav",
  "transcription": "পরীক্ষার মূল্য তালিকা উন্মুক্ত স্থানে প্রদর্শনের আদেশ দেন এই আদেশ পাওয়ার",
  "duration": 5.055
}
```
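Each instance is plain JSON-like data, so it can be inspected with the standard library alone. A small sketch that parses the example instance above and reads its fields:

```python
import json

# The example instance above, as a JSON document.
instance_json = """
{
  "id": 0,
  "audio_path": "data/train/wav/UCPREnbhKQP-hsVfsfKP-mCw_id_2kux6rFXMeM_85.wav",
  "transcription": "পরীক্ষার মূল্য তালিকা উন্মুক্ত স্থানে প্রদর্শনের আদেশ দেন এই আদেশ পাওয়ার",
  "duration": 5.055
}
"""
instance = json.loads(instance_json)

print(instance["duration"])                     # 5.055
print(instance["audio_path"].endswith(".wav"))  # True
```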

### Data Fields

The data fields are as follows:

- **id** (int): ID of the audio sample
- **audio_path** (str): Path to the audio file
- **transcription** (str): Transcription of the audio file
- **duration** (float): Duration of the audio in seconds

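A common preprocessing step over these fields is filtering by `duration`. A stdlib-only sketch over hypothetical in-memory instances (the 30-second cutoff is an assumption, not part of this dataset):

```python
# Hypothetical in-memory instances with the fields described above.
instances = [
    {"id": 0, "audio_path": "a.wav", "transcription": "...", "duration": 5.055},
    {"id": 1, "audio_path": "b.wav", "transcription": "...", "duration": 31.2},
    {"id": 2, "audio_path": "c.wav", "transcription": "...", "duration": 12.4},
]

# Keep clips of at most 30 seconds (a common limit when fine-tuning ASR
# models) and total up the audio that remains.
kept = [x for x in instances if x["duration"] <= 30.0]
total_seconds = sum(x["duration"] for x in kept)

print(len(kept))                  # 2
print(round(total_seconds, 3))    # 17.455
```

With the real dataset, the same predicate can be passed to `datasets`' `filter` method instead of a list comprehension.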
### Dataset Creation

The dataset was developed using a pseudo-labeling approach: an extensive, large-scale, high-quality speech dataset of approximately 20,000 hours was created for domain-agnostic Bangla ASR.

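The corpus description mentions leveraging alignments from two ASR systems to automatically annotate segments. As an illustrative sketch only (not the authors' actual pipeline), one common variant of pseudo-labeling keeps a segment's transcript only when two independent ASR hypotheses largely agree:

```python
def agreement(hyp_a: str, hyp_b: str) -> float:
    """Fraction of aligned word positions where the two hypotheses match
    (a simple position-wise proxy, not a full edit-distance alignment)."""
    a, b = hyp_a.split(), hyp_b.split()
    if not a or not b:
        return 0.0
    matches = sum(1 for x, y in zip(a, b) if x == y)
    return matches / max(len(a), len(b))

# Hypothetical segments, each transcribed by two independent ASR systems.
segments = [
    {"id": 0, "asr_a": "the quick brown fox", "asr_b": "the quick brown fox"},
    {"id": 1, "asr_a": "jumped over it", "asr_b": "pumped over a log"},
]

# Accept a segment's transcript as a pseudo-label above a chosen threshold.
accepted = [s for s in segments if agreement(s["asr_a"], s["asr_b"]) >= 0.9]
print([s["id"] for s in accepted])  # → [0]
```

In practice the threshold, the alignment method, and the tie-breaking between the two systems are all design choices of the corpus builders.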
## Social Impact of Dataset

## Limitations

## Citation Information

You can access the MegaBNSpeech paper at _________________. Please cite the paper when referencing the MegaBNSpeech corpus:

```bibtex
@article{_______________,
  title   = {_______________________________},
  author  = {___,___,___,___,___,___,___,___},
  journal = {_______________________________},
  url     = {_________________________________},
  year    = {2023},
}
```