SarwarShafee committed on
Commit
1a99978
1 Parent(s): db7fd0e

Update README.md

Files changed (1): README.md (+87 -1)
README.md CHANGED
@@ -4,4 +4,90 @@ task_categories:
  - automatic-speech-recognition
  language:
  - bn
- ---
+ ---
+
+ # MegaBNSpeech
+
+ This dataset is based on a study aimed at tackling one of the primary challenges in developing Automatic Speech Recognition (ASR) for a low-resource language (Bangla): limited access to domain-specific labeled data. To address this, the study introduces a pseudo-labeling approach to develop a domain-agnostic ASR dataset.
+
+ The methodology led to the creation of a robust labeled Bangla speech dataset of more than 20,000 hours, encompassing a wide variety of topics, speaking styles, dialects, noisy environments, and conversational scenarios. Using this data, a conformer-based ASR system was designed and benchmarked on publicly available datasets against other models, with particular attention to its performance when trained on pseudo-labeled data. The experimental resources stemming from the study are planned to be made publicly available.
+
+ ## How to use
+
+ The `datasets` library allows you to load and process this dataset efficiently using just Python. You can download and prepare it on your local drive with a single call to the *load_dataset* function.
+ ```python
+ from datasets import load_dataset
+ dataset = load_dataset("hishab/MegaBNSpeech", split="train")
+ ```
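+
+ If you are not sure which splits the repository provides, you can list them before downloading anything. This is a minimal sketch using the standard `datasets` utility `get_dataset_split_names`; only the "train" split is confirmed by the examples in this card.
+ ```python
+ from datasets import get_dataset_split_names
+
+ # list the splits hosted in the repository without downloading the audio
+ print(get_dataset_split_names("hishab/MegaBNSpeech"))
+ ```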
+
+ With the `datasets` library, you can also stream the dataset in real time by passing `streaming=True` to the *load_dataset* function. In streaming mode, samples are loaded one at a time as you iterate, instead of the whole dataset being downloaded to disk first.
+ ```python
+ from datasets import load_dataset
+ dataset = load_dataset("hishab/MegaBNSpeech", split="train", streaming=True)
+ print(next(iter(dataset)))
+ ```
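+
+ A streamed dataset can still be shuffled and sliced. The sketch below is a usage example under the same assumptions as above: `shuffle` keeps a small in-memory buffer for approximate shuffling, and `take` limits iteration to the first few samples. The `transcription` field name follows the example instance shown later in this card.
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("hishab/MegaBNSpeech", split="train", streaming=True)
+
+ # approximate shuffling with a fixed-size buffer, then inspect a few samples
+ for sample in dataset.shuffle(seed=42, buffer_size=1_000).take(3):
+     print(sample["transcription"])
+ ```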
+
+ ## Speech Recognition (ASR)
+
+ ```python
+ from datasets import load_dataset
+
+ mega_bn_asr = load_dataset("hishab/MegaBNSpeech")
+
+ # see structure
+ print(mega_bn_asr)
+
+ # load audio sample on the fly
+ audio_input = mega_bn_asr["train"][0]["audio"]  # first decoded audio sample
+ transcription = mega_bn_asr["train"][0]["transcription"]  # first transcription
+ # use `audio_input` and `transcription` to fine-tune your model for ASR
+ ```
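+
+ Most ASR models expect 16 kHz mono input. The snippet below is a minimal sketch of preparing a sample for fine-tuning by resampling the audio column with the `Audio` feature from `datasets`; the `audio` column name and the 16 kHz target are assumptions based on the example above and common ASR setups, not guarantees from this card.
+ ```python
+ from datasets import load_dataset, Audio
+
+ mega_bn_asr = load_dataset("hishab/MegaBNSpeech", split="train")
+
+ # decode the audio column at 16 kHz on access (resampling happens lazily, per sample)
+ mega_bn_asr = mega_bn_asr.cast_column("audio", Audio(sampling_rate=16_000))
+
+ sample = mega_bn_asr[0]
+ waveform = sample["audio"]["array"]               # float waveform as a numpy array
+ sampling_rate = sample["audio"]["sampling_rate"]  # 16000 after casting
+ text = sample["transcription"]
+ print(waveform.shape, sampling_rate, text)
+ ```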
+
+ ## Data Structure
+ - The dataset was developed using a pseudo-labeling approach.
+ - The largest collection of Bangla audio-video data was curated and cleaned from various Bangla TV channels on YouTube. This data covers varying domains, speaking styles, dialects, and communication channels.
+ - Alignments from two ASR systems were leveraged to segment and automatically annotate the audio segments.
+ - The created dataset was used to design an end-to-end state-of-the-art Bangla ASR system.
+
+ ### Data Instances
+ - Size of downloaded dataset files: ___ GB
+ - Size of the generated dataset: ___ MB
+ - Total amount of disk used: ___ GB
+
+ An example of a data instance looks as follows:
+ ```
+ {
+   "id": 0,
+   "channel_id": "UCPREnbhKQP-hsVfsfKP-mCw",
+   "channel_name": "NEWS24",
+   "video_id": "2kux6rFXMeM",
+   "audio_path": "data/train/wav/UCPREnbhKQP-hsVfsfKP-mCw_id_2kux6rFXMeM_85.wav",
+   "transcription": "পরীক্ষার মূল্য তালিকা উন্মুক্ত স্থানে প্রদর্শনের আদেশ দেন এই আদেশ পাওয়ার",
+   "duration": 5.055
+ }
+ ```
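+
+ The `duration` field appears to be in seconds (5.055 in the instance above). Assuming that, a rough estimate of the amount of speech in a split can be obtained by streaming the data and summing the per-sample durations; the snippet below is a sketch under that assumption.
+ ```python
+ from datasets import load_dataset
+
+ stream = load_dataset("hishab/MegaBNSpeech", split="train", streaming=True)
+
+ # sum per-sample durations (assumed to be seconds) over the first N samples
+ n, total_seconds = 10_000, 0.0
+ for sample in stream.take(n):
+     total_seconds += sample["duration"]
+ print(f"~{total_seconds / 3600:.1f} hours of speech in the first {n} samples")
+ ```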
+
+ ### Data Fields
+ The data fields are listed below.
+ - **id** (int): ID of the audio sample
+ - **num_samples** (int): Number of float values
+ - **path** (str): Path to the audio file
+ - **audio** (dict): Audio object including the loaded audio array, the sampling rate, and the path to the audio file
+ - **raw_transcription** (str): The non-normalized transcription of the audio file
+ - **transcription** (str): Transcription of the audio file
+ - **lang_id** (int): Class id of the language
+
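+ The field list above and the example instance in the previous section differ slightly; the loaded dataset itself is the authoritative reference. A quick way to check the actual schema without downloading the data:
+ ```python
+ from datasets import load_dataset_builder
+
+ # print the declared feature names and types
+ builder = load_dataset_builder("hishab/MegaBNSpeech")
+ print(builder.info.features)
+ ```
+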
+ ### Dataset Creation
+ The dataset was developed using a pseudo-labeling approach, yielding an extensive, large-scale, and high-quality speech dataset of approximately 20,000 hours for domain-agnostic Bangla ASR.
+
+ ## Social Impact of Dataset
+
+ ## Limitations
+
+ ## Citation Information
+ You can access the MegaBNSpeech paper at _________________. Please cite the paper when referencing the MegaBNSpeech corpus as:
+ ```
+ @article{_______________,
+   title   = {_______________________________},
+   author  = {___,___,___,___,___,___,___,___},
+   journal = {_______________________________},
+   url     = {_________________________________},
+   year    = {2023},