---
language: 
- bn
language_bcp47:
- bn-BD
tags:
- automatic-speech-recognition
- bn
- common_voice_9_0
- openslr_SLR53
datasets:
- common_voice_bn
- openSLR53
- multilingual_librispeech
metrics:
- wer
- cer
model-index:
- name: shahruk10/wav2vec2-xls-r-300m-bengali-commonvoice
  results:
  - task:
      type: automatic-speech-recognition
      name: Speech Recognition
    dataset:
      type: common_voice_9_0
      name: Common Voice (Bengali)
      args: common_voice_bn
    metrics:
    - type: wer
      value: 0.01793038418929547
      name: Validation WER with 5-gram LM
    - type: cer
      value: 0.08078964599673999
      name: Validation CER with 5-gram LM
license: apache-2.0
---

# Wav2Vec2-XLS-R-300M-Bengali-CommonVoice

- This model is a fine-tuned version of [arijitx/wav2vec2-xls-r-300m-bengali](https://huggingface.co/arijitx/wav2vec2-xls-r-300m-bengali) on the Common Voice 9.0 Bengali dataset. In total, the model was trained on ~300 hours of Bengali (Bangladesh accent) 16 kHz audio data.

- The training and validation partitions used were provided by the organizers of the [BUET CSE Fest 2022 DL Sprint Competition on Kaggle](https://www.kaggle.com/competitions/dlsprint).

- The model placed first on both the public and private leaderboards.

- A 5-gram language model built from the training split was used with the model during decoding; a minimal usage sketch follows this list.

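Transcribing audio with the model should look roughly like the following (a minimal sketch using the standard `transformers` Wav2Vec2 API; the audio file name is illustrative, and plain greedy CTC decoding is shown rather than the 5-gram LM decoding used for the reported scores):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_ID = "shahruk10/wav2vec2-xls-r-300m-bengali-commonvoice"

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.eval()

# Load audio and resample to the 16 kHz rate the model was trained on.
waveform, sample_rate = torchaudio.load("sample.wav")  # illustrative file name
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze(0), sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding; to apply the 5-gram language model described above,
# decode the logits with pyctcdecode / Wav2Vec2ProcessorWithLM instead.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```
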
## Metrics

- The model was evaluated on the validation set using Word Error Rate (WER) and Character Error Rate (CER). At the time, the test set labels had not been made available by the organizers of the Kaggle competition that provided the data splits.

|     Model      |   Split    |   CER   |   WER   |
|:--------------:|:----------:|:-------:|:-------:|
| With 5-gram LM | Validation | 0.08079 | 0.01793 |
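
These metrics can be computed along the following lines (a sketch assuming the `jiwer` library, not necessarily the scorer used by the competition):

```python
import jiwer

# Ground-truth validation transcripts and the model's LM-decoded outputs.
references = ["<reference transcript>"]   # placeholders
hypotheses = ["<predicted transcript>"]

wer = jiwer.wer(references, hypotheses)  # word error rate
cer = jiwer.cer(references, hypotheses)  # character error rate
print(f"WER: {wer:.6f}  CER: {cer:.6f}")
```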


## Training

- The training notebook for this model can be found on Kaggle [here](https://www.kaggle.com/code/shahruk10/training-notebook-wav2vec2).

- The inference notebook for this model can be found on Kaggle [here](https://www.kaggle.com/code/shahruk10/inference-notebook-wav2vec2).

- The model was first trained for 15 epochs on the training split with on-the-fly augmentation. Dropouts were enabled, and a cosine decay learning rate schedule starting from 3e-5 was used.

- The best iteration from the first run was further fine-tuned for 5 epochs at a constant learning rate of 1e-7 with dropouts disabled; a sketch of both stages follows below.
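
The two-stage setup can be sketched roughly as follows (assumptions: a PyTorch optimizer/scheduler and `transformers` config overrides; the actual Kaggle notebook may differ in details such as the optimizer and steps per epoch):

```python
import torch
from transformers import Wav2Vec2ForCTC

# Stage 1: cosine decay from 3e-5 over 15 epochs, dropouts enabled.
model = Wav2Vec2ForCTC.from_pretrained("arijitx/wav2vec2-xls-r-300m-bengali")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
steps_per_epoch = 1000  # illustrative; depends on batch size and dataset size
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=15 * steps_per_epoch
)
# ... training loop: loss.backward(); optimizer.step(); scheduler.step() ...

# Stage 2: reload the best stage-1 checkpoint with dropouts zeroed out
# and fine-tune for 5 more epochs at a constant 1e-7.
model = Wav2Vec2ForCTC.from_pretrained(
    "path/to/best-stage1-checkpoint",  # hypothetical path
    attention_dropout=0.0,
    hidden_dropout=0.0,
    feat_proj_dropout=0.0,
    final_dropout=0.0,
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-7)
```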