---
language:
- bn
license: apache-2.0
tags:
- automatic-speech-recognition
- openslr_SLR53
- robust-speech-event
datasets:
- openslr
- SLR53
metrics:
- wer
- cer
model-index:
- name: Tahsin-Mayeesha/wav2vec2-bn-300m
  results:
  - task:
      type: automatic-speech-recognition
      name: Speech Recognition
    dataset:
      type: openslr
      name: Open SLR
      args: SLR53
    metrics:
    - type: wer
      value: 0.31104373941386626
      name: Test WER
    - type: cer
      value: 0.07263099973420006
      name: Test CER
---

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the OpenSLR SLR53 Bengali dataset.
It achieves the following results on the evaluation set:
- WER: 0.3467
- CER: 0.072

Note: 10% of the total 218,703 samples (21,871 examples) were used for evaluation. Training was stopped after 30k steps. Output predictions are available in the files section.
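
For quick inference, the checkpoint can be loaded through the `transformers` ASR pipeline. This is a minimal sketch rather than part of the original training scripts: the audio filename below is a placeholder, and the input is assumed to be a 16 kHz mono recording, as expected by wav2vec 2.0 models.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint via the automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="Tahsin-Mayeesha/wav2vec2-bn-300m",
)

# "speech_bn.wav" is a hypothetical 16 kHz mono Bengali audio file.
result = asr("speech_bn.wav")
print(result["text"])
```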

### Training hyperparameters

The following hyperparameters were used during training (mapped to `TrainingArguments` in the sketch after this list):
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 4
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- mixed_precision_training: Native AMP
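
As a rough guide, here is how these settings might map onto `transformers.TrainingArguments`. This is an illustrative sketch, not the original training script: `output_dir` is a placeholder, and `max_steps` reflects the 30k-step stopping point noted above.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="./wav2vec2-bn-300m",  # placeholder output directory
    learning_rate=7.5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,
    adam_beta1=0.9,                   # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    max_steps=30000,                  # training stopped after 30k steps
    fp16=True,                        # Native AMP mixed precision
)
```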

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0