kpriyanshu256 committed
Commit 6cd7ba5
1 Parent(s): aee8cb8

update model card README.md

Files changed (1): README.md (+101 -0)
README.md ADDED

---
language:
- as
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
metrics:
- wer
model-index:
- name: kpriyanshu256/whisper-large-v2-as-600-32-1e-05-bn-Assamese
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 11.0
      type: mozilla-foundation/common_voice_11_0
      config: as
      split: test
      args: as
    metrics:
    - name: Wer
      type: wer
      value: 17.560007218913555
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: FLEURS
      type: google/fleurs
    metrics:
    - name: Wer
      type: wer
      value: 17.560007218913555
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# kpriyanshu256/whisper-large-v2-as-600-32-1e-05-bn-Assamese

This model is a fine-tuned version of [kpriyanshu256/whisper-large-v2-as-600-32-1e-05-bn](https://huggingface.co/kpriyanshu256/whisper-large-v2-as-600-32-1e-05-bn) on the Common Voice 11.0 and the FLEURS datasets.
It achieves the following results on the evaluation set:
- Loss: 0.2486
- Wer: 17.5600
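
As a quick way to try the checkpoint, it can be loaded with the standard `transformers` ASR pipeline. A minimal sketch, assuming a local audio file; "audio.wav" is a placeholder path and the chunk length is only a common choice for Whisper-sized windows:

```python
# Minimal inference sketch; "audio.wav" is a placeholder path.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="kpriyanshu256/whisper-large-v2-as-600-32-1e-05-bn-Assamese",
)

# Whisper operates on 30-second windows; chunking lets the pipeline
# stitch together transcriptions of longer recordings.
result = asr("audio.wav", chunk_length_s=30)
print(result["text"])
```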

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
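
While the section above is left open, the two datasets named in the card metadata can be loaded with the `datasets` library. This is a sketch under assumptions: the FLEURS config for Assamese is taken to be `as_in`, the split choices are illustrative, and Common Voice 11.0 is gated, so its terms must be accepted on the Hub first.

```python
# Sketch of loading the datasets listed in the metadata; the FLEURS config
# "as_in" and the split choices are assumptions, not stated in the card.
from datasets import load_dataset

common_voice = load_dataset(
    "mozilla-foundation/common_voice_11_0",  # gated: accept the terms on the Hub
    "as",
    split="train+validation",
)
fleurs = load_dataset("google/fleurs", "as_in", split="train")
```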

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 1000
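
The list above maps naturally onto `transformers.Seq2SeqTrainingArguments`. The sketch below is a reconstruction under that assumption: `output_dir` and `fp16` are placeholders not taken from the card, and `eval_steps=100` is inferred from the evaluation cadence in the results table.

```python
# Hypothetical reconstruction of the configuration from the list above;
# output_dir and fp16 are assumptions, the rest mirrors the reported values.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-as",  # placeholder
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,       # 16 x 2 = reported total batch size 32
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_steps=50,
    max_steps=1000,
    seed=42,
    fp16=True,                           # assumption; precision is not stated
    evaluation_strategy="steps",
    eval_steps=100,                      # matches the 100-step cadence below
    predict_with_generate=True,
)
```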

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1273        | 0.1   | 100  | 0.1737          | 20.8988 |
| 0.0811        | 0.2   | 200  | 0.1739          | 19.0038 |
| 0.0638        | 0.3   | 300  | 0.1823          | 18.4804 |
| 0.0404        | 1.05  | 400  | 0.1893          | 17.1810 |
| 0.0316        | 1.15  | 500  | 0.2067          | 17.0186 |
| 0.027         | 1.25  | 600  | 0.2081          | 17.7405 |
| 0.025         | 2.01  | 700  | 0.2213          | 17.7585 |
| 0.0213        | 2.11  | 800  | 0.2237          | 17.8488 |
| 0.0176        | 2.21  | 900  | 0.2390          | 16.7479 |
| 0.0184        | 2.31  | 1000 | 0.2486          | 17.5600 |

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2