arampacha committed
Commit d1e8d3e
1 Parent(s): da73c48

update model card README.md

Files changed (1): README.md (+86 -0)
README.md ADDED

---
language:
- hy
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
metrics:
- wer
model-index:
- name: whisper-base-hy
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 11.0
      type: mozilla-foundation/common_voice_11_0
      config: hy-AM
      split: test
      args: hy-AM
    metrics:
    - name: Wer
      type: wer
      value: 22.36842105263158
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# whisper-base-hy

This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2204
- WER: 22.3684
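
As a quick usage check, the checkpoint can be run through the `transformers` ASR pipeline. The sketch below is not part of the original card: the Hub id `arampacha/whisper-base-hy` and the audio filename are assumptions.

```python
# Minimal inference sketch. The repo id below is an assumption based on the
# card's title and author; substitute the actual Hub id of this checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="arampacha/whisper-base-hy",  # hypothetical Hub id
)

# Transcribe a local Armenian audio file (anything ffmpeg can decode).
print(asr("sample_hy.wav")["text"])
```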

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
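
While this section is still to be filled in, the evaluation split named in the metadata above (Common Voice 11.0, config `hy-AM`, split `test`) can be inspected with the `datasets` library. A minimal sketch, assuming access to the gated Common Voice dataset:

```python
# Sketch: load the evaluation split named in this card's model-index.
# Common Voice 11.0 is gated on the Hub, so authentication may be required.
from datasets import load_dataset

cv_test = load_dataset(
    "mozilla-foundation/common_voice_11_0",
    "hy-AM",          # Armenian config, as in the metadata above
    split="test",
)
print(cv_test)
```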

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a code sketch of these settings follows the list):
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 2000
- mixed_precision_training: Native AMP
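
These settings correspond roughly to the `transformers` `Seq2SeqTrainingArguments` below. This is a reconstruction from the list, not the original training script: `output_dir` is a placeholder, and the 400-step evaluation cadence is inferred from the results table that follows.

```python
# Sketch of the hyperparameters above as Seq2SeqTrainingArguments.
# Adam betas/epsilon match the library defaults, so they are not set here.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-hy",   # placeholder, not from the card
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=16,   # 4 * 16 = 64 total train batch size
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_steps=200,
    max_steps=2000,
    seed=42,
    fp16=True,                        # "Native AMP" mixed precision
    evaluation_strategy="steps",      # inferred from the results table
    eval_steps=400,
    predict_with_generate=True,       # assumed for WER evaluation
)
```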

### Training results

| Training Loss | Epoch | Step | Validation Loss | WER     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1394        | 5.87  | 400  | 0.1780          | 28.2895 |
| 0.0536        | 11.75 | 800  | 0.1739          | 24.6053 |
| 0.0247        | 17.64 | 1200 | 0.2098          | 22.9605 |
| 0.0154        | 23.52 | 1600 | 0.2035          | 22.1382 |
| 0.0103        | 29.41 | 2000 | 0.2204          | 22.3684 |
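
The WER figures above are percentages. A minimal sketch of computing the same metric with the `evaluate` library, using placeholder transcripts rather than real evaluation data:

```python
# Minimal sketch of computing word error rate with the `evaluate` library.
# The reference/prediction strings are placeholders, not real eval data.
import evaluate

wer_metric = evaluate.load("wer")
wer = wer_metric.compute(
    references=["barev dzez"],   # placeholder reference transcript
    predictions=["barev dez"],   # placeholder model output
)
print(f"WER: {100 * wer:.2f}%")  # the card reports WER as a percentage
```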

### Framework versions

- Transformers 4.26.0.dev0
- PyTorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2