rohitp1 committed on
Commit
8ff2908
1 Parent(s): 3f53616

update model card README.md

Files changed (1): README.md (+68 -0)
README.md ADDED

---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: subhadeep_whisper_small_finetune_teacher_no_noise_libri_100_hours_100_epochs_batch_8
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# subhadeep_whisper_small_finetune_teacher_no_noise_libri_100_hours_100_epochs_batch_8

This model is a fine-tuned version of [openai/whisper-small.en](https://huggingface.co/openai/whisper-small.en) on an unspecified dataset (the model name suggests the 100-hour LibriSpeech subset).
It achieves the following results on the evaluation set (matching the last logged step in the table below):
- Loss: 0.3422
- Wer: 15.6961
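
A minimal inference sketch, assuming the checkpoint is published on the Hub under the committer's namespace (the repo id below is an assumption) and that the input audio is 16 kHz mono, as Whisper expects:

```python
# Minimal inference sketch. The repo id is an assumption based on the
# committer's username; point it at the actual Hub path of this checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="rohitp1/subhadeep_whisper_small_finetune_teacher_no_noise_libri_100_hours_100_epochs_batch_8",
)

# Transcribe a local audio file (the path is illustrative).
result = asr("sample.wav")
print(result["text"])
```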

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 100
- mixed_precision_training: Native AMP
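
A minimal sketch of how these values map onto `Seq2SeqTrainingArguments`, assuming the standard `transformers` `Seq2SeqTrainer` workflow this auto-generated card implies; the output directory is illustrative, and the Adam betas/epsilon are the library defaults, so they are not set explicitly:

```python
# Hedged reconstruction of the hyperparameters above as Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-finetune",  # illustrative path
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=256,      # 4 * 256 = 1024 effective batch size
    seed=42,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.2,
    num_train_epochs=100,
    fp16=True,                            # "Native AMP" mixed precision
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the library defaults.
)
```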

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3854        | 3.68  | 100  | 0.1916          | 12.3782 |
| 0.0214        | 7.39  | 200  | 0.2210          | 12.6356 |
| 0.0209        | 11.11 | 300  | 0.2540          | 13.5455 |
| 0.0698        | 14.79 | 400  | 0.2788          | 13.9829 |
| 0.0206        | 18.5  | 500  | 0.3106          | 14.8156 |
| 0.0236        | 22.22 | 600  | 0.3422          | 15.6961 |
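
The Wer column is presumably computed with the standard `evaluate` word-error-rate metric; a minimal sketch, with illustrative example strings:

```python
# Minimal WER computation with the `evaluate` library (illustrative strings).
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

# WER = (substitutions + insertions + deletions) / reference word count,
# scaled to a percentage to match the table above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # 16.6667 for this toy pair (1 substitution / 6 words)
```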

### Framework versions

- Transformers 4.25.1
- PyTorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.13.2