einargizz committed on
Commit
32ec2a9
1 Parent(s): bdaed18

update model card README.md

Files changed (1): README.md (+11, -8)
README.md CHANGED
@@ -21,7 +21,7 @@ model-index:
   metrics:
   - name: Wer
     type: wer
-    value: 122.36095346197501
+    value: 45.68657478305258
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,8 +31,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the samromur_children dataset.
 It achieves the following results on the evaluation set:
-- Loss: 3.4991
-- Wer: 122.3610
+- Loss: 0.6322
+- Wer: 45.6866
 
 ## Model description
 
@@ -59,15 +59,18 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 64
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 1
-- training_steps: 1
+- lr_scheduler_warmup_steps: 500
+- training_steps: 1000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Wer |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 3.7033 | 1.0 | 1 | 3.4991 | 122.3610 |
+| Training Loss | Epoch | Step | Validation Loss | Wer |
+|:-------------:|:-----:|:----:|:---------------:|:-------:|
+| 1.4343 | 0.25 | 250 | 1.5134 | 79.0199 |
+| 0.7719 | 0.5 | 500 | 0.8724 | 61.2047 |
+| 0.7181 | 0.75 | 750 | 0.6547 | 47.3201 |
+| 0.5734 | 1.0 | 1000 | 0.6322 | 45.6866 |
 
 
 ### Framework versions
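
A note on the pre-fix metric: a Wer of 122.36 looks like a reporting bug, but it is a legitimate value. WER divides substitutions + deletions + insertions by the number of *reference* words, so a barely-trained model (here, one trained for a single step) that emits extra words can score above 100. A minimal sketch of the computation (illustrative only; the `wer` helper below is not part of the card or the Trainer):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate as a percentage: word-level edit distance
    (substitutions + deletions + insertions) over reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Standard Levenshtein dynamic-programming table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)
```

Because insertions are counted, `wer("halló", "halló heimur heimur")` is 200.0: two inserted words against a one-word reference. The post-fix run (1000 steps, 500 warmup) brings the evaluation WER down to 45.69.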