kingabzpro committed
Commit: ea89a85
1 Parent(s): d2483aa

update model card README.md

Files changed (1):
  1. README.md +12 -13
README.md CHANGED
@@ -15,9 +15,9 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-urdu-urm-60](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-urdu-urm-60) on the common_voice dataset.
  It achieves the following results on the evaluation set:
- - Loss: 8.8609
- - Wer: 0.5948
- - Cer: 0.3176
+ - Loss: 6.4496
+ - Wer: 0.5913
+ - Cer: 0.3310
 
  ## Model description
 
@@ -36,7 +36,7 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 0.0001
+ - learning_rate: 0.0003
  - train_batch_size: 16
  - eval_batch_size: 8
  - seed: 42
@@ -44,21 +44,20 @@ The following hyperparameters were used during training:
  - total_train_batch_size: 32
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 100
- - num_epochs: 30
+ - lr_scheduler_warmup_steps: 200
+ - num_epochs: 50
  - mixed_precision_training: Native AMP
 
  ### Training results
 
  | Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
  |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
- | 24.6193 | 4.17 | 50 | 8.8884 | 1.4349 | 0.6538 |
- | 4.0847 | 8.33 | 100 | 8.9820 | 0.8175 | 0.4775 |
- | 2.7909 | 12.5 | 150 | 10.4491 | 0.6559 | 0.4129 |
- | 1.8326 | 16.67 | 200 | 8.7698 | 0.6105 | 0.3530 |
- | 1.2727 | 20.83 | 250 | 8.7352 | 0.6061 | 0.3302 |
- | 1.0649 | 25.0 | 300 | 8.7588 | 0.6079 | 0.3240 |
- | 1.0751 | 29.17 | 350 | 8.8609 | 0.5948 | 0.3176 |
+ | 12.6045 | 8.33 | 100 | 8.4997 | 0.6978 | 0.3923 |
+ | 1.3367 | 16.67 | 200 | 5.0015 | 0.6515 | 0.3556 |
+ | 0.5344 | 25.0 | 300 | 9.3687 | 0.6393 | 0.3625 |
+ | 0.2922 | 33.33 | 400 | 9.2381 | 0.6236 | 0.3432 |
+ | 0.1867 | 41.67 | 500 | 6.2150 | 0.6035 | 0.3448 |
+ | 0.1166 | 50.0 | 600 | 6.4496 | 0.5913 | 0.3310 |
 
 
  ### Framework versions
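
The hyperparameters and results table above are internally consistent, and the relationship can be sanity-checked with a short sketch. Note the gradient-accumulation value below is not stated in the diff; it is derived from `total_train_batch_size / train_batch_size`, following how the Hugging Face `Trainer` reports that quantity:

```python
# Sanity-check the step/epoch bookkeeping in the training-results table.
# Assumption (not stated above): total_train_batch_size equals
# train_batch_size * gradient_accumulation_steps, as in the HF Trainer.

train_batch_size = 16
total_train_batch_size = 32
grad_accum_steps = total_train_batch_size // train_batch_size  # derived: 2

num_epochs = 50
final_step = 600  # last row of the results table
steps_per_epoch = final_step / num_epochs  # 12.0

# Epoch value that would be logged at each evaluation step (every 100 steps):
logged_epochs = [round(step / steps_per_epoch, 2) for step in range(100, 700, 100)]

print(grad_accum_steps)  # 2
print(steps_per_epoch)   # 12.0
print(logged_epochs)     # [8.33, 16.67, 25.0, 33.33, 41.67, 50.0]
```

The computed epoch values match the Epoch column of the table row for row, which confirms the table was logged at a fixed 100-step evaluation interval over 600 total steps.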