mprzibilla committed
Commit
345c7a5
1 Parent(s): e8f94fb

update model card README.md

Files changed (1)
  1. README.md +18 -19
README.md CHANGED
@@ -1,5 +1,4 @@
 ---
-license: apache-2.0
 tags:
 - generated_from_trainer
 model-index:
@@ -12,10 +11,10 @@ should probably proofread and complete it, then remove this comment. -->

 # super_large_finetune_CM01

-This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
+This model was trained from scratch on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.5380
-- Wer: 1.0
+- Loss: 7.2285
+- Wer: 0.7714

 ## Model description

@@ -35,29 +34,29 @@ More information needed

 The following hyperparameters were used during training:
 - learning_rate: 0.0001
-- train_batch_size: 20
+- train_batch_size: 15
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 16065
-- num_epochs: 100
+- lr_scheduler_warmup_steps: 857
+- num_epochs: 50
 - mixed_precision_training: Native AMP

 ### Training results

-| Training Loss | Epoch | Step   | Validation Loss | Wer |
-|:-------------:|:-----:|:------:|:---------------:|:---:|
-| 13.2507       | 10.0  | 32130  | 2.7423          | 1.0 |
-| 2.0325        | 20.0  | 64260  | 2.6040          | 1.0 |
-| 1.9596        | 30.0  | 96390  | 2.5728          | 1.0 |
-| 1.9302        | 40.0  | 128520 | 2.5720          | 1.0 |
-| 1.9144        | 50.0  | 160650 | 2.5551          | 1.0 |
-| 1.9043        | 60.0  | 192780 | 2.5536          | 1.0 |
-| 1.8969        | 70.0  | 224910 | 2.5371          | 1.0 |
-| 1.8927        | 80.0  | 257040 | 2.5431          | 1.0 |
-| 1.8904        | 90.0  | 289170 | 2.5383          | 1.0 |
-| 1.8892        | 100.0 | 321300 | 2.5380          | 1.0 |
+| Training Loss | Epoch | Step  | Validation Loss | Wer    |
+|:-------------:|:-----:|:-----:|:---------------:|:------:|
+| 1.0031        | 5.0   | 1715  | 1.9766          | 0.7857 |
+| 0.2107        | 10.0  | 3430  | 3.8748          | 0.8238 |
+| 0.1393        | 15.0  | 5145  | 4.7403          | 0.7952 |
+| 0.0931        | 20.0  | 6860  | 3.5077          | 0.6667 |
+| 0.0649        | 25.0  | 8575  | 7.7419          | 0.9333 |
+| 0.0592        | 30.0  | 10290 | 5.6440          | 0.7762 |
+| 0.0396        | 35.0  | 12005 | 6.9629          | 0.6810 |
+| 0.03          | 40.0  | 13720 | 7.8282          | 0.7524 |
+| 0.0191        | 45.0  | 15435 | 6.4626          | 0.7429 |
+| 0.0121        | 50.0  | 17150 | 7.2285          | 0.7714 |


 ### Framework versions
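
The headline metric above is Wer (word error rate): the word-level Levenshtein distance between the model's transcript and the reference, divided by the number of reference words. A Trainer run like this one would typically compute it with the `jiwer` or `evaluate` libraries; the standalone sketch below only illustrates the metric itself.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution or match
            )
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0 (perfect match)
print(wer("the cat sat", "the bat"))      # 1 substitution + 1 deletion over 3 words
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is why an untrained or diverged checkpoint (like the old version of this card, with Wer 1.0) can sit at or above 1.0 while the loss still decreases.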