huyue012 committed on
Commit
eb990c5
1 Parent(s): 6afc2d1

update model card README.md

Files changed (1): README.md (+36 −15)
README.md CHANGED
```diff
@@ -14,8 +14,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.4607
-- Wer: 0.1903
+- Loss: 0.4888
+- Wer: 0.3315
 
 ## Model description
 
@@ -35,7 +35,7 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0001
-- train_batch_size: 32
+- train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
@@ -46,21 +46,42 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Wer |
-|:-------------:|:-----:|:----:|:---------------:|:------:|
-| 0.0689 | 3.45 | 500 | 0.3667 | 0.1970 |
-| 0.0758 | 6.9 | 1000 | 0.4734 | 0.2154 |
-| 0.0581 | 10.34 | 1500 | 0.4375 | 0.2179 |
-| 0.0468 | 13.79 | 2000 | 0.4866 | 0.2112 |
-| 0.0408 | 17.24 | 2500 | 0.5170 | 0.2043 |
-| 0.0365 | 20.69 | 3000 | 0.5125 | 0.1987 |
-| 0.0323 | 24.14 | 3500 | 0.4754 | 0.2008 |
-| 0.0297 | 27.59 | 4000 | 0.4607 | 0.1903 |
+| Training Loss | Epoch | Step | Validation Loss | Wer |
+|:-------------:|:-----:|:-----:|:---------------:|:------:|
+| 3.7674 | 1.0 | 500 | 2.8994 | 1.0 |
+| 1.3538 | 2.01 | 1000 | 0.5623 | 0.5630 |
+| 0.5416 | 3.01 | 1500 | 0.4595 | 0.4765 |
+| 0.3563 | 4.02 | 2000 | 0.4435 | 0.4328 |
+| 0.2869 | 5.02 | 2500 | 0.4035 | 0.4145 |
+| 0.2536 | 6.02 | 3000 | 0.4090 | 0.3945 |
+| 0.2072 | 7.03 | 3500 | 0.4188 | 0.3809 |
+| 0.1825 | 8.03 | 4000 | 0.4139 | 0.3865 |
+| 0.1754 | 9.04 | 4500 | 0.4320 | 0.3763 |
+| 0.1477 | 10.04 | 5000 | 0.4668 | 0.3699 |
+| 0.1418 | 11.04 | 5500 | 0.4439 | 0.3683 |
+| 0.1207 | 12.05 | 6000 | 0.4419 | 0.3678 |
+| 0.115 | 13.05 | 6500 | 0.4606 | 0.3786 |
+| 0.1022 | 14.06 | 7000 | 0.4403 | 0.3610 |
+| 0.1019 | 15.06 | 7500 | 0.4966 | 0.3609 |
+| 0.0898 | 16.06 | 8000 | 0.4675 | 0.3586 |
+| 0.0824 | 17.07 | 8500 | 0.4844 | 0.3583 |
+| 0.0737 | 18.07 | 9000 | 0.4801 | 0.3534 |
+| 0.076 | 19.08 | 9500 | 0.4945 | 0.3529 |
+| 0.0627 | 20.08 | 10000 | 0.4700 | 0.3417 |
+| 0.0723 | 21.08 | 10500 | 0.4630 | 0.3449 |
+| 0.0597 | 22.09 | 11000 | 0.5164 | 0.3456 |
+| 0.0566 | 23.09 | 11500 | 0.4957 | 0.3401 |
+| 0.0453 | 24.1 | 12000 | 0.5032 | 0.3419 |
+| 0.0492 | 25.1 | 12500 | 0.5391 | 0.3387 |
+| 0.0524 | 26.1 | 13000 | 0.5057 | 0.3348 |
+| 0.0381 | 27.11 | 13500 | 0.5098 | 0.3331 |
+| 0.0402 | 28.11 | 14000 | 0.5087 | 0.3353 |
+| 0.0358 | 29.12 | 14500 | 0.4888 | 0.3315 |
 
 
 ### Framework versions
 
 - Transformers 4.11.3
-- Pytorch 1.10.0+cu102
-- Datasets 1.14.0
+- Pytorch 1.10.0+cu111
+- Datasets 1.13.3
 - Tokenizers 0.10.3
```
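
The hyperparameters listed in the updated card map onto a standard `transformers` `TrainingArguments` configuration. A minimal sketch of that mapping, assuming `transformers` is installed — the output directory is hypothetical, and the epoch count is an assumption read off the results table (final row at epoch 29.12, suggesting ~30 epochs), since the diff hunk cuts off before the `num_epochs` line:

```python
from transformers import TrainingArguments

# Sketch only: output_dir is a hypothetical placeholder, and
# num_train_epochs is inferred from the results table, not stated in the diff.
args = TrainingArguments(
    output_dir="./wav2vec2-base-finetune",  # hypothetical path
    learning_rate=1e-4,                     # learning_rate: 0.0001
    per_device_train_batch_size=8,          # train_batch_size: 8
    per_device_eval_batch_size=8,           # eval_batch_size: 8
    seed=42,
    adam_beta1=0.9,                         # optimizer: Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                      # epsilon: 1e-08
    num_train_epochs=30,                    # assumed from the table's last epoch
)
```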
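
The card's Wer column is the word error rate: the word-level edit distance between hypothesis and reference, divided by the number of reference words. As a reference for how that metric is computed (a self-contained sketch, not the evaluation code used for this run):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```

So a Wer of 0.3315 means roughly one word-level error (substitution, insertion, or deletion) per three reference words.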