Doogie committed
Commit 95cfbe8
1 Parent(s): f2130b6

update model card README.md

Files changed (1)
  1. README.md +67 -39
README.md CHANGED
@@ -11,10 +11,10 @@ should probably proofread and complete it, then remove this comment. -->

  # Waynehills-STT-doogie-server

- This model is a fine-tuned version of [Doogie/Waynehills-STT-doogie](https://huggingface.co/Doogie/Waynehills-STT-doogie) on an unknown dataset.
+ This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 10.3564
- - Wer: 1.0405
+ - Loss: 3.9322
+ - Wer: 1.0368

  ## Model description

@@ -34,54 +34,82 @@ More information needed

  The following hyperparameters were used during training:
  - learning_rate: 0.0001
- - train_batch_size: 4
+ - train_batch_size: 8
  - eval_batch_size: 8
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 1000
- - num_epochs: 60
+ - num_epochs: 30

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Wer |
- |:-------------:|:-----:|:-----:|:---------------:|:------:|
- | 4.6722 | 1.92 | 1000 | 5.5301 | 1.0 |
- | 4.3024 | 3.84 | 2000 | 6.4368 | 1.0 |
- | 3.8135 | 5.76 | 3000 | 6.9063 | 1.0 |
- | 3.4163 | 7.68 | 4000 | 6.9737 | 1.0018 |
- | 3.1162 | 9.6 | 5000 | 7.1260 | 1.0027 |
- | 2.8724 | 11.52 | 6000 | 7.2143 | 1.0009 |
- | 2.6694 | 13.44 | 7000 | 7.4370 | 1.0050 |
- | 2.4808 | 15.36 | 8000 | 7.9850 | 1.0090 |
- | 2.2994 | 17.27 | 9000 | 8.1296 | 1.0198 |
- | 2.1436 | 19.19 | 10000 | 8.1327 | 1.0081 |
- | 2.0331 | 21.11 | 11000 | 8.2656 | 1.0135 |
- | 1.9278 | 23.03 | 12000 | 8.5640 | 1.0176 |
- | 1.8417 | 24.95 | 13000 | 8.5057 | 1.0212 |
- | 1.7496 | 26.87 | 14000 | 8.8110 | 1.0207 |
- | 1.6494 | 28.79 | 15000 | 9.0795 | 1.0306 |
- | 1.5882 | 30.71 | 16000 | 9.1341 | 1.0338 |
- | 1.5279 | 32.63 | 17000 | 9.2713 | 1.0284 |
- | 1.4712 | 34.55 | 18000 | 9.3591 | 1.0333 |
- | 1.4065 | 36.47 | 19000 | 9.4739 | 1.0293 |
- | 1.3637 | 38.39 | 20000 | 9.6498 | 1.0351 |
- | 1.3024 | 40.31 | 21000 | 9.7285 | 1.0365 |
- | 1.2737 | 42.23 | 22000 | 9.7353 | 1.0329 |
- | 1.2459 | 44.15 | 23000 | 10.0423 | 1.0374 |
- | 1.2079 | 46.07 | 24000 | 10.1164 | 1.0419 |
- | 1.1791 | 47.98 | 25000 | 10.1437 | 1.0437 |
- | 1.1593 | 49.9 | 26000 | 10.2292 | 1.0446 |
- | 1.1512 | 51.82 | 27000 | 10.2338 | 1.0405 |
- | 1.1041 | 53.74 | 28000 | 10.3070 | 1.0459 |
- | 1.1064 | 55.66 | 29000 | 10.3700 | 1.0419 |
- | 1.0783 | 57.58 | 30000 | 10.3950 | 1.0455 |
- | 1.0762 | 59.5 | 31000 | 10.3564 | 1.0405 |
+ | Training Loss | Epoch | Step | Validation Loss | Wer |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|
+ | 1.8987 | 0.51 | 100 | 3.9322 | 1.0368 |
+ | 1.9171 | 1.01 | 200 | 3.9322 | 1.0368 |
+ | 1.9058 | 1.52 | 300 | 3.9322 | 1.0368 |
+ | 1.9037 | 2.02 | 400 | 3.9322 | 1.0368 |
+ | 1.9079 | 2.53 | 500 | 3.9322 | 1.0368 |
+ | 1.8788 | 3.03 | 600 | 3.9322 | 1.0368 |
+ | 1.8973 | 3.54 | 700 | 3.9322 | 1.0368 |
+ | 1.9031 | 4.04 | 800 | 3.9322 | 1.0368 |
+ | 1.8966 | 4.55 | 900 | 3.9322 | 1.0368 |
+ | 1.9092 | 5.05 | 1000 | 3.9322 | 1.0368 |
+ | 1.9158 | 5.56 | 1100 | 3.9322 | 1.0368 |
+ | 1.89 | 6.06 | 1200 | 3.9322 | 1.0368 |
+ | 1.916 | 6.57 | 1300 | 3.9322 | 1.0368 |
+ | 1.8684 | 7.07 | 1400 | 3.9322 | 1.0368 |
+ | 1.8885 | 7.58 | 1500 | 3.9322 | 1.0368 |
+ | 1.9335 | 8.08 | 1600 | 3.9322 | 1.0368 |
+ | 1.9112 | 8.59 | 1700 | 3.9322 | 1.0368 |
+ | 1.8794 | 9.09 | 1800 | 3.9322 | 1.0368 |
+ | 1.9062 | 9.6 | 1900 | 3.9322 | 1.0368 |
+ | 1.9048 | 10.1 | 2000 | 3.9322 | 1.0368 |
+ | 1.917 | 10.61 | 2100 | 3.9322 | 1.0368 |
+ | 1.8809 | 11.11 | 2200 | 3.9322 | 1.0368 |
+ | 1.9101 | 11.62 | 2300 | 3.9322 | 1.0368 |
+ | 1.8867 | 12.12 | 2400 | 3.9322 | 1.0368 |
+ | 1.9188 | 12.63 | 2500 | 3.9322 | 1.0368 |
+ | 1.8933 | 13.13 | 2600 | 3.9322 | 1.0368 |
+ | 1.8846 | 13.64 | 2700 | 3.9322 | 1.0368 |
+ | 1.9327 | 14.14 | 2800 | 3.9322 | 1.0368 |
+ | 1.9041 | 14.65 | 2900 | 3.9322 | 1.0368 |
+ | 1.8733 | 15.15 | 3000 | 3.9322 | 1.0368 |
+ | 1.9246 | 15.66 | 3100 | 3.9322 | 1.0368 |
+ | 1.8925 | 16.16 | 3200 | 3.9322 | 1.0368 |
+ | 1.9066 | 16.67 | 3300 | 3.9322 | 1.0368 |
+ | 1.8991 | 17.17 | 3400 | 3.9322 | 1.0368 |
+ | 1.899 | 17.68 | 3500 | 3.9322 | 1.0368 |
+ | 1.9003 | 18.18 | 3600 | 3.9322 | 1.0368 |
+ | 1.9131 | 18.69 | 3700 | 3.9322 | 1.0368 |
+ | 1.9141 | 19.19 | 3800 | 3.9322 | 1.0368 |
+ | 1.9074 | 19.7 | 3900 | 3.9322 | 1.0368 |
+ | 1.9308 | 20.2 | 4000 | 3.9322 | 1.0368 |
+ | 1.876 | 20.71 | 4100 | 3.9322 | 1.0368 |
+ | 1.9263 | 21.21 | 4200 | 3.9322 | 1.0368 |
+ | 1.8956 | 21.72 | 4300 | 3.9322 | 1.0368 |
+ | 1.9114 | 22.22 | 4400 | 3.9322 | 1.0368 |
+ | 1.9189 | 22.73 | 4500 | 3.9322 | 1.0368 |
+ | 1.889 | 23.23 | 4600 | 3.9322 | 1.0368 |
+ | 1.9065 | 23.74 | 4700 | 3.9322 | 1.0368 |
+ | 1.9151 | 24.24 | 4800 | 3.9322 | 1.0368 |
+ | 1.9059 | 24.75 | 4900 | 3.9322 | 1.0368 |
+ | 1.8875 | 25.25 | 5000 | 3.9322 | 1.0368 |
+ | 1.9123 | 25.76 | 5100 | 3.9322 | 1.0368 |
+ | 1.9008 | 26.26 | 5200 | 3.9322 | 1.0368 |
+ | 1.9128 | 26.77 | 5300 | 3.9322 | 1.0368 |
+ | 1.9026 | 27.27 | 5400 | 3.9322 | 1.0368 |
+ | 1.8901 | 27.78 | 5500 | 3.9322 | 1.0368 |
+ | 1.9108 | 28.28 | 5600 | 3.9322 | 1.0368 |
+ | 1.9004 | 28.79 | 5700 | 3.9322 | 1.0368 |
+ | 1.9199 | 29.29 | 5800 | 3.9322 | 1.0368 |
+ | 1.8783 | 29.8 | 5900 | 3.9322 | 1.0368 |


  ### Framework versions

  - Transformers 4.12.5
  - Pytorch 1.10.0+cu113
- - Datasets 1.16.1
+ - Datasets 1.17.0
  - Tokenizers 0.10.3
 
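
The hyperparameter list in the updated card maps onto a `transformers` `TrainingArguments` configuration roughly as sketched below. The `output_dir` value, the evaluation/logging cadence, and the surrounding `Trainer` wiring are assumptions for illustration; the card does not include the actual training script, and the Adam betas/epsilon and linear scheduler shown are the library defaults.

```python
# Sketch of a TrainingArguments setup matching the hyperparameters in the
# updated card (transformers 4.12.x). output_dir is a placeholder; fp16,
# gradient accumulation, and data collation are not documented in the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./waynehills-stt-doogie-server",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=30,
    warmup_steps=1000,
    lr_scheduler_type="linear",   # library default, matching the card
    adam_beta1=0.9,               # Adam defaults, matching the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    evaluation_strategy="steps",  # assumption: eval every 100 steps, per the results table
    eval_steps=100,
    logging_steps=100,
)
```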
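Since the updated card lists [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) as the base model and reports WER, the checkpoint is presumably a CTC speech-to-text model; a minimal inference sketch under that assumption follows. The repo id `Doogie/Waynehills-STT-doogie-server` and the 16 kHz sampling rate are assumptions, and the snippet presumes the repository ships the matching processor/tokenizer files.

```python
# Minimal CTC transcription sketch, assuming the checkpoint is a standard
# Wav2Vec2ForCTC fine-tune of facebook/wav2vec2-xls-r-300m and that the repo
# "Doogie/Waynehills-STT-doogie-server" contains the matching processor files.
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Doogie/Waynehills-STT-doogie-server"  # assumed repo id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# `speech` should be a 1-D float array of raw audio; 16 kHz is assumed here
# because that is the rate wav2vec 2.0 / XLS-R models are pretrained on.
speech = torch.zeros(16000)  # one second of silence as a stand-in waveform
inputs = processor(speech.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits  # (batch, time, vocab)

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))  # greedy CTC decoding
```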