zasheza committed
Commit 180c977
1 Parent(s): 8d2c223

update model card README.md

Files changed (1):
  1. README.md +14 -12
README.md CHANGED
@@ -12,10 +12,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # wav2vec2-base-timit-demo-colab-1
 
-This model is a fine-tuned version of [zasheza/wav2vec2-base-timit-demo-colab-1](https://huggingface.co/zasheza/wav2vec2-base-timit-demo-colab-1) on the None dataset.
+This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.8811
-- Wer: 0.4169
+- Loss: 0.9634
+- Wer: 0.4398
 
 ## Model description
 
@@ -34,27 +34,29 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0001
+- learning_rate: 0.0002
 - train_batch_size: 6
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 800
-- num_epochs: 40
+- num_epochs: 50
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer    |
 |:-------------:|:-----:|:----:|:---------------:|:------:|
-| 0.161         | 5.26  | 500  | 0.7465          | 0.4496 |
-| 0.1852        | 10.53 | 1000 | 0.8108          | 0.4739 |
-| 0.1457        | 15.79 | 1500 | 0.9073          | 0.4600 |
-| 0.1073        | 21.05 | 2000 | 0.8817          | 0.4486 |
-| 0.085         | 26.32 | 2500 | 0.9262          | 0.4442 |
-| 0.0753        | 31.58 | 3000 | 0.8838          | 0.4337 |
-| 0.0647        | 36.84 | 3500 | 0.8811          | 0.4169 |
+| 4.8991        | 5.26  | 500  | 1.4319          | 0.7522 |
+| 0.8555        | 10.53 | 1000 | 0.7895          | 0.5818 |
+| 0.4584        | 15.79 | 1500 | 0.7198          | 0.5211 |
+| 0.3096        | 21.05 | 2000 | 0.7983          | 0.5118 |
+| 0.2165        | 26.32 | 2500 | 0.7893          | 0.4745 |
+| 0.163         | 31.58 | 3000 | 0.8779          | 0.4589 |
+| 0.1144        | 36.84 | 3500 | 0.9256          | 0.4540 |
+| 0.0886        | 42.11 | 4000 | 0.9184          | 0.4530 |
+| 0.0668        | 47.37 | 4500 | 0.9634          | 0.4398 |
 
 
 ### Framework versions
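
For context, here is a minimal sketch of how the updated hyperparameters above map onto `transformers` `TrainingArguments`. Only the numeric settings come from the card; the `output_dir` is a hypothetical placeholder, and the dataset and `Trainer` wiring are omitted, so treat this as an illustration of the listed settings rather than the author's actual training script.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-base-timit-demo-colab-1",  # hypothetical path
    per_device_train_batch_size=6,   # train_batch_size: 6
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    learning_rate=2e-4,              # learning_rate: 0.0002
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # epsilon: 1e-08
    lr_scheduler_type="linear",      # lr_scheduler_type: linear
    warmup_steps=800,                # lr_scheduler_warmup_steps: 800
    num_train_epochs=50,             # num_epochs: 50
    seed=42,                         # seed: 42
    fp16=True,                       # mixed_precision_training: Native AMP
)
```

Note that `fp16=True` is the `Trainer` flag for native AMP mixed-precision training, matching the card's `mixed_precision_training: Native AMP` entry.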
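
The card reports a final evaluation Loss of 0.9634 and WER of 0.4398 (the step-4500 row of the table) but no usage snippet. Below is a minimal inference sketch, assuming the repo linked above ships a `Wav2Vec2Processor` alongside the CTC weights; the audio path and reference transcript are hypothetical placeholders, and `jiwer` is just one common way to compute WER.

```python
import torch
import librosa
from jiwer import wer
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "zasheza/wav2vec2-base-timit-demo-colab-1"
processor = Wav2Vec2Processor.from_pretrained(model_id)  # assumes processor files are in the repo
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Load a 16 kHz mono clip; "sample.wav" is a hypothetical path.
speech, _ = librosa.load("sample.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the argmax token at each frame, then collapse.
pred_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(pred_ids)[0]
print(transcription)

# Word error rate against a (hypothetical) reference transcript.
print(wer("your reference transcript here", transcription.lower()))
```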