Steven Liu committed
Commit a6c3b2c
1 Parent(s): ba715ff

update model card README.md

Files changed (1): README.md (+14 -13)
README.md CHANGED
@@ -16,8 +16,8 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.8540
- - Wer: 1.0
+ - Loss: 2.8796
+ - Wer: 0.9833
 
  ## Model description
 
@@ -36,30 +36,31 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 0.0001
+ - learning_rate: 1e-05
  - train_batch_size: 8
  - eval_batch_size: 8
  - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 16
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 5
+ - lr_scheduler_warmup_steps: 500
+ - training_steps: 4000
  - mixed_precision_training: Native AMP
 
  ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Wer |
- |:-------------:|:-----:|:----:|:---------------:|:---:|
- | 2.6816        | 1.0   | 54   | 2.2369          | 1.0 |
- | 1.1823        | 2.0   | 108  | 1.0223          | 1.0 |
- | 1.0363        | 3.0   | 162  | 0.9690          | 1.0 |
- | 0.896         | 4.0   | 216  | 0.9106          | 1.0 |
- | 0.8138        | 5.0   | 270  | 0.8540          | 1.0 |
+ | Training Loss | Epoch | Step | Validation Loss | Wer    |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|
+ | 2.8346        | 200.0 | 1000 | 2.9858          | 1.0    |
+ | 2.4286        | 400.0 | 2000 | 2.8819          | 0.9958 |
+ | 2.133         | 600.0 | 3000 | 2.9096          | 0.9792 |
+ | 2.0029        | 800.0 | 4000 | 2.8796          | 0.9833 |
 
 
  ### Framework versions
 
  - Transformers 4.25.0.dev0
  - Pytorch 1.12.1+cu113
- - Datasets 2.6.1
+ - Datasets 2.7.0
  - Tokenizers 0.13.2
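For context on the scheduler change (`lr_scheduler_warmup_steps: 500` over `training_steps: 4000`), a linear schedule ramps the learning rate up during warmup and then decays it linearly to zero. A minimal pure-Python sketch; the standalone function (and its name) is illustrative, not the `transformers` API used in training:

```python
def linear_warmup_lr(step, base_lr=1e-05, warmup_steps=500, training_steps=4000):
    """Linear warmup to base_lr, then linear decay to 0 (illustrative sketch)."""
    if step < warmup_steps:
        # ramp up from 0 to base_lr over the warmup phase
        return base_lr * step / warmup_steps
    # decay linearly from base_lr at the end of warmup to 0 at training_steps
    return base_lr * max(0.0, (training_steps - step) / (training_steps - warmup_steps))
```

With the card's values, the rate peaks at 1e-05 exactly at step 500 and reaches zero at step 4000, the final step in the results table.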
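The `Wer` column in the results table is word error rate: the word-level edit distance between hypothesis and reference transcripts, divided by the reference length. A minimal sketch assuming whitespace tokenization; the `wer` helper here is ours for illustration, not the metric implementation used in training:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between the processed ref prefix and hyp[:j]
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (r != h)))   # substitution (0 if match)
        prev = cur
    return prev[-1] / len(ref)
```

A WER of 1.0, as in the first evaluation row, means the edit distance equals the reference length, i.e. the model is effectively transcribing nothing useful yet.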