Nithiwat committed on
Commit cff20dc
1 Parent(s): ab7e46d

update model card README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED

@@ -12,7 +12,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  # wav2vec2-colab
 
- This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
+ This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on the None dataset.
 
  ## Model description
 
@@ -31,12 +31,12 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - learning_rate: 0.0003
- - train_batch_size: 24
+ - learning_rate: 5e-06
+ - train_batch_size: 16
  - eval_batch_size: 8
  - seed: 42
  - gradient_accumulation_steps: 2
- - total_train_batch_size: 48
+ - total_train_batch_size: 32
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 500
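For reference, a minimal sketch of how the updated hyperparameters could be expressed with the Hugging Face `transformers` `TrainingArguments`; the `output_dir` name is a hypothetical placeholder, and only the values listed in the card above are taken from the diff:

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the card's hyperparameters.
# total_train_batch_size (32) follows from
# per_device_train_batch_size (16) * gradient_accumulation_steps (2).
training_args = TrainingArguments(
    output_dir="wav2vec2-colab",       # hypothetical output directory
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
)
```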