Amiran13 committed on
Commit
318bafb
1 Parent(s): 9e3b139

Model save

Files changed (1)
  1. README.md +13 -3
README.md CHANGED
README.md CHANGED

```diff
@@ -14,6 +14,14 @@ should probably proofread and complete it, then remove this comment. -->
 # wav2vec2-large-xlsr-georgian-demo
 
 This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
+It achieves the following results on the evaluation set:
+- eval_loss: 0.1430
+- eval_wer: 0.2912
+- eval_runtime: 1147.4753
+- eval_samples_per_second: 10.181
+- eval_steps_per_second: 0.637
+- epoch: 20.39
+- step: 37200
 
 ## Model description
 
@@ -32,14 +40,16 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.003
-- train_batch_size: 32
+- learning_rate: 5e-05
+- train_batch_size: 16
 - eval_batch_size: 16
 - seed: 42
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 64
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 200
-- num_epochs: 10
+- num_epochs: 22
 
 ### Framework versions
```
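The added `total_train_batch_size: 64` line in the hyperparameters is not an independent setting: under gradient accumulation, gradients from several small forward/backward passes are summed before each optimizer step, so the effective batch size is the per-device batch size times the accumulation steps. A quick sanity check of the values in this diff:

```python
# Hyperparameters from the updated model card.
train_batch_size = 16
gradient_accumulation_steps = 4

# Effective batch size seen by each optimizer step: gradient accumulation
# trades memory (small per-device batches) for a larger effective batch.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64, matching total_train_batch_size in the card
```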
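The reported `eval_wer: 0.2912` is a word error rate: roughly 29% of reference words are wrong, counting substitutions, insertions, and deletions via word-level edit distance divided by reference length. Model cards like this are typically scored with a library such as `jiwer`; the self-contained sketch below only illustrates the standard definition and is not the exact evaluation code used here:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # match / substitution
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution out of three reference words -> WER of 1/3.
print(wer("the cat sat", "the cat sit"))
```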