DylanonWic committed on
Commit
908a208
1 Parent(s): dc4d414

update model card README.md

Files changed (1): README.md (+15 -5)
README.md CHANGED

```diff
@@ -1,4 +1,5 @@
 ---
+license: apache-2.0
 tags:
 - generated_from_trainer
 model-index:
@@ -11,7 +12,16 @@ should probably proofread and complete it, then remove this comment. -->
 
 # wav2vec2-large-asr-th-2
 
-This model was trained from scratch on the None dataset.
+This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
+It achieves the following results on the evaluation set:
+- eval_loss: 0.3239
+- eval_wer: 0.4763
+- eval_cer: 0.1331
+- eval_runtime: 749.146
+- eval_samples_per_second: 13.562
+- eval_steps_per_second: 1.695
+- epoch: 0.95
+- step: 4000
 
 ## Model description
 
@@ -34,12 +44,12 @@ The following hyperparameters were used during training:
 - train_batch_size: 16
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 2
-- total_train_batch_size: 32
+- gradient_accumulation_steps: 3
+- total_train_batch_size: 48
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 200
-- training_steps: 4000
+- lr_scheduler_warmup_steps: 800
+- training_steps: 5000
 - mixed_precision_training: Native AMP
 
 ### Framework versions
```
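The hyperparameter change is internally consistent: with the Hugging Face Trainer, the reported `total_train_batch_size` is the per-device batch size times the gradient accumulation steps (times the number of devices). A minimal arithmetic sketch, assuming a single device as the card does not state otherwise:

```python
# Effective batch size as reported by the Hugging Face Trainer:
# total = per-device batch size * gradient accumulation steps * num devices.
# num_devices = 1 is an assumption; the card does not state the device count.
train_batch_size = 16
gradient_accumulation_steps = 3
num_devices = 1

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 48, matching the new total_train_batch_size
```

This is why bumping `gradient_accumulation_steps` from 2 to 3 moves the reported total from 32 to 48 without changing per-device memory use.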
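The new eval metrics report both WER (word error rate) and CER (character error rate). As a rough sketch of what these numbers mean, both are the Levenshtein edit distance between reference and hypothesis, normalized by the reference length, computed over words for WER and over characters for CER. A minimal illustrative implementation (not the evaluation code used for this card, which likely relies on a library such as `jiwer` or `evaluate`):

```python
def edit_distance(ref, hyp):
    """Classic dynamic-programming Levenshtein distance over two sequences."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # cost of deleting all of ref[:i]
    for j in range(n + 1):
        d[0][j] = j  # cost of inserting all of hyp[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + sub)  # match / substitution
    return d[m][n]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance over reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: character-level edit distance over reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)

print(wer("the cat sat", "the bat sat"))  # 0.333... (1 substitution / 3 words)
```

So the card's eval_wer of 0.4763 alongside an eval_cer of 0.1331 indicates that while nearly half the words contain some error, most characters within each word are transcribed correctly, a common pattern for ASR on languages with long or compound words.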