infinitejoy committed
Commit ae07d7b
1 Parent(s): aac3ebd

update model card README.md

Files changed (1)
  1. README.md +45 -32
README.md CHANGED
@@ -1,6 +1,10 @@
 ---
 license: apache-2.0
 tags:
 - generated_from_trainer
 datasets:
 - common_voice
@@ -14,10 +18,10 @@ should probably proofread and complete it, then remove this comment. -->

 # wav2vec2-large-xls-r-300m-hindi

- This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
 It achieves the following results on the evaluation set:
- - Loss: 2.6718
- - Wer: 0.7103

 ## Model description

@@ -36,45 +40,54 @@ More information needed
 ### Training hyperparameters

 The following hyperparameters were used during training:
- - learning_rate: 0.0003
- - train_batch_size: 16
- - eval_batch_size: 8
 - seed: 42
- - gradient_accumulation_steps: 2
- - total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 500
- - num_epochs: 50
 - mixed_precision_training: Native AMP

 ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Wer |
- |:-------------:|:-----:|:----:|:---------------:|:------:|
- | 5.5682 | 2.72 | 400 | 2.1019 | 0.9188 |
- | 0.6506 | 5.44 | 800 | 1.9496 | 0.8048 |
- | 0.3249 | 8.16 | 1200 | 1.8901 | 0.7515 |
- | 0.222 | 10.88 | 1600 | 1.7736 | 0.7115 |
- | 0.171 | 13.6 | 2000 | 2.1061 | 0.7507 |
- | 0.1428 | 16.33 | 2400 | 2.2476 | 0.7412 |
- | 0.1235 | 19.05 | 2800 | 2.3527 | 0.7554 |
- | 0.1076 | 21.77 | 3200 | 2.2145 | 0.7404 |
- | 0.0982 | 24.49 | 3600 | 2.3603 | 0.7327 |
- | 0.0842 | 27.21 | 4000 | 2.4086 | 0.7465 |
- | 0.0732 | 29.93 | 4400 | 2.4182 | 0.7259 |
- | 0.0672 | 32.65 | 4800 | 2.5249 | 0.7315 |
- | 0.0601 | 35.37 | 5200 | 2.5355 | 0.7207 |
- | 0.0534 | 38.09 | 5600 | 2.5170 | 0.7191 |
- | 0.0477 | 40.81 | 6000 | 2.6001 | 0.7064 |
- | 0.0435 | 43.54 | 6400 | 2.7135 | 0.7142 |
- | 0.0374 | 46.26 | 6800 | 2.6552 | 0.7127 |
- | 0.0348 | 48.98 | 7200 | 2.6718 | 0.7103 |

 ### Framework versions

 - Transformers 4.16.0.dev0
- - Pytorch 1.10.0+cu113
 - Datasets 1.17.1.dev0
- - Tokenizers 0.10.3
 
 ---
+ language:
+ - hi
 license: apache-2.0
 tags:
+ - automatic-speech-recognition
+ - mozilla-foundation/common_voice_7_0
 - generated_from_trainer
 datasets:
 - common_voice

 # wav2vec2-large-xls-r-300m-hindi

+ This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - HI dataset.
 It achieves the following results on the evaluation set:
+ - Loss: 0.5414
+ - Wer: 1.0194
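
For readers who want to try the checkpoint, a minimal transcription sketch could look like the one below. The repo id `infinitejoy/wav2vec2-large-xls-r-300m-hindi` is assumed from the card title (it is not stated in this commit), the snippet assumes the repo ships a full `Wav2Vec2Processor`, and Common Voice 7.0 is a gated dataset, hence the auth token.

```python
# Hypothetical usage sketch, not part of this commit.
import torch
from datasets import Audio, load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "infinitejoy/wav2vec2-large-xls-r-300m-hindi"  # assumed repo id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# One Hindi test clip, resampled to the 16 kHz rate XLS-R expects.
ds = load_dataset("mozilla-foundation/common_voice_7_0", "hi", split="test", use_auth_token=True)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
sample = ds[0]["audio"]

inputs = processor(sample["array"], sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```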

 ## Model description

 ### Training hyperparameters

 The following hyperparameters were used during training:
+ - learning_rate: 7.5e-05
+ - train_batch_size: 32
+ - eval_batch_size: 32
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 2000
+ - num_epochs: 100.0
 - mixed_precision_training: Native AMP
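
The hyperparameter list maps onto `transformers.TrainingArguments` roughly as sketched below. This is a reconstruction of the reported values, not the original training script: `output_dir` is a placeholder, and the 500-step evaluation cadence is inferred from the results table that follows.

```python
# Sketch reconstructing the reported hyperparameters; placeholders are marked.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xls-r-300m-hindi",  # placeholder
    learning_rate=7.5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,               # Adam betas/epsilon as reported (library defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    num_train_epochs=100.0,
    fp16=True,                    # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=500,               # inferred from the 500-step cadence in the results table
)
```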

 ### Training results

+ | Training Loss | Epoch | Step | Validation Loss | Wer |
+ |:-------------:|:-----:|:-----:|:---------------:|:------:|
+ | 4.6095 | 3.38 | 500 | 4.5881 | 0.9999 |
+ | 3.3396 | 6.76 | 1000 | 3.3301 | 1.0001 |
+ | 2.0061 | 10.14 | 1500 | 1.2096 | 1.0063 |
+ | 1.523 | 13.51 | 2000 | 0.7836 | 1.0051 |
+ | 1.3868 | 16.89 | 2500 | 0.6837 | 1.0080 |
+ | 1.2807 | 20.27 | 3000 | 0.6568 | 1.0112 |
+ | 1.231 | 23.65 | 3500 | 0.6120 | 1.0105 |
+ | 1.1673 | 27.03 | 4000 | 0.5972 | 1.0089 |
+ | 1.1416 | 30.41 | 4500 | 0.5780 | 1.0132 |
+ | 1.0738 | 33.78 | 5000 | 0.5806 | 1.0123 |
+ | 1.0771 | 37.16 | 5500 | 0.5586 | 1.0067 |
+ | 1.0287 | 40.54 | 6000 | 0.5464 | 1.0058 |
+ | 1.0106 | 43.92 | 6500 | 0.5407 | 1.0062 |
+ | 0.9538 | 47.3 | 7000 | 0.5334 | 1.0089 |
+ | 0.9607 | 50.68 | 7500 | 0.5395 | 1.0110 |
+ | 0.9108 | 54.05 | 8000 | 0.5502 | 1.0137 |
+ | 0.9252 | 57.43 | 8500 | 0.5498 | 1.0062 |
+ | 0.8943 | 60.81 | 9000 | 0.5448 | 1.0158 |
+ | 0.8728 | 64.19 | 9500 | 0.5257 | 1.0113 |
+ | 0.8577 | 67.57 | 10000 | 0.5550 | 1.0178 |
+ | 0.8332 | 70.95 | 10500 | 0.5607 | 1.0166 |
+ | 0.8174 | 74.32 | 11000 | 0.5429 | 1.0145 |
+ | 0.8168 | 77.7 | 11500 | 0.5561 | 1.0116 |
+ | 0.7872 | 81.08 | 12000 | 0.5478 | 1.0164 |
+ | 0.7707 | 84.46 | 12500 | 0.5412 | 1.0216 |
+ | 0.7742 | 87.84 | 13000 | 0.5391 | 1.0207 |
+ | 0.7594 | 91.22 | 13500 | 0.5379 | 1.0208 |
+ | 0.7678 | 94.59 | 14000 | 0.5415 | 1.0198 |
+ | 0.7502 | 97.97 | 14500 | 0.5409 | 1.0191 |
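
The Wer column is word error rate: (substitutions + deletions + insertions) divided by the number of reference words, so values slightly above 1.0 are possible when the model inserts extra words. A figure like those above could be recomputed along the lines below; the exact evaluation script of this run is not part of the commit, so treat this as a generic sketch (the `wer` metric requires the `jiwer` package).

```python
# Generic WER sketch; the strings are illustrative, not from the dataset.
from datasets import load_metric

wer_metric = load_metric("wer")  # backed by the jiwer package

predictions = ["the weather is nice today", "this is a test"]  # model transcriptions
references = ["the weather is nice", "this is a test"]         # ground-truth text

print(wer_metric.compute(predictions=predictions, references=references))
```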

 ### Framework versions

 - Transformers 4.16.0.dev0
+ - Pytorch 1.10.1+cu102
 - Datasets 1.17.1.dev0
+ - Tokenizers 0.11.0