---
language:
- hi
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: ''
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xls-r-300m fine-tuned on Common Voice 7.0 Hindi

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the mozilla-foundation/common_voice_7_0 Hindi (hi) dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2282
- WER: 0.6838
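
For quick transcription, the checkpoint can be loaded with the `transformers` automatic-speech-recognition pipeline. The sketch below makes two assumptions not stated in this card: the Hub model ID (shown as a placeholder) and the audio file name.

```python
from transformers import pipeline

# Placeholder Hub ID: replace with the actual repository name of this checkpoint.
MODEL_ID = "<username>/<this-checkpoint>"

# Load the fine-tuned wav2vec2 model behind the ASR pipeline.
asr = pipeline("automatic-speech-recognition", model=MODEL_ID)

# Transcribe a local audio file (hypothetical name); the model expects 16 kHz speech.
print(asr("sample_hi.wav")["text"])
```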

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 7.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 50.0
- mixed_precision_training: Native AMP
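
For readers reproducing the setup, the list above maps roughly onto the `TrainingArguments` below. This is a hedged reconstruction, not the original training script; the output directory is a placeholder.

```python
from transformers import TrainingArguments

# Approximate TrainingArguments matching the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="./wav2vec2-xls-r-300m-hi",  # placeholder, not from the card
    learning_rate=7.5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 8 x 4 = total train batch size of 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    num_train_epochs=50.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,  # "Native AMP" mixed precision
)
```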

### Training results

| Training Loss | Epoch | Step | Validation Loss | WER    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.3155        | 3.4   | 500  | 4.5582          | 1.0    |
| 3.3369        | 6.8   | 1000 | 3.4269          | 1.0    |
| 2.1785        | 10.2  | 1500 | 1.7191          | 0.8831 |
| 1.579         | 13.6  | 2000 | 1.3604          | 0.7647 |
| 1.3773        | 17.01 | 2500 | 1.2737          | 0.7519 |
| 1.3165        | 20.41 | 3000 | 1.2457          | 0.7401 |
| 1.2274        | 23.81 | 3500 | 1.3617          | 0.7301 |
| 1.1787        | 27.21 | 4000 | 1.2068          | 0.7010 |
| 1.1467        | 30.61 | 4500 | 1.2416          | 0.6946 |
| 1.0801        | 34.01 | 5000 | 1.2312          | 0.6990 |
| 1.0709        | 37.41 | 5500 | 1.2984          | 0.7138 |
| 1.0307        | 40.81 | 6000 | 1.2049          | 0.6871 |
| 1.0003        | 44.22 | 6500 | 1.1956          | 0.6841 |
| 1.004         | 47.62 | 7000 | 1.2101          | 0.6793 |
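
The validation WER above is computed by the Trainer's evaluation loop; to recompute it offline for a set of decoded transcripts, one option (not confirmed by this card) is the WER metric shipped with the `datasets` version listed below, which wraps jiwer.

```python
from datasets import load_metric

# Word error rate metric from the datasets library (backed by jiwer).
wer_metric = load_metric("wer")

# Hypothetical decoded predictions and reference transcripts.
predictions = ["नमस्ते दुनिया"]
references = ["नमस्ते आप कैसे हैं"]

print(wer_metric.compute(predictions=predictions, references=references))
```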

### Framework versions

- Transformers 4.16.0.dev0
- PyTorch 1.10.1+cu113
- Datasets 1.18.1.dev0
- Tokenizers 0.11.0