DrishtiSharma committed 499beb2 (1 parent: 890d52b)

Create README.md

Files changed (1): README.md (+102)
---
language:
- as
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- as
- robust-speech-event
- model_for_talk
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-as-with-LM-v2
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 8
      type: mozilla-foundation/common_voice_8_0
      args: as
    metrics:
    - name: Test WER
      type: wer
      value: []
    - name: Test CER
      type: cer
      value: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xls-r-300m-as-v9

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1679
- Wer: 0.5761
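
For a quick sanity check, the model can be loaded through the Hugging Face Transformers `pipeline` API for automatic speech recognition. The snippet below is a minimal sketch: the repo id is assumed from the model-index name in the metadata above and may differ from the actual checkpoint location, and `sample.wav` is a placeholder path.

```python
# Minimal inference sketch. The repo id is assumed from the model-index
# name in the metadata and may not match the actual checkpoint location.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DrishtiSharma/wav2vec2-large-xls-r-300m-as-with-LM-v2",  # assumed repo id
)

# "sample.wav" is a placeholder for a 16 kHz mono Assamese recording.
print(asr("sample.wav")["text"])
```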

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
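
The metadata above points at the Assamese (`as`) split of Common Voice 8. Below is a minimal sketch of loading that split with the `datasets` library; the `as` config name and the auth-token requirement are assumptions based on the dataset id in the tags, not details taken from this card.

```python
# Sketch of loading the Assamese test split of Common Voice 8.
# The dataset is gated on the Hub, so an auth token is assumed.
from datasets import load_dataset

common_voice_test = load_dataset(
    "mozilla-foundation/common_voice_8_0",
    "as",                 # Assamese config, matching the language tag
    split="test",
    use_auth_token=True,  # requires accepting the dataset terms on the Hub
)
print(common_voice_test[0]["sentence"])
```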

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.000111
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 200
- mixed_precision_training: Native AMP
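
As a rough guide, these settings map onto Hugging Face `TrainingArguments` as sketched below. This is not the exact fine-tuning script used for this run, and the `output_dir` value is a placeholder.

```python
# Approximate mapping of the hyperparameters above onto TrainingArguments.
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the library defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-as-v9",  # placeholder
    learning_rate=0.000111,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 16 x 2 = 32 total train batch size
    lr_scheduler_type="linear",
    warmup_steps=300,
    num_train_epochs=200,
    fp16=True,                      # "Native AMP" mixed precision
)
```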

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 8.3852        | 10.51  | 200  | 3.6402          | 1.0    |
| 3.5374        | 21.05  | 400  | 3.3894          | 1.0    |
| 2.8645        | 31.56  | 600  | 1.3143          | 0.8303 |
| 1.1784        | 42.1   | 800  | 0.9417          | 0.6661 |
| 0.7805        | 52.62  | 1000 | 0.9292          | 0.6237 |
| 0.5973        | 63.15  | 1200 | 0.9489          | 0.6014 |
| 0.4784        | 73.67  | 1400 | 0.9916          | 0.5962 |
| 0.4138        | 84.21  | 1600 | 1.0272          | 0.6121 |
| 0.3491        | 94.72  | 1800 | 1.0412          | 0.5984 |
| 0.3062        | 105.26 | 2000 | 1.0769          | 0.6005 |
| 0.2707        | 115.77 | 2200 | 1.0708          | 0.5752 |
| 0.2459        | 126.31 | 2400 | 1.1285          | 0.6009 |
| 0.2234        | 136.82 | 2600 | 1.1209          | 0.5949 |
| 0.2035        | 147.36 | 2800 | 1.1348          | 0.5842 |
| 0.1876        | 157.87 | 3000 | 1.1480          | 0.5872 |
| 0.1669        | 168.41 | 3200 | 1.1496          | 0.5838 |
| 0.1595        | 178.92 | 3400 | 1.1721          | 0.5778 |
| 0.1505        | 189.46 | 3600 | 1.1654          | 0.5744 |
| 0.1486        | 199.97 | 3800 | 1.1679          | 0.5761 |
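
Wer in the table is the word error rate on the evaluation set. A minimal sketch of computing it with the `datasets` metric API (matching the Datasets 1.18.2 version listed below) follows; the prediction and reference strings are placeholders, not outputs from this model.

```python
# Hedged sketch: compute word error rate for a batch of transcriptions.
# load_metric("wer") requires the jiwer package to be installed.
from datasets import load_metric

wer_metric = load_metric("wer")
predictions = ["transcribed hypothesis text"]   # placeholder model outputs
references = ["ground truth transcript text"]   # placeholder labels
print(wer_metric.compute(predictions=predictions, references=references))
```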

### Framework versions

- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0