kingabzpro committed on
Commit
d49441f
1 Parent(s): 6104b4a

update model card README.md

Files changed (1)
  1. README.md +17 -49
README.md CHANGED
@@ -1,41 +1,11 @@
 ---
-language:
-- ur
-
-license: apache-2.0
 tags:
-- automatic-speech-recognition
-- robust-speech-event
+- generated_from_trainer
 datasets:
 - common_voice
-metrics:
-- wer
 model-index:
 - name: wav2vec2-large-xlsr-53-urdu
-  results:
-  - task:
-      type: automatic-speech-recognition # Required. Example: automatic-speech-recognition
-      name: Urdu Speech Recognition # Optional. Example: Speech Recognition
-    dataset:
-      type: common_voice # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
-      name: Urdu # Required. Example: Common Voice zh-CN
-      args: ur # Optional. Example: zh-CN
-    metrics:
-    - type: wer # Required. Example: wer
-      value: 100 # Required. Example: 20.90
-      name: Test WER # Optional. Example: Test WER
-      args:
-      - learning_rate: 0.0003
-      - train_batch_size: 16
-      - eval_batch_size: 8
-      - seed: 42
-      - gradient_accumulation_steps: 2
-      - total_train_batch_size: 32
-      - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-      - lr_scheduler_type: linear
-      - lr_scheduler_warmup_steps: 10
-      - num_epochs: 30
-      - mixed_precision_training: Native AMP # Optional. Example for BLEU: max_order
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -43,10 +13,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # wav2vec2-large-xlsr-53-urdu
 
-This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
+This model is a fine-tuned version of [m3hrdadfi/wav2vec2-large-xlsr-persian-v3](https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-persian-v3) on the common_voice dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.6772
-- Wer: 1.0
+- Loss: 1.5727
+- Wer: 0.6620
+- Cer: 0.3166
 
 ## Model description
 
@@ -73,28 +44,25 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 10
-- num_epochs: 30
+- lr_scheduler_warmup_steps: 200
+- num_epochs: 50
 - mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Wer |
-|:-------------:|:-----:|:----:|:---------------:|:---:|
-| 11.1125 | 3.33 | 40 | 3.2875 | 1.0 |
-| 3.2077 | 6.67 | 80 | 3.1499 | 1.0 |
-| 3.1725 | 10.0 | 120 | 3.1484 | 1.0 |
-| 3.148 | 13.33 | 160 | 3.0948 | 1.0 |
-| 3.1098 | 16.67 | 200 | 3.0897 | 1.0 |
-| 3.085 | 20.0 | 240 | 3.0609 | 1.0 |
-| 3.0315 | 23.33 | 280 | 2.9636 | 1.0 |
-| 2.9038 | 26.67 | 320 | 2.7838 | 1.0 |
-| 2.7599 | 30.0 | 360 | 2.6772 | 1.0 |
+| Training Loss | Epoch | Step | Validation Loss | Wer    | Cer    |
+|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
+| 2.9707 | 8.33 | 100 | 1.2689 | 0.8463 | 0.4373 |
+| 0.746 | 16.67 | 200 | 1.2370 | 0.7214 | 0.3486 |
+| 0.3719 | 25.0 | 300 | 1.3885 | 0.6908 | 0.3381 |
+| 0.2411 | 33.33 | 400 | 1.4780 | 0.6690 | 0.3186 |
+| 0.1841 | 41.67 | 500 | 1.5557 | 0.6629 | 0.3241 |
+| 0.165 | 50.0 | 600 | 1.5727 | 0.6620 | 0.3166 |
 
 
 ### Framework versions
 
-- Transformers 4.11.3
+- Transformers 4.15.0
 - Pytorch 1.10.0+cu111
 - Datasets 1.17.0
 - Tokenizers 0.10.3
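Both sides of the diff list `total_train_batch_size: 32` alongside `train_batch_size: 16` and `gradient_accumulation_steps: 2`: the total is a derived value, the per-device batch size times the accumulation steps. A minimal sketch of that relationship, assuming a single-GPU run (the card does not state the device count):

```python
# Effective (total) train batch size under gradient accumulation:
# per-device batch size x accumulation steps x number of devices.
train_batch_size = 16            # per-device batch size from the card
gradient_accumulation_steps = 2  # from the card
num_devices = 1                  # assumption: single-GPU training

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 32, matching total_train_batch_size in the card
```

With accumulation, the optimizer updates once per two forward/backward passes, so each step counted in the training-results table consumes 32 examples.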
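The updated results report both Wer and Cer. Both are normalized Levenshtein (edit) distances between the reference transcript and the model output: WER over whitespace-separated words, CER over characters. A minimal sketch of the computation (the example strings below are illustrative, not drawn from the Common Voice Urdu evaluation set):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, using a single rolling DP row."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (r != h))  # substitution (0 if equal)
    return dp[-1]

def wer(reference, hypothesis):
    """Word error rate: edit distance over words / number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference, hypothesis):
    """Character error rate: edit distance over characters / reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)

# One substitution ("sat" -> "sit") plus one deletion ("down") over 4 words:
print(wer("the cat sat down", "the cat sit"))  # 0.5
```

Evaluation scripts normally use a library implementation (for example the `jiwer` package, which also handles text normalization) rather than hand-rolled code, but the definition is the same: the old card's Wer of 1.0 means every reference word required an edit, while the new 0.6620 means roughly two edits per three reference words.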