kingabzpro committed
Commit
06f71f9
1 Parent(s): 47bb259

update model card README.md

Files changed (1)
  1. README.md +32 -64
README.md CHANGED
@@ -1,57 +1,12 @@
 ---
-language:
-- pa-IN
-
-license: apache-2.0
+license: mit
 tags:
-- automatic-speech-recognition
-- robust-speech-event
+- generated_from_trainer
 datasets:
-- mozilla-foundation/common_voice_7_0
-metrics:
-- wer
-- cer
+- common_voice
 model-index:
 - name: wav2vec2-large-xlsr-53-punjabi
-  results:
-  - task:
-      type: automatic-speech-recognition # Required. Example: automatic-speech-recognition
-      name: Speech Recognition # Optional. Example: Speech Recognition
-    dataset:
-      type: mozilla-foundation/common_voice_7_0 # Required. Example: common_voice. Use dataset id from https://hf.co/datasets
-      name: Common Voice pa-IN # Required. Example: Common Voice zh-CN
-      args: pa-IN # Optional. Example: zh-CN
-    metrics:
-    - type: wer # Required. Example: wer
-      value: 39.42 # Required. Example: 20.90
-      name: Test WER # Optional. Example: Test WER
-      args:
-      - learning_rate: 0.0003
-      - train_batch_size: 16
-      - eval_batch_size: 8
-      - seed: 42
-      - gradient_accumulation_steps: 2
-      - total_train_batch_size: 32
-      - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-      - lr_scheduler_type: linear
-      - lr_scheduler_warmup_steps: 200
-      - num_epochs: 30
-      - mixed_precision_training: Native AMP # Optional. Example for BLEU: max_order
-    - type: cer # Required. Example: wer
-      value: 12.99 # Required. Example: 20.90
-      name: Test CER # Optional. Example: Test WER
-      args:
-      - learning_rate: 0.0003
-      - train_batch_size: 16
-      - eval_batch_size: 8
-      - seed: 42
-      - gradient_accumulation_steps: 2
-      - total_train_batch_size: 32
-      - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-      - lr_scheduler_type: linear
-      - lr_scheduler_warmup_steps: 200
-      - num_epochs: 30
-      - mixed_precision_training: Native AMP # Optional. Example for BLEU: max_order
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -59,11 +14,23 @@ should probably proofread and complete it, then remove this comment. -->
 
 # wav2vec2-large-xlsr-53-punjabi
 
-This model is a fine-tuned version of [manandey/wav2vec2-large-xlsr-punjabi](https://huggingface.co/manandey/wav2vec2-large-xlsr-punjabi) on the common_voice dataset.
+This model is a fine-tuned version of [Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10](https://huggingface.co/Harveenchadha/vakyansh-wav2vec2-punjabi-pam-10) on the common_voice dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6752
-- Wer: 0.3942
-- Cer: 0.1299
+- Loss: 1.2101
+- Wer: 0.4939
+- Cer: 0.2238
+
+## Model description
+
+More information needed
+
+## Intended uses & limitations
+
+More information needed
+
+## Training and evaluation data
+
+More information needed
 
 ## Training procedure
 
@@ -86,18 +53,19 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Wer    | Cer    |
 |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
-| 0.8899        | 4.16  | 100  | 0.5338          | 0.4233 | 0.1394 |
-| 0.3652        | 8.33  | 200  | 0.5759          | 0.4192 | 0.1349 |
-| 0.248         | 12.49 | 300  | 0.6309          | 0.4102 | 0.1327 |
-| 0.1898        | 16.65 | 400  | 0.6441          | 0.4007 | 0.1351 |
-| 0.1486        | 20.82 | 500  | 0.6790          | 0.4044 | 0.1393 |
-| 0.1245        | 24.98 | 600  | 0.6869          | 0.3987 | 0.1309 |
-| 0.1085        | 29.16 | 700  | 0.6752          | 0.3942 | 0.1299 |
+| 11.0563       | 3.7   | 100  | 1.9492          | 0.7123 | 0.3872 |
+| 1.6715        | 7.41  | 200  | 1.3142          | 0.6433 | 0.3086 |
+| 0.9117        | 11.11 | 300  | 1.2733          | 0.5657 | 0.2627 |
+| 0.666         | 14.81 | 400  | 1.2730          | 0.5598 | 0.2534 |
+| 0.4225        | 18.52 | 500  | 1.2548          | 0.5300 | 0.2399 |
+| 0.3209        | 22.22 | 600  | 1.2166          | 0.5229 | 0.2372 |
+| 0.2678        | 25.93 | 700  | 1.1795          | 0.5041 | 0.2276 |
+| 0.2088        | 29.63 | 800  | 1.2101          | 0.4939 | 0.2238 |
 
 
 ### Framework versions
 
-- Transformers 4.15.0
-- Pytorch 1.10.0+cu111
-- Datasets 1.17.0
-- Tokenizers 0.10.3
+- Transformers 4.17.0.dev0
+- Pytorch 1.10.2+cu102
+- Datasets 1.18.2.dev0
+- Tokenizers 0.11.0
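For readers who want to try the updated checkpoint described in the card above, here is a minimal inference and scoring sketch. It assumes the model is published under the repo id `kingabzpro/wav2vec2-large-xlsr-53-punjabi` (inferred from the committer and model name, not stated in the diff), that the checkpoint exposes the standard Wav2Vec2 CTC interface used by `transformers`, and that `jiwer` is installed for WER/CER; the audio path and reference transcript are placeholders.

```python
# Hedged sketch: load the fine-tuned Punjabi ASR model and score one clip.
# The repo id, audio path, and reference text below are illustrative assumptions.
import jiwer
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="kingabzpro/wav2vec2-large-xlsr-53-punjabi",  # assumed repo id
)

# Transcribe a local 16 kHz mono audio file (placeholder path).
prediction = asr("sample_punjabi.wav")["text"]

# Compare against a known reference transcript to get the same kind of
# metrics reported in the card (word error rate and character error rate).
reference = "ਤੁਹਾਡਾ ਧੰਨਵਾਦ"  # placeholder Punjabi reference
print("WER:", jiwer.wer(reference, prediction))
print("CER:", jiwer.cer(reference, prediction))
```

Run over the full Common Voice pa-IN test split, this kind of loop is what yields the WER/CER figures in the table; a single clip only serves as a sanity check.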