PereLluis13 committed
Commit fa16ca3
1 parent: e2a01cb

update model card README.md

Files changed (1):
1. README.md (+22 −66)
README.md CHANGED

@@ -8,70 +8,9 @@ tags:
 - collectivat/tv3_parla
 - projecte-aina/parlament_parla
 - generated_from_trainer
-- robust-speech-event
-datasets:
-- mozilla-foundation/common_voice_8_0
-- collectivat/tv3_parla
-- projecte-aina/parlament_parla
 model-index:
 - name: wav2vec2-xls-r-300m-ca
-  results:
-  - task:
-      name: Speech Recognition
-      type: automatic-speech-recognition
-    dataset:
-      name: mozilla-foundation/common_voice_8_0 ca
-      type: mozilla-foundation/common_voice_8_0
-      args: ca
-    metrics:
-    - name: Test WER
-      type: wer
-      value: 0.1522665117742443
-    - name: Test CER
-      type: cer
-      value: 0.04078709154868726
-  - task:
-      name: Speech Recognition
-      type: automatic-speech-recognition
-    dataset:
-      name: projecte-aina/parlament_parla ca
-      type: projecte-aina/parlament_parla
-      args: clean
-    metrics:
-    - name: Test WER
-      type: wer
-      value: 0.06541946111307212
-    - name: Test CER
-      type: cer
-      value: 0.02205785796827398
-  - task:
-      name: Speech Recognition
-      type: automatic-speech-recognition
-    dataset:
-      name: collectivat/tv3_parla ca
-      type: collectivat/tv3_parla
-      args: ca
-    metrics:
-    - name: Test WER
-      type: wer
-      value: 0.24485121453593564
-    - name: Test CER
-      type: cer
-      value: 0.10753510718204506
-  - task:
-      name: Speech Recognition
-      type: automatic-speech-recognition
-    dataset:
-      name: Robust Speech Event - Catalan Dev Data
-      type: speech-recognition-community-v2/dev_data
-      args: ca
-    metrics:
-    - name: Test WER
-      type: wer
-      value: 0.3325532856871798
-    - name: Test CER
-      type: cer
-      value: 0.15916561314791403
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -81,8 +20,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - CA dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.2549
-- Wer: 0.1573
+- Loss: 0.2472
+- Wer: 0.1499
 
 ## Model description
 
@@ -110,7 +49,7 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 2000
-- num_epochs: 12.0
+- num_epochs: 18.0
 - mixed_precision_training: Native AMP
 
 ### Training results
@@ -162,11 +101,28 @@ The following hyperparameters were used during training:
 | 1.0805 | 11.45 | 21500 | 0.2561 | 0.1524 |
 | 1.0722 | 11.72 | 22000 | 0.2540 | 0.1566 |
 | 1.0763 | 11.99 | 22500 | 0.2549 | 0.1572 |
+| 1.0835 | 12.25 | 23000 | 0.2586 | 0.1521 |
+| 1.0883 | 12.52 | 23500 | 0.2583 | 0.1519 |
+| 1.0888 | 12.79 | 24000 | 0.2551 | 0.1582 |
+| 1.0933 | 13.05 | 24500 | 0.2628 | 0.1537 |
+| 1.0799 | 13.32 | 25000 | 0.2600 | 0.1508 |
+| 1.0804 | 13.59 | 25500 | 0.2620 | 0.1475 |
+| 1.0814 | 13.85 | 26000 | 0.2537 | 0.1517 |
+| 1.0693 | 14.12 | 26500 | 0.2560 | 0.1542 |
+| 1.0724 | 14.38 | 27000 | 0.2540 | 0.1574 |
+| 1.0704 | 14.65 | 27500 | 0.2548 | 0.1626 |
+| 1.0729 | 14.92 | 28000 | 0.2548 | 0.1601 |
+| 1.0724 | 15.18 | 28500 | 0.2511 | 0.1512 |
+| 1.0655 | 15.45 | 29000 | 0.2498 | 0.1490 |
+| 1.0608 | 15.98 | 30000 | 0.2487 | 0.1481 |
+| 1.0541 | 16.52 | 31000 | 0.2468 | 0.1504 |
+| 1.0584 | 17.05 | 32000 | 0.2467 | 0.1493 |
+| 1.0507 | 17.58 | 33000 | 0.2481 | 0.1517 |
 
 
 ### Framework versions
 
 - Transformers 4.16.0.dev0
 - Pytorch 1.10.1+cu102
-- Datasets 1.18.1
+- Datasets 1.18.3
 - Tokenizers 0.11.0
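
The Wer values in this diff are word error rates: the word-level Levenshtein (edit) distance between a hypothesis transcript and the reference, divided by the number of reference words. As a minimal standalone sketch of that metric (this is not the evaluation code used for this card, which would typically rely on a library such as `jiwer` or `datasets`'s `wer` metric):

```python
def edit_distance(ref: list, hyp: list) -> int:
    # Classic dynamic-programming Levenshtein distance over word lists:
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j].
    m, n = len(ref), len(hyp)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # i deletions
    for j in range(n + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,        # deletion
                dp[i][j - 1] + 1,        # insertion
                dp[i - 1][j - 1] + cost, # substitution or match
            )
    return dp[m][n]


def wer(reference: str, hypothesis: str) -> float:
    # Word error rate = word-level edit distance / reference length.
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)
```

For example, `wer("a b c d", "a x c")` is 0.5: one substitution plus one deletion over four reference words. A Wer of 0.1499, as reported above, means roughly 15 word-level errors per 100 reference words.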
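
The hyperparameters list a linear `lr_scheduler_type` with 2000 warmup steps. That schedule ramps the learning rate linearly from 0 to its peak over the warmup steps, then decays it linearly back to 0 by the end of training, as in Transformers' `get_linear_schedule_with_warmup`. A sketch of the shape (the `base_lr` and `total_steps` values here are illustrative, not taken from this card):

```python
def linear_warmup_lr(step: int, base_lr: float,
                     warmup_steps: int, total_steps: int) -> float:
    """Linear warmup to base_lr, then linear decay to 0."""
    if step < warmup_steps:
        # Warmup phase: ramp from 0 up to base_lr.
        return base_lr * step / warmup_steps
    # Decay phase: ramp from base_lr down to 0 at total_steps.
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

Halfway through a 2000-step warmup the learning rate is half its peak; after `total_steps` it stays at 0. Plugging in the card's 2000 warmup steps with a hypothetical peak and horizon shows the triangle-shaped schedule the trainer followed.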