kylzer committed
Commit 6d67ac8
1 Parent(s): bfcc7af

update model card README.md

Files changed (1):
  1. README.md +15 -15
README.md CHANGED
@@ -16,12 +16,12 @@ model-index:
       name: common_voice
       type: common_voice
       config: id
-      split: train+validation
+      split: test
       args: id
     metrics:
     - name: Wer
       type: wer
-      value: 0.22400210084033614
+      value: 0.36856617647058826
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,8 +31,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3589
-- Wer: 0.2240
+- Loss: 0.3839
+- Wer: 0.3686
 
 ## Model description
 
@@ -67,19 +67,19 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |
 |:-------------:|:-----:|:----:|:---------------:|:------:|
-| 2.3848 | 3.64 | 400 | 0.7048 | 0.6599 |
-| 0.5612 | 7.27 | 800 | 0.4098 | 0.3711 |
-| 0.3146 | 10.91 | 1200 | 0.4011 | 0.3258 |
-| 0.225 | 14.55 | 1600 | 0.3816 | 0.2799 |
-| 0.1787 | 18.18 | 2000 | 0.3890 | 0.2673 |
-| 0.1473 | 21.82 | 2400 | 0.3614 | 0.2466 |
-| 0.1214 | 25.45 | 2800 | 0.3590 | 0.2388 |
-| 0.1057 | 29.09 | 3200 | 0.3589 | 0.2240 |
+| 4.4354 | 3.64 | 400 | 1.9595 | 1.0 |
+| 0.7227 | 7.27 | 800 | 0.4532 | 0.5039 |
+| 0.3293 | 10.91 | 1200 | 0.4277 | 0.4425 |
+| 0.2298 | 14.55 | 1600 | 0.3947 | 0.4182 |
+| 0.1789 | 18.18 | 2000 | 0.3960 | 0.4009 |
+| 0.1496 | 21.82 | 2400 | 0.3793 | 0.3848 |
+| 0.122 | 25.45 | 2800 | 0.3794 | 0.3795 |
+| 0.1056 | 29.09 | 3200 | 0.3839 | 0.3686 |
 
 
 ### Framework versions
 
-- Transformers 4.25.1
-- Pytorch 1.13.0+cu116
-- Datasets 2.7.1
+- Transformers 4.26.1
+- Pytorch 1.13.1+cu116
+- Datasets 2.10.0
 - Tokenizers 0.13.2
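
Note: this commit changes the reported metric split from `train+validation` to `test` (WER 0.3686). The following is a minimal sketch, not from the model card, of how such a test-split WER could be checked with `transformers`, `datasets`, and `evaluate`. The checkpoint repo id is a placeholder (it is not shown in this diff), and the exact text normalization used for the reported value is unspecified, so results may differ.

```python
# Sketch only: assumes the fine-tuned checkpoint's repo id (placeholder below)
# and that the card's "common_voice" config "id", split "test" is what was evaluated.
import torch
import evaluate
from datasets import Audio, load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_ID = "<your-model-repo-id>"  # placeholder; not given in this diff

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID).eval()

# Common Voice Indonesian, test split (the split this commit reports WER on)
ds = load_dataset("common_voice", "id", split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

wer_metric = evaluate.load("wer")
predictions, references = [], []

for sample in ds:
    inputs = processor(
        sample["audio"]["array"],
        sampling_rate=16_000,
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    predictions.append(processor.batch_decode(pred_ids)[0])
    references.append(sample["sentence"])

print("WER:", wer_metric.compute(predictions=predictions, references=references))
```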