dmusingu committed
Commit 75e4ae6
1 Parent(s): 34323cb

End of training

Files changed (1)
  1. README.md +54 -2
README.md CHANGED
@@ -3,9 +3,26 @@ license: apache-2.0
 base_model: facebook/wav2vec2-xls-r-1b
 tags:
 - generated_from_trainer
+datasets:
+- common_voice_14_0
+metrics:
+- wer
 model-index:
 - name: XLS-R-LUGANDA-ASR-CV-14-1B
-  results: []
+  results:
+  - task:
+      name: Automatic Speech Recognition
+      type: automatic-speech-recognition
+    dataset:
+      name: common_voice_14_0
+      type: common_voice_14_0
+      config: lg
+      split: test
+      args: lg
+    metrics:
+    - name: Wer
+      type: wer
+      value: 0.30603965548369283
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -13,7 +30,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # XLS-R-LUGANDA-ASR-CV-14-1B
 
-This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on an unknown dataset.
+This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice_14_0 dataset.
+It achieves the following results on the evaluation set:
+- Loss: inf
+- Wer: 0.3060
+- Cer: 0.0713
 
 ## Model description
 
@@ -44,6 +65,37 @@ The following hyperparameters were used during training:
 - training_steps: 10000
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch | Step  | Validation Loss | Wer    | Cer    |
+|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|
+| 1.5535        | 0.18  | 400   | inf             | 0.6590 | 0.1685 |
+| 0.539         | 0.36  | 800   | inf             | 0.5934 | 0.1516 |
+| 0.49          | 0.54  | 1200  | inf             | 0.5466 | 0.1365 |
+| 0.4569        | 0.72  | 1600  | inf             | 0.5523 | 0.1364 |
+| 0.4845        | 0.45  | 2000  | inf             | 0.5907 | 0.1525 |
+| 0.4592        | 0.54  | 2400  | inf             | 0.5766 | 0.1485 |
+| 0.4447        | 0.63  | 2800  | inf             | 0.5482 | 0.1397 |
+| 0.426         | 0.72  | 3200  | inf             | 0.5290 | 0.1352 |
+| 0.4454        | 0.81  | 3600  | inf             | 0.5330 | 0.1333 |
+| 0.4188        | 0.9   | 4000  | inf             | 0.4903 | 0.1240 |
+| 0.4083        | 0.99  | 4400  | inf             | 0.4857 | 0.1226 |
+| 0.367         | 1.08  | 4800  | inf             | 0.4499 | 0.1114 |
+| 0.3468        | 1.17  | 5200  | inf             | 0.4345 | 0.1063 |
+| 0.3401        | 1.27  | 5600  | inf             | 0.4130 | 0.1009 |
+| 0.3269        | 1.36  | 6000  | inf             | 0.4113 | 0.1004 |
+| 0.3171        | 1.45  | 6400  | inf             | 0.3934 | 0.0956 |
+| 0.2996        | 1.54  | 6800  | inf             | 0.3803 | 0.0913 |
+| 0.288         | 1.63  | 7200  | inf             | 0.3681 | 0.0891 |
+| 0.2812        | 1.72  | 7600  | inf             | 0.3573 | 0.0853 |
+| 0.2699        | 1.81  | 8000  | inf             | 0.3504 | 0.0835 |
+| 0.2584        | 1.9   | 8400  | inf             | 0.3343 | 0.0786 |
+| 0.2424        | 1.99  | 8800  | inf             | 0.3232 | 0.0759 |
+| 0.2201        | 2.08  | 9200  | inf             | 0.3176 | 0.0740 |
+| 0.2031        | 2.17  | 9600  | inf             | 0.3085 | 0.0719 |
+| 0.2007        | 2.26  | 10000 | inf             | 0.3060 | 0.0713 |
+
+
 ### Framework versions
 
 - Transformers 4.38.1
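
The committed card does not include a usage snippet. A minimal inference sketch follows, assuming the checkpoint is published under the repo id `dmusingu/XLS-R-LUGANDA-ASR-CV-14-1B` (inferred from the committer and model name, not confirmed by this diff) and that ffmpeg is available for decoding the audio file.

```python
# Minimal inference sketch for a fine-tuned wav2vec2 CTC checkpoint.
# The repo id below is an assumption; substitute the actual model repo.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="dmusingu/XLS-R-LUGANDA-ASR-CV-14-1B",  # assumed repo id
)

# Transcribe a local recording of Luganda speech (mono, ideally 16 kHz).
result = asr("luganda_sample.wav")
print(result["text"])
```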
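
The Wer in the metadata (0.30603965548369283) and the Cer of 0.0713 are reported against the Common Voice 14.0 Luganda (`lg`) test split. Below is a hedged sketch of how such scores could be recomputed with the `evaluate` library; it assumes the same repo id as above, access to the gated `mozilla-foundation/common_voice_14_0` dataset, and it omits whatever text normalisation the training script may have applied, so exact numbers may differ.

```python
# Sketch: recompute WER/CER on the Common Voice 14.0 Luganda test split.
# Assumptions: repo id, gated dataset access (huggingface-cli login), no
# card-specific text normalisation.
import evaluate
from datasets import Audio, load_dataset
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="dmusingu/XLS-R-LUGANDA-ASR-CV-14-1B",  # assumed repo id
)

# Luganda test split, resampled to the 16 kHz expected by XLS-R models.
ds = load_dataset("mozilla-foundation/common_voice_14_0", "lg", split="test")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions, references = [], []
for sample in ds.select(range(100)):  # small subset for a quick check
    out = asr(sample["audio"]["array"])  # raw array already at 16 kHz
    predictions.append(out["text"].lower())
    references.append(sample["sentence"].lower())

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```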