---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-lg-cv-130hr-v2
  results: []
---


[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/asr-africa-research-team/ASR%20Africa/runs/kxx501jb)
# wav2vec2-large-xls-r-300m-lg-cv-130hr-v2

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on a dataset that is not documented in this card (see **Training and evaluation data** below).
It achieves the following results on the evaluation set (a note on the metrics follows the list):
- Loss: 0.4316
- Wer: 0.2045
- Cer: 0.0457
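
WER (word error rate) and CER (character error rate) are edit-distance error rates; lower is better. As a sketch, they can be recomputed with the `evaluate` library — the transcript strings below are placeholders, not data from this model's evaluation set:

```python
import evaluate

# Standard Hugging Face metric implementations (backed by jiwer).
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Placeholder reference/hypothesis pairs for illustration only.
references = ["the quick brown fox", "hello world"]
predictions = ["the quick brown fox", "hello word"]

print("WER:", wer_metric.compute(references=references, predictions=predictions))
print("CER:", cer_metric.compute(references=references, predictions=predictions))
```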

## Model description

A CTC-based automatic speech recognition model obtained by fine-tuning the ~300M-parameter multilingual XLS-R checkpoint. Like other wav2vec 2.0 models, it expects 16 kHz mono audio input.

## Intended uses & limitations

Suitable for transcribing 16 kHz speech in the fine-tuning domain; as with any fine-tuned ASR model, accuracy will likely degrade on noisy, accented, or out-of-domain audio.
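
A minimal inference sketch using the standard `transformers` CTC API — the repo id and audio path below are illustrative, not confirmed by this card:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hypothetical repo id; substitute the actual Hub path of this model.
model_id = "asr-africa/wav2vec2-large-xls-r-300m-lg-cv-130hr-v2"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id).eval()

# Load an example clip (illustrative path) and resample to the
# 16 kHz mono input that XLS-R expects.
waveform, sr = torchaudio.load("example.wav")
waveform = torchaudio.functional.resample(waveform, sr, 16_000).mean(dim=0)

inputs = processor(waveform.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```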

## Training and evaluation data

Not documented in this card. The model name (`lg-cv-130hr`) suggests roughly 130 hours of Luganda (`lg`) Common Voice speech, and the per-epoch step count below (5,194 steps at batch size 16) implies on the order of 83k training utterances, but neither is confirmed here.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
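
These settings correspond roughly to the `TrainingArguments` below — a sketch, with `output_dir` and anything not listed above being assumptions:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-lg-cv-130hr-v2",  # illustrative
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=500,
    num_train_epochs=50,
    fp16=True,  # "Native AMP" mixed precision
)
```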

### Training results

| Training Loss | Epoch | Step   | Validation Loss | Wer    | Cer    |
|:-------------:|:-----:|:------:|:---------------:|:------:|:------:|
| 0.7427        | 1.0   | 5194   | 0.2769          | 0.3444 | 0.0749 |
| 0.2071        | 2.0   | 10388  | 0.2579          | 0.2897 | 0.0640 |
| 0.1752        | 3.0   | 15582  | 0.2357          | 0.2781 | 0.0608 |
| 0.1555        | 4.0   | 20776  | 0.2445          | 0.2626 | 0.0581 |
| 0.1437        | 5.0   | 25970  | 0.2393          | 0.2610 | 0.0584 |
| 0.1341        | 6.0   | 31164  | 0.2422          | 0.2572 | 0.0578 |
| 0.1244        | 7.0   | 36358  | 0.2358          | 0.2523 | 0.0562 |
| 0.1148        | 8.0   | 41552  | 0.2342          | 0.2532 | 0.0556 |
| 0.1077        | 9.0   | 46746  | 0.2475          | 0.2462 | 0.0551 |
| 0.0983        | 10.0  | 51940  | 0.2505          | 0.2439 | 0.0549 |
| 0.0895        | 11.0  | 57134  | 0.2556          | 0.2482 | 0.0556 |
| 0.0812        | 12.0  | 62328  | 0.2633          | 0.2492 | 0.0551 |
| 0.0727        | 13.0  | 67522  | 0.2726          | 0.2448 | 0.0546 |
| 0.0676        | 14.0  | 72716  | 0.2694          | 0.2464 | 0.0539 |
| 0.0611        | 15.0  | 77910  | 0.2984          | 0.2423 | 0.0533 |
| 0.0557        | 16.0  | 83104  | 0.2989          | 0.2418 | 0.0534 |
| 0.0520        | 17.0  | 88298  | 0.3213          | 0.2402 | 0.0534 |
| 0.0478        | 18.0  | 93492  | 0.3279          | 0.2390 | 0.0532 |
| 0.0451        | 19.0  | 98686  | 0.3247          | 0.2352 | 0.0522 |
| 0.0422        | 20.0  | 103880 | 0.3452          | 0.2344 | 0.0521 |
| 0.0393        | 21.0  | 109074 | 0.3420          | 0.2384 | 0.0529 |
| 0.0378        | 22.0  | 114268 | 0.3429          | 0.2301 | 0.0509 |
| 0.0360        | 23.0  | 119462 | 0.3442          | 0.2356 | 0.0520 |
| 0.0337        | 24.0  | 124656 | 0.3563          | 0.2276 | 0.0502 |
| 0.0319        | 25.0  | 129850 | 0.3480          | 0.2262 | 0.0499 |
| 0.0299        | 26.0  | 135044 | 0.3634          | 0.2233 | 0.0491 |
| 0.0281        | 27.0  | 140238 | 0.3593          | 0.2265 | 0.0499 |
| 0.0269        | 28.0  | 145432 | 0.3640          | 0.2242 | 0.0491 |
| 0.0249        | 29.0  | 150626 | 0.3713          | 0.2225 | 0.0491 |
| 0.0246        | 30.0  | 155820 | 0.3849          | 0.2193 | 0.0489 |
| 0.0228        | 31.0  | 161014 | 0.3869          | 0.2199 | 0.0482 |
| 0.0215        | 32.0  | 166208 | 0.3933          | 0.2182 | 0.0483 |
| 0.0205        | 33.0  | 171402 | 0.3920          | 0.2158 | 0.0471 |
| 0.0191        | 34.0  | 176596 | 0.3992          | 0.2166 | 0.0479 |
| 0.0183        | 35.0  | 181790 | 0.3969          | 0.2127 | 0.0467 |
| 0.0176        | 36.0  | 186984 | 0.3998          | 0.2138 | 0.0472 |
| 0.0168        | 37.0  | 192178 | 0.4068          | 0.2107 | 0.0464 |
| 0.0160        | 38.0  | 197372 | 0.4216          | 0.2113 | 0.0477 |
| 0.0154        | 39.0  | 202566 | 0.4102          | 0.2102 | 0.0469 |
| 0.0149        | 40.0  | 207760 | 0.4267          | 0.2087 | 0.0465 |
| 0.0142        | 41.0  | 212954 | 0.4248          | 0.2097 | 0.0469 |
| 0.0136        | 42.0  | 218148 | 0.4254          | 0.2074 | 0.0467 |
| 0.0133        | 43.0  | 223342 | 0.4304          | 0.2074 | 0.0465 |
| 0.0131        | 44.0  | 228536 | 0.4314          | 0.2063 | 0.0462 |
| 0.0126        | 45.0  | 233730 | 0.4267          | 0.2047 | 0.0458 |
| 0.0122        | 46.0  | 238924 | 0.4291          | 0.2049 | 0.0457 |
| 0.0120        | 47.0  | 244118 | 0.4305          | 0.2048 | 0.0458 |
| 0.0122        | 48.0  | 249312 | 0.4321          | 0.2046 | 0.0457 |
| 0.0121        | 49.0  | 254506 | 0.4316          | 0.2045 | 0.0457 |
| 0.0122        | 50.0  | 259700 | 0.4318          | 0.2045 | 0.0457 |


### Framework versions

- Transformers 4.42.3
- Pytorch 2.2.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
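
A compatible environment can be approximated with the pins below (a requirements-style sketch; the appropriate PyTorch CUDA wheel depends on your platform):

```
transformers==4.42.3
torch==2.2.0        # the card used the 2.2.0+cu121 (CUDA 12.1) build
datasets==2.20.0
tokenizers==0.19.1
```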