---
license: apache-2.0
base_model: nutella-toast/wav2vec2-large-xls-r-ssw
tags:
- generated_from_trainer
datasets:
- ml-superb-subset
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-ssw
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: ml-superb-subset
      type: ml-superb-subset
      config: ssw
      split: dev
      args: ssw
    metrics:
    - name: Wer
      type: wer
      value: 0.7320872274143302
---


# wav2vec2-large-xls-r-ssw

This model is a fine-tuned version of [nutella-toast/wav2vec2-large-xls-r-ssw](https://huggingface.co/nutella-toast/wav2vec2-large-xls-r-ssw) on the ml-superb-subset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7327
- Wer: 0.7321

## Model description

A fine-tuned version of the vanilla wav2vec2-large-xls-r model for siSwati (Swati), developed for CS224S at Stanford University.
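
For quick testing, the model can be loaded with the standard `transformers` CTC classes. A minimal inference sketch (the audio path `sample.wav` is a placeholder; audio is resampled to the 16 kHz rate XLS-R expects):

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "nutella-toast/wav2vec2-large-xls-r-ssw"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load audio and resample to 16 kHz (placeholder file path).
waveform, sample_rate = torchaudio.load("sample.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```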

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows the list):
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
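
A minimal sketch of the corresponding `TrainingArguments`, assuming the standard `transformers` `Trainer` setup; `output_dir` is a placeholder, and Adam with betas=(0.9, 0.999) and epsilon=1e-08 plus the linear schedule are the library defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-ssw",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size of 8
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10,
    fp16=True,  # Native AMP mixed-precision training
)
```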

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.5779        | 1.0471 | 100  | 0.7902          | 0.8785 |
| 0.5307        | 2.0942 | 200  | 0.8185          | 0.8660 |
| 0.4826        | 3.1414 | 300  | 0.8378          | 0.8692 |
| 0.4529        | 4.1885 | 400  | 0.8048          | 0.9097 |
| 0.5053        | 5.2356 | 500  | 0.9541          | 0.8910 |
| 0.4149        | 6.2827 | 600  | 0.7687          | 0.7913 |
| 0.3179        | 7.3298 | 700  | 0.7678          | 0.7850 |
| 0.2642        | 8.3770 | 800  | 0.7151          | 0.7321 |
| 0.2147        | 9.4241 | 900  | 0.7327          | 0.7321 |
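
The reported WER can be recomputed with the Hugging Face `evaluate` library; the transcripts below are placeholders, not dataset examples:

```python
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["sawubona mhlaba"]   # model output (placeholder)
references = ["sawubona umhlaba"]   # ground-truth transcript (placeholder)
print(wer_metric.compute(predictions=predictions, references=references))
```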


### Framework versions

- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1