---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
datasets:
- xtreme_s
metrics:
- wer
model-index:
- name: wav2vec2-XLS-R-Fleurs-demo-google-colab-Ezra_William_Prod6
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: xtreme_s
      type: xtreme_s
      config: fleurs.id_id
      split: test
      args: fleurs.id_id
    metrics:
    - name: Wer
      type: wer
      value: 0.4653494985780572
---

# wav2vec2-XLS-R-Fleurs-demo-google-colab-Ezra_William_Prod6

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the xtreme_s dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9273
- Wer: 0.4653

## Model description

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53), a wav2vec 2.0 encoder pretrained on cross-lingual speech from 53 languages (XLSR-53). It was adapted for Indonesian automatic speech recognition on the FLEURS `fleurs.id_id` configuration of XTREME-S, presumably with a CTC head on top of the pretrained encoder, the standard setup for wav2vec2 ASR fine-tuning.

## Intended uses & limitations

The model is intended for transcribing Indonesian speech recorded or resampled at 16 kHz, in the read-speech domain covered by FLEURS. At a test-set WER of roughly 0.47 it mis-recognizes close to half of the words, so it is best treated as a demo/baseline checkpoint rather than a production ASR system; behavior on spontaneous speech, other domains, or noisy audio has not been evaluated.
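
A minimal inference sketch with the Transformers library is given below. The `your-username/` repo prefix and `sample.wav` are placeholders, and greedy CTC decoding is an assumption rather than something the card confirms:

```python
# A minimal inference sketch; "your-username/" is a placeholder namespace and
# "sample.wav" a placeholder input file. Greedy CTC decoding is assumed.
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "your-username/wav2vec2-XLS-R-Fleurs-demo-google-colab-Ezra_William_Prod6"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load the audio and resample to the 16 kHz rate wav2vec2 expects.
waveform, sample_rate = torchaudio.load("sample.wav")
waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy decoding: take the most likely token per frame; batch_decode collapses
# repeats and blanks into the final transcription.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```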

## Training and evaluation data

Training and evaluation used the Indonesian FLEURS configuration (`fleurs.id_id`) of the XTREME-S benchmark. The loss and WER reported above are measured on the test split of that configuration.
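
As a sketch, the data can presumably be loaded with the `datasets` library as follows; the `google/xtreme_s` hub id matches the metadata above, while the `transcription` field name is an assumption about the FLEURS schema:

```python
# A sketch of loading the data named in the card metadata; the hub id and the
# "transcription" field are assumptions about the XTREME-S FLEURS schema.
from datasets import load_dataset

train = load_dataset("google/xtreme_s", "fleurs.id_id", split="train")
test = load_dataset("google/xtreme_s", "fleurs.id_id", split="test")

# Each example pairs a 16 kHz audio array with its text.
sample = train[0]
print(sample["audio"]["sampling_rate"], sample["transcription"])
```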

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch after this list):
- learning_rate: 0.001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 120
- mixed_precision_training: Native AMP
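
A hedged `TrainingArguments` sketch that mirrors the list above; `output_dir` is a placeholder and every unlisted argument keeps its Transformers default (the Adam betas and epsilon above are already the defaults):

```python
# A sketch mirroring the hyperparameters above; output_dir is a placeholder
# and all other TrainingArguments keep their defaults, including Adam
# betas=(0.9, 0.999) and epsilon=1e-08.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xlsr-fleurs-id",  # placeholder
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 16 x 2 = total train batch size 32
    lr_scheduler_type="linear",
    warmup_steps=600,
    num_train_epochs=120,
    fp16=True,  # "Native AMP" mixed-precision training
)
```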

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 4.8946        | 18.18  | 300  | 2.8697          | 1.0    |
| 1.0963        | 36.36  | 600  | 0.8022          | 0.5904 |
| 0.1715        | 54.55  | 900  | 0.9387          | 0.5567 |
| 0.0953        | 72.73  | 1200 | 0.9120          | 0.5082 |
| 0.0651        | 90.91  | 1500 | 0.9362          | 0.4854 |
| 0.044         | 109.09 | 1800 | 0.9273          | 0.4653 |
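
The Wer column is the word error rate: (substitutions + deletions + insertions) divided by the number of reference words. A minimal sketch of computing it with the `evaluate` library (an assumption; the Trainer logs do not name the metric backend):

```python
# Word error rate on a toy Indonesian pair; the strings are illustrative only.
import evaluate

wer_metric = evaluate.load("wer")
score = wer_metric.compute(
    predictions=["saya pergi ke pasar"],
    references=["saya pergi ke pasar pagi"],
)
print(score)  # 0.2: one deletion over five reference words
```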


### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1