---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
datasets:
- xtreme_s
metrics:
- wer
model-index:
- name: wav2vec2-XLS-R-Fleurs-demo-google-colab-Ezra_William_Prod2
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: xtreme_s
      type: xtreme_s
      config: fleurs.id_id
      split: test
      args: fleurs.id_id
    metrics:
    - name: Wer
      type: wer
      value: 1.0
---

# wav2vec2-XLS-R-Fleurs-demo-google-colab-Ezra_William_Prod2

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the xtreme_s dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9422
- Wer: 1.0
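
For completeness, here is a minimal inference sketch. It assumes the checkpoint and its processor were pushed to the Hub under the repo name from the model-index above (the repo id is an assumption); given the 1.0 WER, the output should not be expected to be usable in practice.

```python
# Hedged sketch: assumes the fine-tuned checkpoint and its processor were
# pushed to the Hub under this repo id (taken from the model-index above).
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

repo_id = "wav2vec2-XLS-R-Fleurs-demo-google-colab-Ezra_William_Prod2"  # assumed repo id
processor = Wav2Vec2Processor.from_pretrained(repo_id)
model = Wav2Vec2ForCTC.from_pretrained(repo_id)
model.eval()

def transcribe(audio):
    # `audio` is a 16 kHz mono waveform as a 1-D float array.
    inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(pred_ids)[0]
```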

## Model description

This checkpoint is [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) (wav2vec 2.0 large, pretrained cross-lingually on 53 languages) fine-tuned with a CTC head for Indonesian automatic speech recognition on the FLEURS subset of XTREME-S (`fleurs.id_id`).

## Intended uses & limitations

The evaluation WER is 1.0, meaning this run did not learn to produce usable transcriptions (a common cause with CTC fine-tuning is an overly high learning rate; this run used 0.005). The checkpoint is therefore suitable only as a training demo or debugging reference, not for practical Indonesian ASR.

## Training and evaluation data

The model was trained and evaluated on the `fleurs.id_id` (Indonesian FLEURS) configuration of the xtreme_s dataset; the reported metrics use the `test` split, as recorded in the model-index above.
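
A hedged sketch of loading that data: it assumes the dataset lives on the Hub as `google/xtreme_s` (the config and split names come from the model-index metadata).

```python
# Hedged sketch: loads the FLEURS Indonesian subset of XTREME-S.
# Assumes the dataset's Hub id is "google/xtreme_s"; the config and split
# come from the model-index metadata above.
from datasets import load_dataset

train_ds = load_dataset("google/xtreme_s", "fleurs.id_id", split="train")
eval_ds = load_dataset("google/xtreme_s", "fleurs.id_id", split="test")

# Each example is expected to expose a 16 kHz "audio" column and a
# "transcription" column with the reference text.
print(eval_ds[0]["transcription"])
```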

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 0.005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 600
- num_epochs: 60
- mixed_precision_training: Native AMP
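
A hedged reconstruction of the list above as `TrainingArguments` (Transformers 4.37 API); the `output_dir` is assumed, and values not listed in the card are left at their defaults.

```python
# Hedged reconstruction of the hyperparameters above as TrainingArguments
# (Transformers 4.37 API). Values not listed in the card are left at defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-XLS-R-Fleurs-demo-google-colab-Ezra_William_Prod2",  # assumed
    learning_rate=5e-3,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=600,
    num_train_epochs=60,
    fp16=True,  # "Native AMP" mixed-precision training
)
```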

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 4.1308        | 9.23  | 300  | 2.8692          | 1.0 |
| 2.8893        | 18.46 | 600  | 2.8467          | 1.0 |
| 2.8682        | 27.69 | 900  | 2.8660          | 1.0 |
| 2.84          | 36.92 | 1200 | 2.7426          | 1.0 |
| 2.5025        | 46.15 | 1500 | 2.1426          | 1.0 |
| 2.1729        | 55.38 | 1800 | 1.9422          | 1.0 |
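
The Wer column reports word error rate; a minimal sketch of how it is computed with the `evaluate` library (the exact metric wiring inside this run's `compute_metrics` is an assumption):

```python
# Hedged sketch of the WER computation behind the table above, using the
# `evaluate` library's "wer" metric. A WER of 1.0 means no reference words
# were recovered (every word substituted, deleted, or spuriously inserted).
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["", ""]                      # decoded model outputs (here: empty)
references = ["selamat pagi", "apa kabar"]  # ground-truth Indonesian transcriptions
print(wer_metric.compute(predictions=predictions, references=references))  # 1.0
```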


### Framework versions

- Transformers 4.37.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1