adrianSauer committed
Commit 78a69b0
1 Parent(s): f9730ed

End of training

Files changed (2):
  1. README.md +91 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,91 @@
---
library_name: transformers
language:
- gn
license: apache-2.0
base_model: glob-asr/wav2vec2-large-xls-r-300m-guarani-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_16_1
metrics:
- wer
model-index:
- name: Common Voice 16
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice 16
      type: mozilla-foundation/common_voice_16_1
      config: gn
      split: None
      args: gn
    metrics:
    - name: Wer
      type: wer
      value: 49.7001998667555
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Common Voice 16

This model is a fine-tuned version of [glob-asr/wav2vec2-large-xls-r-300m-guarani-small](https://huggingface.co/glob-asr/wav2vec2-large-xls-r-300m-guarani-small) on the Guarani (`gn`) configuration of the Common Voice 16.1 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4335
- Wer: 49.7002

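Since the usage sections below are still placeholders, here is a minimal inference sketch. It assumes the fine-tuned checkpoint is published on the Hub under a repo id shown here as the placeholder `<this-repo-id>` (substitute the actual id) and that, like its base model, it is a wav2vec2 CTC checkpoint.

```python
from transformers import pipeline

# Placeholder repo id: replace "<this-repo-id>" with the actual Hub id of this checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="<this-repo-id>",
)

# The pipeline decodes the clip and resamples it to the model's 16 kHz input
# (ffmpeg is needed for compressed formats such as Common Voice MP3s).
result = asr("guarani_clip.mp3")  # hypothetical local audio file
print(result["text"])
```

For lower-level control, `Wav2Vec2ForCTC` and `Wav2Vec2Processor` can be used instead of the pipeline.
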
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

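The front matter points at the Guarani (`gn`) configuration of `mozilla-foundation/common_voice_16_1`, though the evaluation split is not recorded (`split: None` above). A minimal loading sketch with the `datasets` library, assuming the standard Common Voice splits:

```python
from datasets import load_dataset, Audio

# Assumption: the standard "train" split; accepting the dataset's terms on the Hub
# (and logging in with a token) may be required before downloading.
cv_gn = load_dataset("mozilla-foundation/common_voice_16_1", "gn", split="train")

# wav2vec2 models consume 16 kHz audio, so resample the Common Voice clips.
cv_gn = cv_gn.cast_column("audio", Audio(sampling_rate=16_000))
print(cv_gn[0]["sentence"])
```
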
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 3000
- training_steps: 5000
- mixed_precision_training: Native AMP

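The list above maps directly onto `transformers.TrainingArguments`. Below is a minimal sketch of that mapping, assuming the listed train/eval batch sizes are per-device values on a single device (so 8 × 2 accumulation steps gives the total train batch size of 16) and that the Adam betas/epsilon are the library defaults; `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-guarani-cv16",   # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,        # effective train batch size: 8 * 2 = 16
    seed=42,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=3000,
    max_steps=5000,
    fp16=True,                            # "Native AMP" mixed precision
)
```
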
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 1.258         | 0.4955 | 500  | 0.3710          | 53.1646 |
| 0.921         | 0.9911 | 1000 | 0.3282          | 49.2338 |
| 0.7458        | 1.4866 | 1500 | 0.2940          | 46.7022 |
| 0.6763        | 1.9822 | 2000 | 0.2628          | 44.9700 |
| 0.568         | 2.4777 | 2500 | 0.2616          | 43.3711 |
| 0.5414        | 2.9732 | 3000 | 0.2504          | 39.8401 |
| 0.484         | 3.4688 | 3500 | 0.2462          | 41.0393 |
| 0.5281        | 3.9643 | 4000 | 0.3584          | 43.5043 |
| 0.5756        | 4.4599 | 4500 | 0.4220          | 44.3038 |
| 0.721         | 4.9554 | 5000 | 0.4335          | 49.7002 |

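The Wer column reports word error rate as a percentage. A minimal sketch of computing such a value with the `evaluate` library (an assumption about tooling; the card does not record which WER implementation the Trainer used):

```python
import evaluate

wer_metric = evaluate.load("wer")

# Hypothetical reference transcript and model prediction, for illustration only.
wer = wer_metric.compute(
    predictions=["che aha ógape"],
    references=["che aha ogape"],
)
print(f"WER: {100 * wer:.4f}%")  # metric returns a fraction; the card reports percent
```
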
### Framework versions

- Transformers 4.44.1
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:22c052bfc40cee8607c51dece78518d4e1757282772454dcbf778f0c72979998
+ oid sha256:fff637bb095ec520410f5a280bff3532fbf8938f995bfae2624e5f07bfcd3159
  size 1261996080