anton-l (HF staff) committed
Commit 3bd249f
1 Parent(s): b16e4ae

update model card README.md

Files changed (1):
  1. README.md (new file, +78 -0)

---
language:
- tr
license: apache-2.0
tags:
- automatic-speech-recognition
- common_voice
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-common_voice-tr-ft
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xls-r-common_voice-tr-ft

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the Common Voice Turkish (tr) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5806
- WER: 0.3998
- CER: 0.1053
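As a quick usage illustration (not part of the auto-generated card), the fine-tuned checkpoint can be loaded with the `transformers` ASR pipeline; the repository id below is an assumption based on the committer and the model name shown on this card.

```python
from transformers import pipeline

# Minimal inference sketch. The repository id is assumed from the committer
# ("anton-l") and the model name in this card; adjust it if the model is
# hosted under a different namespace. Audio decoding requires ffmpeg.
asr = pipeline(
    "automatic-speech-recognition",
    model="anton-l/wav2vec2-xls-r-common_voice-tr-ft",
)

# Transcribe a local Turkish audio clip (placeholder path, ideally 16 kHz mono).
result = asr("sample_tr.wav")
print(result["text"])
```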
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
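The exact fine-tuning script is not included in this card; the following is a hedged sketch of a `TrainingArguments` configuration that approximately mirrors the values listed above (argument names follow the `transformers` Trainer API; the Adam betas and epsilon in the list are the optimizer defaults).

```python
from transformers import TrainingArguments

# Approximate reconstruction of the listed hyperparameters, not the exact
# command used for this checkpoint. With 4 GPUs and gradient accumulation
# of 2, a per-device batch size of 8 yields the reported total train batch
# size of 64.
training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-common_voice-tr-ft",
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```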
### Training results

| Training Loss | Epoch  | Step | Validation Loss | WER    | CER    |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|
| 0.5369        | 17.0   | 500  | 0.6021          | 0.6366 | 0.1727 |
| 0.3542        | 34.0   | 1000 | 0.5265          | 0.4906 | 0.1278 |
| 0.1866        | 51.0   | 1500 | 0.5805          | 0.4768 | 0.1261 |
| 0.1674        | 68.01  | 2000 | 0.5336          | 0.4518 | 0.1186 |
| 0.19          | 86.0   | 2500 | 0.5676          | 0.4427 | 0.1151 |
| 0.0815        | 103.0  | 3000 | 0.5510          | 0.4268 | 0.1125 |
| 0.0545        | 120.0  | 3500 | 0.5608          | 0.4175 | 0.1099 |
| 0.0299        | 137.01 | 4000 | 0.5875          | 0.4222 | 0.1124 |
| 0.0267        | 155.0  | 4500 | 0.5882          | 0.4026 | 0.1063 |
| 0.025         | 172.0  | 5000 | 0.5806          | 0.3998 | 0.1053 |
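WER and CER above are word- and character-error rates; a minimal, illustrative sketch of computing them with the `jiwer` package is shown below (the card does not state which tooling produced the reported numbers).

```python
import jiwer

# Illustrative only: error rates for a single reference/hypothesis pair.
# The placeholder strings are not from the Common Voice test set.
reference = "merhaba dünya"   # ground-truth transcription
hypothesis = "merhaba dunya"  # model output

print("WER:", jiwer.wer(reference, hypothesis))
print("CER:", jiwer.cer(reference, hypothesis))
```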
### Framework versions

- Transformers 4.17.0.dev0
- PyTorch 1.10.2
- Datasets 1.18.2
- Tokenizers 0.10.3