anton-l (HF staff) committed on
Commit
c6ff2d7
1 Parent(s): 12d5d32

update model card README.md

Files changed (1)
  1. README.md +78 -0
README.md ADDED
@@ -0,0 +1,78 @@
+ ---
+ language:
+ - tr
+ license: apache-2.0
+ tags:
+ - automatic-speech-recognition
+ - common_voice
+ - generated_from_trainer
+ model-index:
+ - name: wav2vec2-xls-r-common_voice-tr-ft-stream
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # wav2vec2-xls-r-common_voice-tr-ft-stream
+
+ This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the COMMON_VOICE - TR dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3519
+ - Wer: 0.2927
+ - Cer: 0.0694
+
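The card itself does not include a usage snippet; below is a minimal sketch using the 🤗 Transformers `pipeline` API for automatic speech recognition. The repository id is an assumption inferred from the committer and model name, and the audio file path is a placeholder.

```python
# Minimal usage sketch (not part of the original card).
from transformers import pipeline

# Wav2Vec2 CTC checkpoints work with the automatic-speech-recognition pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="anton-l/wav2vec2-xls-r-common_voice-tr-ft-stream",  # assumed repo id
)

# Transcribe a local 16 kHz mono WAV file (hypothetical path).
result = asr("sample_tr.wav")
print(result["text"])
```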
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0005
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 64
+ - total_eval_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - training_steps: 5000
+ - mixed_precision_training: Native AMP
+
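For reference, here is a hedged sketch of how the hyperparameters above could map onto 🤗 Transformers `TrainingArguments`. This is a reconstruction, not the original training script; the output directory and evaluation/save cadence are assumptions (the per-device batch size of 8 on 4 GPUs with gradient accumulation 2 gives the total train batch size of 64 listed above).

```python
# Reconstructed from the hyperparameter list above, not the original script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-common_voice-tr-ft-stream",  # placeholder
    learning_rate=5e-4,
    per_device_train_batch_size=8,   # 4 GPUs x 8 x grad_accum 2 = 64 total
    per_device_eval_batch_size=8,    # 4 GPUs x 8 = 32 total
    gradient_accumulation_steps=2,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                       # Native AMP mixed precision
    evaluation_strategy="steps",     # assumed: the results table logs eval every 500 steps
    eval_steps=500,
    save_steps=500,                  # assumed
)
```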
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
+ | 0.6768 | 9.01 | 500 | 0.4220 | 0.5143 | 0.1235 |
+ | 0.3801 | 19.01 | 1000 | 0.3303 | 0.4403 | 0.1055 |
+ | 0.3616 | 29.0 | 1500 | 0.3540 | 0.3716 | 0.0878 |
+ | 0.2334 | 39.0 | 2000 | 0.3666 | 0.3671 | 0.0842 |
+ | 0.3141 | 49.0 | 2500 | 0.3407 | 0.3373 | 0.0819 |
+ | 0.1926 | 58.01 | 3000 | 0.3886 | 0.3520 | 0.0867 |
+ | 0.1372 | 68.01 | 3500 | 0.3415 | 0.3189 | 0.0743 |
+ | 0.091 | 78.0 | 4000 | 0.3750 | 0.3164 | 0.0757 |
+ | 0.0893 | 88.0 | 4500 | 0.3559 | 0.2968 | 0.0712 |
+ | 0.095 | 98.0 | 5000 | 0.3519 | 0.2927 | 0.0694 |
+
+
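The Wer and Cer columns are word error rate and character error rate. A minimal sketch of computing these metrics with `datasets.load_metric`, consistent with the Datasets 1.18 version listed below, is shown here; the prediction and reference strings are made-up examples, not the model's actual outputs.

```python
# Illustrative only: toy Turkish strings, not real model transcriptions.
from datasets import load_metric

wer_metric = load_metric("wer")
cer_metric = load_metric("cer")

predictions = ["merhaba dünya"]           # hypothetical model transcription
references = ["merhaba dünya nasılsın"]   # hypothetical ground truth

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```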
+ ### Framework versions
+
+ - Transformers 4.16.0.dev0
+ - Pytorch 1.10.2
+ - Datasets 1.18.2
+ - Tokenizers 0.10.3