franziskaM committed 05c4690 (parent: 2041977): update model card README.md

Files changed (1): README.md added (+116 lines)
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: b20-wav2vec2-large-xls-r-romansh-colab
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: common_voice_13_0
      type: common_voice_13_0
      config: rm-vallader
      split: test
      args: rm-vallader
    metrics:
    - name: Wer
      type: wer
      value: 0.31811830461108526
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# b20-wav2vec2-large-xls-r-romansh-colab

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3738
- Wer: 0.3181

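The card does not include a usage snippet, so here is a minimal inference sketch. The hub id `franziskaM/b20-wav2vec2-large-xls-r-romansh-colab` and the audio filename are assumptions; adjust them to your setup.

```python
from transformers import pipeline

# Assumed hub id for this checkpoint; replace with the correct repo path if it differs.
asr = pipeline(
    "automatic-speech-recognition",
    model="franziskaM/b20-wav2vec2-large-xls-r-romansh-colab",
)

# XLS-R checkpoints expect 16 kHz mono audio; "sample.wav" is a placeholder path.
print(asr("sample.wav")["text"])
```
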
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
- mixed_precision_training: Native AMP

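For reference, the list above maps roughly onto a `transformers.TrainingArguments` configuration like the sketch below. The `output_dir` value and anything not listed above (e.g. logging or saving strategy) are assumptions, not taken from the actual training script.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the listed hyperparameters; Adam betas=(0.9, 0.999)
# and epsilon=1e-08 are the library defaults and match the values reported above.
training_args = TrainingArguments(
    output_dir="b20-wav2vec2-large-xls-r-romansh-colab",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 4 * 2 = 8
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=30,
    fp16=True,  # mixed precision training via native AMP
)
```
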
### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 9.6653 | 0.76 | 100 | 3.2423 | 1.0 |
| 3.0224 | 1.52 | 200 | 3.0321 | 1.0 |
| 2.969 | 2.29 | 300 | 3.0174 | 1.0 |
| 2.964 | 3.05 | 400 | 2.9531 | 1.0 |
| 2.9488 | 3.81 | 500 | 2.9441 | 1.0 |
| 2.962 | 4.58 | 600 | 2.9383 | 1.0 |
| 2.9646 | 5.34 | 700 | 2.9377 | 1.0 |
| 2.9411 | 6.11 | 800 | 2.9303 | 1.0 |
| 2.9313 | 6.87 | 900 | 2.9264 | 1.0 |
| 2.9327 | 7.63 | 1000 | 2.9211 | 1.0 |
| 2.9574 | 8.4 | 1100 | 2.9145 | 1.0 |
| 2.9227 | 9.16 | 1200 | 2.9034 | 1.0 |
| 2.8916 | 9.92 | 1300 | 2.8764 | 1.0 |
| 2.8311 | 10.68 | 1400 | 2.5611 | 0.9995 |
| 2.0497 | 11.45 | 1500 | 1.1256 | 0.8784 |
| 1.2359 | 12.21 | 1600 | 0.7668 | 0.7143 |
| 0.9607 | 12.97 | 1700 | 0.6340 | 0.6388 |
| 0.804 | 13.74 | 1800 | 0.5658 | 0.5806 |
| 0.693 | 14.5 | 1900 | 0.5147 | 0.5389 |
| 0.6403 | 15.27 | 2000 | 0.4711 | 0.4797 |
| 0.5716 | 16.03 | 2100 | 0.4298 | 0.4520 |
| 0.5124 | 16.79 | 2200 | 0.4353 | 0.4313 |
| 0.5104 | 17.56 | 2300 | 0.3991 | 0.3952 |
| 0.4416 | 18.32 | 2400 | 0.4012 | 0.3933 |
| 0.4419 | 19.08 | 2500 | 0.3945 | 0.3687 |
| 0.406 | 19.84 | 2600 | 0.4003 | 0.3675 |
| 0.3946 | 20.61 | 2700 | 0.3901 | 0.3579 |
| 0.379 | 21.37 | 2800 | 0.3963 | 0.3537 |
| 0.3663 | 22.14 | 2900 | 0.3826 | 0.3435 |
| 0.3425 | 22.9 | 3000 | 0.3850 | 0.3435 |
| 0.3396 | 23.66 | 3100 | 0.3852 | 0.3405 |
| 0.3041 | 24.43 | 3200 | 0.3771 | 0.3265 |
| 0.3194 | 25.19 | 3300 | 0.3796 | 0.3265 |
| 0.312 | 25.95 | 3400 | 0.3734 | 0.3228 |
| 0.313 | 26.71 | 3500 | 0.3864 | 0.3270 |
| 0.3039 | 27.48 | 3600 | 0.3734 | 0.3149 |
| 0.2929 | 28.24 | 3700 | 0.3785 | 0.3223 |
| 0.2884 | 29.01 | 3800 | 0.3734 | 0.3160 |
| 0.2812 | 29.77 | 3900 | 0.3738 | 0.3181 |

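The Wer column above is the word error rate on the validation set. As a minimal illustration of how such a score can be computed with the `evaluate` library (the transcripts below are invented placeholders, not model output):

```python
import evaluate

# Word error rate = (substitutions + insertions + deletions) / reference word count.
wer_metric = evaluate.load("wer")

predictions = ["this is a placeholder transcript"]   # hypothetical model output
references = ["this is the placeholder reference"]   # hypothetical ground truth
print(wer_metric.compute(predictions=predictions, references=references))
```
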
### Framework versions

- Transformers 4.26.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3