---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_13_0
metrics:
- wer
model-index:
- name: b21-wav2vec2-large-xls-r-romansh-colab
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: common_voice_13_0
      type: common_voice_13_0
      config: rm-vallader
      split: test
      args: rm-vallader
    metrics:
    - name: Wer
      type: wer
      value: 0.6304145319049836
---

# b21-wav2vec2-large-xls-r-romansh-colab

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_13_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8091
- Wer: 0.6304

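The reported Wer is the word error rate: the word-level edit distance (substitutions, insertions, deletions) between the predicted transcript and the reference, divided by the number of reference words. A minimal sketch of the metric; evaluation libraries such as `evaluate` or `jiwer` compute the same quantity:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over word sequences.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (r != h),  # substitution (free if words match)
            ))
        prev = curr
    return prev[-1] / len(ref)

wer("a b c d", "a x c d")  # one substitution over four reference words -> 0.25
```

A WER of 0.6304 therefore means roughly 63 word-level errors per 100 reference words on the test split.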
## Model description

This model fine-tunes the 300M-parameter multilingual [XLS-R](https://huggingface.co/facebook/wav2vec2-xls-r-300m) wav2vec 2.0 checkpoint for automatic speech recognition of Romansh (Vallader variety), adding a CTC head on top of the pretrained speech encoder.

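For reference, a fine-tuned wav2vec2 CTC checkpoint like this one is typically used as follows. This is a sketch, not a verified snippet: the repo id below is an assumption about where this checkpoint is published, and the audio must be supplied as 16 kHz mono samples (requires `transformers` and `torch`):

```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hypothetical repo id -- replace with the actual location of this checkpoint.
MODEL_ID = "franziskaM/b21-wav2vec2-large-xls-r-romansh-colab"

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.eval()

def transcribe(speech):
    """`speech` is a 1-D float array of 16 kHz audio (e.g. loaded via torchaudio)."""
    inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)  # greedy CTC decoding
    return processor.batch_decode(pred_ids)[0]
```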
## Intended uses & limitations

The model is intended for transcribing Romansh Vallader speech; like the base XLS-R model, it expects 16 kHz input audio. With a test-set WER of roughly 63%, transcripts will contain frequent errors, so the model is better suited to experimentation and further fine-tuning than to production use.

## Training and evaluation data

The model was trained and evaluated on the `rm-vallader` configuration of Common Voice 13.0; the reported metrics are computed on its `test` split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
- mixed_precision_training: Native AMP

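Two of these values are worth unpacking. The total_train_batch_size of 8 is the effective batch: train_batch_size 4 × gradient_accumulation_steps 2. And `lr_scheduler_type: linear` with 100 warmup steps means the learning rate ramps linearly from 0 to 4e-4 over the first 100 optimizer steps, then decays linearly back to 0 by the final step. A minimal sketch of that schedule (the 3900 total steps are taken from the last row of the results table; the function mirrors the shape of Transformers' `get_linear_schedule_with_warmup`):

```python
def linear_schedule_lr(step: int, base_lr: float = 4e-4,
                       warmup_steps: int = 100, total_steps: int = 3900) -> float:
    """Linear warmup to base_lr, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

linear_schedule_lr(50)    # mid-warmup: 2e-4
linear_schedule_lr(100)   # peak learning rate: 4e-4
linear_schedule_lr(3900)  # end of training: 0.0
```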
### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.5829        | 0.76  | 100  | 2.9564          | 1.0    |
| 2.9568        | 1.52  | 200  | 3.0768          | 1.0    |
| 2.9578        | 2.29  | 300  | 3.0654          | 1.0    |
| 2.957         | 3.05  | 400  | 2.9377          | 1.0    |
| 2.9419        | 3.81  | 500  | 2.9408          | 1.0    |
| 2.9567        | 4.58  | 600  | 2.9395          | 1.0    |
| 2.9625        | 5.34  | 700  | 2.9388          | 1.0    |
| 2.9395        | 6.11  | 800  | 2.9374          | 1.0    |
| 2.9285        | 6.87  | 900  | 2.9240          | 1.0    |
| 2.9187        | 7.63  | 1000 | 2.9057          | 1.0    |
| 2.9251        | 8.4   | 1100 | 2.8985          | 1.0    |
| 2.9033        | 9.16  | 1200 | 2.8942          | 1.0    |
| 2.8877        | 9.92  | 1300 | 2.8917          | 1.0    |
| 2.8586        | 10.68 | 1400 | 2.7719          | 1.0    |
| 2.5777        | 11.45 | 1500 | 2.2424          | 1.0    |
| 1.9243        | 12.21 | 1600 | 1.7068          | 0.9772 |
| 1.4534        | 12.97 | 1700 | 1.2780          | 0.9585 |
| 1.1793        | 13.74 | 1800 | 1.1482          | 0.9360 |
| 1.0026        | 14.5  | 1900 | 1.0673          | 0.8852 |
| 0.8879        | 15.27 | 2000 | 0.9651          | 0.8433 |
| 0.7933        | 16.03 | 2100 | 0.8973          | 0.8216 |
| 0.6895        | 16.79 | 2200 | 0.8396          | 0.8034 |
| 0.6531        | 17.56 | 2300 | 0.8131          | 0.7713 |
| 0.5753        | 18.32 | 2400 | 0.8388          | 0.7531 |
| 0.5621        | 19.08 | 2500 | 0.7844          | 0.7632 |
| 0.5076        | 19.84 | 2600 | 0.7629          | 0.7485 |
| 0.4672        | 20.61 | 2700 | 0.7777          | 0.7497 |
| 0.443         | 21.37 | 2800 | 0.8001          | 0.7292 |
| 0.4129        | 22.14 | 2900 | 0.7902          | 0.7094 |
| 0.3767        | 22.9  | 3000 | 0.7569          | 0.6784 |
| 0.357         | 23.66 | 3100 | 0.7726          | 0.6903 |
| 0.3378        | 24.43 | 3200 | 0.8016          | 0.6882 |
| 0.3199        | 25.19 | 3300 | 0.7854          | 0.6677 |
| 0.3144        | 25.95 | 3400 | 0.7792          | 0.6509 |
| 0.3025        | 26.71 | 3500 | 0.8157          | 0.6695 |
| 0.2919        | 27.48 | 3600 | 0.8215          | 0.6633 |
| 0.2762        | 28.24 | 3700 | 0.8167          | 0.6500 |
| 0.2679        | 29.01 | 3800 | 0.8144          | 0.6311 |
| 0.2671        | 29.77 | 3900 | 0.8091          | 0.6304 |

### Framework versions

- Transformers 4.26.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3