infinitejoy committed
Commit b67dd98
Parent(s): f110a79

update model card README.md

Files changed (1): README.md (+84 -0)
---
language:
- ba
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-bashkir
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# wav2vec2-large-xls-r-300m-bashkir

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - BA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1892
- Wer: 0.2421
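
Transcription with this checkpoint is not documented in the card itself; the following is a minimal, untested sketch. The repo id `infinitejoy/wav2vec2-large-xls-r-300m-bashkir` and the local audio file name are assumptions for illustration, not details taken from this card.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "infinitejoy/wav2vec2-large-xls-r-300m-bashkir"  # assumed repo id
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load an audio file, downmix to mono, and resample to the 16 kHz rate XLS-R expects.
speech, sample_rate = torchaudio.load("sample.wav")  # placeholder file name
speech = torchaudio.functional.resample(speech.mean(dim=0), sample_rate, 16_000)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token per frame, then collapse repeats/blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```

The same result can also be obtained with `pipeline("automatic-speech-recognition", model=model_id)`, which wraps the processor/model pair shown here.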

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
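
The data preparation is not described here, but the metadata above points to the Bashkir (`ba`) subset of Common Voice 7.0. A rough loading sketch with the `datasets` library follows; the split choice (`train+validation` for training, `test` for evaluation) and the need for a Hub access token are assumptions about the setup, not recorded details.

```python
from datasets import Audio, load_dataset

# Common Voice 7.0 is gated on the Hub, so an access token is typically required (assumption).
train_data = load_dataset(
    "mozilla-foundation/common_voice_7_0", "ba", split="train+validation", use_auth_token=True
)
eval_data = load_dataset(
    "mozilla-foundation/common_voice_7_0", "ba", split="test", use_auth_token=True
)

# XLS-R expects 16 kHz input; Common Voice ships higher-rate clips, so resample on the fly.
train_data = train_data.cast_column("audio", Audio(sampling_rate=16_000))
eval_data = eval_data.cast_column("audio", Audio(sampling_rate=16_000))

print(train_data[0]["sentence"])
print(train_data[0]["audio"]["array"].shape)
```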

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 10.0
- mixed_precision_training: Native AMP
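
These values map directly onto `transformers.TrainingArguments`. A sketch of the equivalent configuration is below; `output_dir` and the evaluation/save cadence are illustrative guesses, not values recorded in this card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-bashkir",  # illustrative path
    learning_rate=3e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    num_train_epochs=10.0,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="steps",  # cadence guessed from the 2000-step rows in the table below
    eval_steps=2000,
    save_steps=2000,
)
```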

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.4792 | 0.5 | 2000 | 0.4598 | 0.5404 |
| 1.449 | 1.0 | 4000 | 0.4650 | 0.5610 |
| 1.3742 | 1.49 | 6000 | 0.4001 | 0.4977 |
| 1.3375 | 1.99 | 8000 | 0.3916 | 0.4894 |
| 1.2961 | 2.49 | 10000 | 0.3641 | 0.4569 |
| 1.2714 | 2.99 | 12000 | 0.3491 | 0.4488 |
| 1.2399 | 3.48 | 14000 | 0.3151 | 0.3986 |
| 1.2067 | 3.98 | 16000 | 0.3081 | 0.3923 |
| 1.1842 | 4.48 | 18000 | 0.2875 | 0.3703 |
| 1.1644 | 4.98 | 20000 | 0.2840 | 0.3670 |
| 1.161 | 5.48 | 22000 | 0.2790 | 0.3597 |
| 1.1303 | 5.97 | 24000 | 0.2552 | 0.3272 |
| 1.0874 | 6.47 | 26000 | 0.2405 | 0.3142 |
| 1.0613 | 6.97 | 28000 | 0.2352 | 0.3055 |
| 1.0498 | 7.47 | 30000 | 0.2249 | 0.2910 |
| 1.021 | 7.96 | 32000 | 0.2118 | 0.2752 |
| 1.0002 | 8.46 | 34000 | 0.2046 | 0.2662 |
| 0.9762 | 8.96 | 36000 | 0.1969 | 0.2530 |
| 0.9568 | 9.46 | 38000 | 0.1917 | 0.2449 |
| 0.953 | 9.96 | 40000 | 0.1893 | 0.2425 |
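
The Wer column is the standard word error rate. A tiny sketch of computing it with the `datasets` metric API available in the versions listed below (the strings are placeholders, not real model output):

```python
from datasets import load_metric

wer_metric = load_metric("wer")  # uses the jiwer package under the hood

# Placeholder strings; real evaluation compares model transcriptions against
# the reference sentences of the Common Voice test split.
predictions = ["predicted transcription"]
references = ["reference transcription"]
print(wer_metric.compute(predictions=predictions, references=references))  # 1 substitution / 2 words = 0.5
```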

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0