bayartsogt committed on
Commit 276d773
1 Parent(s): 4174a12

update model card README.md

Files changed (1)
  1. README.md +61 -0
README.md ADDED
@@ -0,0 +1,61 @@
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- bayartsogt/mongolian_speech_commands
model-index:
- name: wav2vec2-base-mn-pretrain-42h-finetuned-speech-commands
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# wav2vec2-base-mn-pretrain-42h-finetuned-speech-commands

This model is a fine-tuned version of [bayartsogt/wav2vec2-base-mn-pretrain-42h](https://huggingface.co/bayartsogt/wav2vec2-base-mn-pretrain-42h) on the Mongolian Speech Commands dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5607
- eval_mn_acc: 0.9830
- eval_mn_f1: 0.9857
- eval_en_acc: 0.8914
- eval_en_f1: 0.8671
- eval_runtime: 109.6829
- eval_samples_per_second: 46.188
- eval_steps_per_second: 0.365
- epoch: 6.41
- step: 4352

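Since the usage sections below are still placeholders, here is a minimal inference sketch. It assumes the checkpoint is published under the repo id formed from the author and model-index name above, and that the `transformers` audio-classification pipeline applies, since the model is a wav2vec2 classifier; the audio path is a placeholder.

```python
# Minimal inference sketch (not part of the generated card).
from transformers import pipeline

# Assumed repo id: author + model-index name from the metadata above.
classifier = pipeline(
    "audio-classification",
    model="bayartsogt/wav2vec2-base-mn-pretrain-42h-finetuned-speech-commands",
)

# "command.wav" is a placeholder path; a 16 kHz mono recording of a spoken
# command is the expected input for a wav2vec2-base model.
predictions = classifier("command.wav")
print(predictions)  # list of {"label": ..., "score": ...} dicts, best first
```
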
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

Fine-tuning and evaluation used the [bayartsogt/mongolian_speech_commands](https://huggingface.co/datasets/bayartsogt/mongolian_speech_commands) dataset listed in the metadata above; a loading sketch follows.

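A minimal sketch (not part of the generated card) for inspecting the dataset named in the front matter; split names and features are not stated in the card, so the code only prints what is available.

```python
# Load the dataset referenced in the card's metadata and inspect it.
from datasets import load_dataset

ds = load_dataset("bayartsogt/mongolian_speech_commands")
print(ds)  # shows the available splits and their features
```
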
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8

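A hedged sketch mapping the listed hyperparameters onto `transformers.TrainingArguments`. The `output_dir` is assumed, and the card does not say whether the batch sizes are per device or total; per-device is assumed here.

```python
# Configuration sketch reproducing the hyperparameters listed above
# (assumptions flagged inline; not the author's original script).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-mn-pretrain-42h-finetuned-speech-commands",  # assumed
    learning_rate=5e-05,
    per_device_train_batch_size=128,  # card says train_batch_size: 128
    per_device_eval_batch_size=128,   # card says eval_batch_size: 128
    seed=42,
    adam_beta1=0.9,                   # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=8,
)
```
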
### Framework versions

- Transformers 4.30.2
- Pytorch 2.0.0
- Datasets 2.14.4
- Tokenizers 0.13.3