---
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: trillsson3-ft-keyword-spotting-14
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# trillsson3-ft-keyword-spotting-14

This model is a fine-tuned version of [vumichien/nonsemantic-speech-trillsson3](https://huggingface.co/vumichien/nonsemantic-speech-trillsson3) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3072
- Accuracy: 0.9089
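
The card gives no usage instructions, but a checkpoint fine-tuned this way is typically queried through the Transformers audio-classification pipeline. The sketch below assumes the checkpoint is compatible with that pipeline and uses a placeholder audio path; it is an illustration, not documented usage.

```python
# Minimal inference sketch (assumption: the checkpoint loads with the standard
# audio-classification pipeline; a custom backbone may need extra model code).
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="vumichien/trillsson3-ft-keyword-spotting-14",
)

# "keyword.wav" is a placeholder path to a short 16 kHz mono recording.
predictions = classifier("keyword.wav")
print(predictions)  # e.g. [{"label": "...", "score": ...}, ...]
```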

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
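
The card does not specify which SUPERB configuration was used. Given the `superb` dataset tag and the keyword-spotting task in the model name, the data is presumably the SUPERB keyword spotting (`ks`) subset; the sketch below loads it under that assumption.

```python
# Assumption: training/evaluation data is the SUPERB keyword spotting ("ks") config.
from datasets import load_dataset

superb_ks = load_dataset("superb", "ks")
print(superb_ks)                                   # train / validation / test splits
print(superb_ks["train"].features["label"].names)  # keyword classes
```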

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the `TrainingArguments` sketch after this list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 64
- seed: 0
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20.0
- mixed_precision_training: Native AMP
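
As a rough guide to reproduction, these settings map onto `transformers.TrainingArguments` as sketched below. A single training device is assumed, so a per-device batch size of 16 with 2 gradient-accumulation steps yields the reported total train batch size of 32; the Adam betas and epsilon listed above are the Trainer defaults.

```python
# Sketch under the assumptions stated above; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="trillsson3-ft-keyword-spotting-14",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=2,   # 16 * 2 = 32 effective train batch size
    seed=0,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,                # 10% of total steps used for warmup
    num_train_epochs=20.0,
    fp16=True,                       # "Native AMP" mixed-precision training
)
```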

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.2824        | 1.0   | 1597  | 0.7818          | 0.6892   |
| 0.8003        | 2.0   | 3194  | 0.4443          | 0.8735   |
| 0.7232        | 3.0   | 4791  | 0.3728          | 0.8833   |
| 0.73          | 4.0   | 6388  | 0.3465          | 0.8973   |
| 0.7015        | 5.0   | 7985  | 0.3211          | 0.9109   |
| 0.6981        | 6.0   | 9582  | 0.3200          | 0.9081   |
| 0.6807        | 7.0   | 11179 | 0.3209          | 0.9059   |
| 0.6873        | 8.0   | 12776 | 0.3206          | 0.9022   |
| 0.6416        | 9.0   | 14373 | 0.3124          | 0.9057   |
| 0.6698        | 10.0  | 15970 | 0.3288          | 0.8950   |
| 0.716         | 11.0  | 17567 | 0.3147          | 0.8998   |
| 0.6514        | 12.0  | 19164 | 0.3034          | 0.9112   |
| 0.6513        | 13.0  | 20761 | 0.3091          | 0.9092   |
| 0.652         | 14.0  | 22358 | 0.3056          | 0.9100   |
| 0.7105        | 15.0  | 23955 | 0.3015          | 0.9150   |
| 0.6337        | 16.0  | 25552 | 0.3070          | 0.9091   |
| 0.63          | 17.0  | 27149 | 0.3018          | 0.9135   |
| 0.6672        | 18.0  | 28746 | 0.3084          | 0.9088   |
| 0.6479        | 19.0  | 30343 | 0.3060          | 0.9101   |
| 0.6658        | 20.0  | 31940 | 0.3072          | 0.9089   |

### Framework versions

- Transformers 4.23.0.dev0
- PyTorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1