BanUrsus committed on
Commit
1183d59
1 Parent(s): 1604006

End of training

Files changed (2)
  1. README.md +93 -0
  2. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,93 @@
+ ---
+ license: apache-2.0
+ base_model: ntu-spml/distilhubert
+ tags:
+ - generated_from_trainer
+ metrics:
+ - accuracy
+ model-index:
+ - name: distilhubert-finetuned-gtzan-v2
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # distilhubert-finetuned-gtzan-v2
+
+ This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5528
+ - Accuracy: 0.86
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 4
+ - eval_batch_size: 4
+ - seed: 42
+ - gradient_accumulation_steps: 16
+ - total_train_batch_size: 64
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 30
+ - mixed_precision_training: Native AMP
+
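The hyperparameters listed above imply a few derived quantities that are worth checking against the reported values: the effective batch size per optimizer step, and the warmup length under the linear schedule. A minimal sketch, assuming roughly 420 total optimizer steps (the final step shown in the results table below; the `linear_lr` helper is illustrative, not a Trainer API):

```python
# Derived quantities from the hyperparameters listed above.
train_batch_size = 4
gradient_accumulation_steps = 16

# Effective batch size per optimizer step (4 * 16 = 64, matching
# the reported total_train_batch_size).
total_train_batch_size = train_batch_size * gradient_accumulation_steps

# lr_scheduler_warmup_ratio 0.1 over ~420 total steps => 42 warmup steps.
total_steps = 420
warmup_steps = int(0.1 * total_steps)

def linear_lr(step, base_lr=5e-5, warmup=warmup_steps, total=total_steps):
    """Linear warmup to base_lr, then linear decay to 0 — the shape
    of the 'linear' lr_scheduler_type used here (illustrative helper)."""
    if step < warmup:
        return base_lr * step / warmup
    return base_lr * max(0, total - step) / (total - warmup)
```

The learning rate thus peaks at 5e-05 at step 42 and decays linearly to zero by the final step.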
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 2.2975        | 1.0   | 14   | 2.2790          | 0.26     |
+ | 2.255         | 1.99  | 28   | 2.1863          | 0.39     |
+ | 2.0948        | 2.99  | 42   | 1.9637          | 0.43     |
+ | 1.847         | 3.98  | 56   | 1.7093          | 0.54     |
+ | 1.5798        | 4.98  | 70   | 1.5095          | 0.62     |
+ | 1.4674        | 5.97  | 84   | 1.3173          | 0.67     |
+ | 1.2969        | 6.97  | 98   | 1.1894          | 0.72     |
+ | 1.1472        | 7.96  | 112  | 1.0415          | 0.77     |
+ | 0.9815        | 8.96  | 126  | 1.0004          | 0.74     |
+ | 0.8838        | 9.96  | 140  | 0.8808          | 0.78     |
+ | 0.8294        | 10.95 | 154  | 0.8551          | 0.78     |
+ | 0.768         | 11.95 | 168  | 0.7939          | 0.79     |
+ | 0.6499        | 12.94 | 182  | 0.7467          | 0.81     |
+ | 0.6014        | 13.94 | 196  | 0.6995          | 0.82     |
+ | 0.5296        | 14.93 | 210  | 0.7152          | 0.79     |
+ | 0.4478        | 16.0  | 225  | 0.6561          | 0.83     |
+ | 0.4082        | 17.0  | 239  | 0.6399          | 0.84     |
+ | 0.374         | 17.99 | 253  | 0.6217          | 0.86     |
+ | 0.3282        | 18.99 | 267  | 0.5991          | 0.85     |
+ | 0.28          | 19.98 | 281  | 0.6043          | 0.84     |
+ | 0.2754        | 20.98 | 295  | 0.5831          | 0.87     |
+ | 0.2409        | 21.97 | 309  | 0.5680          | 0.85     |
+ | 0.2172        | 22.97 | 323  | 0.5729          | 0.85     |
+ | 0.1855        | 23.96 | 337  | 0.5645          | 0.86     |
+ | 0.1729        | 24.96 | 351  | 0.5576          | 0.86     |
+ | 0.161         | 25.96 | 365  | 0.5378          | 0.86     |
+ | 0.1586        | 26.95 | 379  | 0.5662          | 0.86     |
+ | 0.1452        | 27.95 | 393  | 0.5575          | 0.87     |
+ | 0.1444        | 28.94 | 407  | 0.5491          | 0.86     |
+ | 0.1343        | 29.87 | 420  | 0.5528          | 0.86     |
+
+
+ ### Framework versions
+
+ - Transformers 4.39.2
+ - Pytorch 1.13.0+cu117
+ - Datasets 2.16.1
+ - Tokenizers 0.15.1
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:93deed8e6b6c410b358b51cc344437214a0b4f976fed3f5783d4cfae96a003b0
+ oid sha256:909151ab30366997f2a7a92974eef3c85189787337fcced26dfc4df1de0d3cb0
  size 94771680
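The model.safetensors change touches only the sha256 `oid` in the git-LFS pointer file; the `size` (94771680 bytes) is unchanged, which is what you would expect when retrained weights of the same architecture overwrite the file. A minimal sketch of reading such a pointer file (the `parse_lfs_pointer` helper is illustrative, not part of git-lfs):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a git-LFS pointer file into its key/value fields.

    Pointer files are small text stubs of the form:
        version https://git-lfs.github.com/spec/v1
        oid sha256:<hex digest>
        size <bytes>
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new pointer content from the diff above.
new_pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:909151ab30366997f2a7a92974eef3c85189787337fcced26dfc4df1de0d3cb0
size 94771680
"""

info = parse_lfs_pointer(new_pointer)
print(info["oid"])
print(int(info["size"]))  # 94771680
```

Comparing the `oid` field across revisions is enough to tell whether the underlying weights actually changed in a commit.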