Isaacgv committed on
Commit 26ed8f2
1 Parent(s): a4bb72e

update model card README.md

Files changed (1)
  1. README.md +13 -57
README.md CHANGED
@@ -5,24 +5,9 @@ tags:
 - generated_from_trainer
 datasets:
 - marsyas/gtzan
-metrics:
-- accuracy
 model-index:
 - name: distilhubert-finetuned-gtzan
-  results:
-  - task:
-      name: Audio Classification
-      type: audio-classification
-    dataset:
-      name: GTZAN
-      type: marsyas/gtzan
-      config: all
-      split: train
-      args: all
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.81
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -32,8 +17,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.1842
-- Accuracy: 0.81
+- eval_loss: 0.6658
+- eval_accuracy: 0.87
+- eval_runtime: 45.4165
+- eval_samples_per_second: 2.202
+- eval_steps_per_second: 0.154
+- epoch: 14.95
+- step: 213
 
 ## Model description
 
@@ -52,51 +42,17 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0001
-- train_batch_size: 32
-- eval_batch_size: 32
+- learning_rate: 5e-05
+- train_batch_size: 16
+- eval_batch_size: 16
 - seed: 42
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 64
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
 - num_epochs: 30
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Accuracy |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 2.1069        | 1.0   | 29   | 2.0003          | 0.46     |
-| 1.8026        | 2.0   | 58   | 1.6073          | 0.59     |
-| 1.3938        | 3.0   | 87   | 1.2140          | 0.72     |
-| 1.0295        | 4.0   | 116  | 1.0740          | 0.64     |
-| 0.8339        | 5.0   | 145  | 0.9243          | 0.71     |
-| 0.6347        | 6.0   | 174  | 0.8837          | 0.72     |
-| 0.4137        | 7.0   | 203  | 0.8274          | 0.78     |
-| 0.3162        | 8.0   | 232  | 0.7596          | 0.82     |
-| 0.2055        | 9.0   | 261  | 0.8541          | 0.77     |
-| 0.2237        | 10.0  | 290  | 0.7220          | 0.78     |
-| 0.0601        | 11.0  | 319  | 0.7765          | 0.81     |
-| 0.0817        | 12.0  | 348  | 0.7603          | 0.86     |
-| 0.0196        | 13.0  | 377  | 0.8611          | 0.8      |
-| 0.0641        | 14.0  | 406  | 0.9281          | 0.8      |
-| 0.0253        | 15.0  | 435  | 1.2051          | 0.77     |
-| 0.0079        | 16.0  | 464  | 1.1073          | 0.81     |
-| 0.0055        | 17.0  | 493  | 1.0920          | 0.81     |
-| 0.012         | 18.0  | 522  | 1.1882          | 0.82     |
-| 0.0051        | 19.0  | 551  | 1.0023          | 0.81     |
-| 0.0047        | 20.0  | 580  | 1.2339          | 0.79     |
-| 0.0036        | 21.0  | 609  | 1.1471          | 0.79     |
-| 0.0033        | 22.0  | 638  | 1.1924          | 0.8      |
-| 0.0032        | 23.0  | 667  | 1.1064          | 0.81     |
-| 0.0028        | 24.0  | 696  | 1.1140          | 0.8      |
-| 0.0026        | 25.0  | 725  | 1.1344          | 0.81     |
-| 0.0163        | 26.0  | 754  | 1.1551          | 0.8      |
-| 0.0027        | 27.0  | 783  | 1.1843          | 0.81     |
-| 0.0025        | 28.0  | 812  | 1.1824          | 0.81     |
-| 0.0104        | 29.0  | 841  | 1.1636          | 0.8      |
-| 0.0047        | 30.0  | 870  | 1.1842          | 0.81     |
-
-
 ### Framework versions
 
 - Transformers 4.31.0
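
For context, the updated hyperparameter list corresponds to a `transformers` `TrainingArguments` configuration roughly as sketched below. This is a minimal reconstruction, not the author's actual training script; `output_dir` and `evaluation_strategy` are assumptions, and everything else is taken from the values in the diff.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the updated card values.
# output_dir and evaluation_strategy are assumptions; the rest comes
# from the hyperparameter list in the diff.
training_args = TrainingArguments(
    output_dir="distilhubert-finetuned-gtzan",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,  # 16 * 4 = total_train_batch_size of 64
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=30,
    evaluation_strategy="epoch",  # assumed; the card reports per-epoch eval
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default
    # optimizer, so no explicit optimizer arguments are needed.
)
```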
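Likewise, a hedged sketch of loading the fine-tuned checkpoint for inference. The repo id is inferred from the committer and model name, not stated in the diff, and the audio path is a placeholder.

```python
from transformers import pipeline

# Assumed repo id (committer "Isaacgv" + the model name from the card).
classifier = pipeline(
    "audio-classification",
    model="Isaacgv/distilhubert-finetuned-gtzan",
)

# Any local audio file works as input; "blues_clip.wav" is a placeholder.
predictions = classifier("blues_clip.wav")
print(predictions)  # ranked list of GTZAN genre labels with scores
```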