YCHuang2112 committed on
Commit
3092223
1 Parent(s): f031fd3

update model card README.md

Files changed (1): README.md (+20 −40)
README.md CHANGED
```diff
@@ -8,20 +8,7 @@ metrics:
 - accuracy
 model-index:
 - name: distilhubert-finetuned-gtzan
-  results:
-  - task:
-      name: Audio Classification
-      type: audio-classification
-    dataset:
-      name: GTZAN
-      type: marsyas/gtzan
-      config: all
-      split: train
-      args: all
-    metrics:
-    - name: Accuracy
-      type: accuracy
-      value: 0.86
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,8 +18,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.8411
-- Accuracy: 0.86
+- Loss: 0.6949
+- Accuracy: 0.88
 
 ## Model description
 
@@ -55,40 +42,33 @@ The following hyperparameters were used during training:
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 20
+- num_epochs: 10
+- mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|
-| 2.1621        | 1.0   | 113  | 2.0327          | 0.47     |
-| 1.4984        | 2.0   | 226  | 1.3717          | 0.69     |
-| 1.0376        | 3.0   | 339  | 1.0219          | 0.74     |
-| 0.8447        | 4.0   | 452  | 0.8923          | 0.76     |
-| 0.632         | 5.0   | 565  | 0.5939          | 0.79     |
-| 0.3592        | 6.0   | 678  | 0.6146          | 0.83     |
-| 0.408         | 7.0   | 791  | 0.4208          | 0.9      |
-| 0.0661        | 8.0   | 904  | 0.4568          | 0.88     |
-| 0.1336        | 9.0   | 1017 | 0.5712          | 0.86     |
-| 0.062         | 10.0  | 1130 | 0.6705          | 0.84     |
-| 0.0069        | 11.0  | 1243 | 0.6850          | 0.85     |
-| 0.1683        | 12.0  | 1356 | 0.6070          | 0.87     |
-| 0.0044        | 13.0  | 1469 | 0.8509          | 0.85     |
-| 0.0036        | 14.0  | 1582 | 0.8891          | 0.85     |
-| 0.0032        | 15.0  | 1695 | 0.6524          | 0.87     |
-| 0.0028        | 16.0  | 1808 | 0.8631          | 0.84     |
-| 0.1188        | 17.0  | 1921 | 0.8491          | 0.86     |
-| 0.0024        | 18.0  | 2034 | 0.7876          | 0.86     |
-| 0.0022        | 19.0  | 2147 | 0.7970          | 0.85     |
-| 0.0022        | 20.0  | 2260 | 0.8411          | 0.86     |
+| 0.1595        | 0.99  | 28   | 0.5827          | 0.86     |
+| 0.122         | 1.98  | 56   | 0.5915          | 0.86     |
+| 0.0598        | 2.97  | 84   | 0.6342          | 0.86     |
+| 0.0233        | 4.0   | 113  | 0.6145          | 0.85     |
+| 0.0163        | 4.99  | 141  | 0.6766          | 0.86     |
+| 0.0125        | 5.98  | 169  | 0.6286          | 0.89     |
+| 0.0091        | 6.97  | 197  | 0.7157          | 0.86     |
+| 0.0088        | 8.0   | 226  | 0.6633          | 0.89     |
+| 0.0074        | 8.99  | 254  | 0.7196          | 0.87     |
+| 0.0074        | 9.91  | 280  | 0.6949          | 0.88     |
 
 
 ### Framework versions
 
-- Transformers 4.31.0.dev0
-- Pytorch 2.0.1+cu117
-- Datasets 2.13.1
+- Transformers 4.28.0
+- Pytorch 2.0.1+cu118
+- Datasets 2.14.4
 - Tokenizers 0.13.3
```
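One way to read the hyperparameter changes in this commit: the new `gradient_accumulation_steps: 4` accounts for both the reported `total_train_batch_size: 32` and the drop from 113 steps per epoch (old run) to 28 (new run). A minimal sketch of that arithmetic, with hypothetical helper names and no Trainer dependency:

```python
# Sanity-check the arithmetic linking the old and new training hyperparameters.
# Helper names are illustrative, not part of any library API.

def effective_batch_size(per_device: int, grad_accum: int) -> int:
    """Total batch size seen by each optimizer update under gradient accumulation."""
    return per_device * grad_accum

def optimizer_steps_per_epoch(micro_batches: int, grad_accum: int) -> int:
    """Full optimizer steps per epoch when each step spans grad_accum micro-batches."""
    return micro_batches // grad_accum

# train_batch_size 8 with gradient_accumulation_steps 4 gives the card's
# total_train_batch_size of 32:
assert effective_batch_size(8, 4) == 32

# The old run logged 113 steps per epoch at batch size 8 (no accumulation);
# accumulating over 4 micro-batches is consistent with the new run's
# 28 steps per epoch:
assert optimizer_steps_per_epoch(113, 4) == 28
print("hyperparameter arithmetic checks out")
```

This also explains the fractional epoch values (0.99, 1.98, ...) in the new results table: 113 micro-batches per epoch is not evenly divisible by 4, so evaluation at step 28 lands just short of a full epoch.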