DavidFM43 committed
Commit d134949
1 Parent(s): 25475b4

update model card README.md

Files changed (1)
  1. README.md +16 -12
README.md CHANGED
@@ -18,7 +18,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.6192
+ - Loss: 0.6925
 - Accuracy: 0.83
 
 ## Model description
@@ -38,7 +38,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
- - learning_rate: 0.00013888307813432008
+ - learning_rate: 0.0001115511981046745
 - train_batch_size: 4
 - eval_batch_size: 4
 - seed: 42
@@ -47,22 +47,26 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 5
+ - num_epochs: 9
 
 ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Accuracy |
- |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | 1.203 | 1.0 | 112 | 1.2880 | 0.56 |
- | 0.7766 | 2.0 | 225 | 0.9546 | 0.74 |
- | 0.7045 | 3.0 | 337 | 0.9271 | 0.69 |
- | 0.462 | 4.0 | 450 | 0.8588 | 0.79 |
- | 0.236 | 4.98 | 560 | 0.6192 | 0.83 |
+ | Training Loss | Epoch | Step | Accuracy | Validation Loss |
+ |:-------------:|:-----:|:----:|:--------:|:---------------:|
+ | 1.278 | 1.0 | 112 | 0.57 | 1.3298 |
+ | 0.8315 | 2.0 | 225 | 0.73 | 0.9432 |
+ | 0.7709 | 3.0 | 337 | 0.72 | 0.9310 |
+ | 0.5427 | 4.0 | 450 | 0.72 | 0.8738 |
+ | 0.2645 | 4.98 | 560 | 0.79 | 0.6648 |
+ | 0.245 | 6.0 | 672 | 0.83 | 0.6147 |
+ | 0.1331 | 6.99 | 784 | 0.83 | 0.6305 |
+ | 0.1863 | 8.0 | 896 | 0.84 | 0.6356 |
+ | 0.0843 | 8.99 | 1008 | 0.83 | 0.6925 |
 
 
 ### Framework versions
 
 - Transformers 4.30.2
- - Pytorch 2.0.0
- - Datasets 2.1.0
+ - Pytorch 2.0.1+cu117
+ - Datasets 2.13.1
 - Tokenizers 0.13.3
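
For reference, the updated hyperparameters correspond roughly to the following `TrainingArguments` setup. This is a minimal sketch, not the author's actual training script; the `output_dir`, evaluation strategy, and logging strategy are assumptions, while the Adam betas/epsilon are the library defaults spelled out explicitly.

```python
# Minimal sketch (assumed, not the author's script) of a TrainingArguments
# configuration matching the hyperparameters listed in the updated card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilhubert-finetuned-gtzan",  # hypothetical output directory
    learning_rate=0.0001115511981046745,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,               # Adam betas and epsilon as listed (library defaults)
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=9,
    evaluation_strategy="epoch",  # assumption: per-epoch eval, consistent with the results table
    logging_strategy="epoch",     # assumption
)
```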
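A usage sketch for the resulting checkpoint, assuming it is pushed to the Hub; the repo id below is hypothetical, since this commit does not state the model's repository name.

```python
# Hedged inference sketch; the repo id is a placeholder, not confirmed by the card.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="DavidFM43/distilhubert-finetuned-gtzan",  # hypothetical repo id
)
print(classifier("example_track.wav"))  # predicted GTZAN genres with scores
```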