MalikIbrar committed on
Commit
3b3d974
1 Parent(s): 2dc546e

End of training

README.md CHANGED
@@ -17,13 +17,13 @@ should probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
  It achieves the following results on the evaluation set:
- - eval_loss: 2.3098
- - eval_accuracy: 0.22
- - eval_runtime: 49.5115
- - eval_samples_per_second: 2.02
- - eval_steps_per_second: 0.263
- - epoch: 0.98
- - step: 37
+ - eval_loss: 0.5861
+ - eval_accuracy: 0.87
+ - eval_runtime: 56.9025
+ - eval_samples_per_second: 1.757
+ - eval_steps_per_second: 0.228
+ - epoch: 12.0
+ - step: 1356
  
  ## Model description
  
@@ -42,21 +42,19 @@ More information needed
  ### Training hyperparameters
  
  The following hyperparameters were used during training:
- - learning_rate: 7e-05
+ - learning_rate: 5e-05
  - train_batch_size: 8
  - eval_batch_size: 8
  - seed: 42
- - gradient_accumulation_steps: 1.5
- - total_train_batch_size: 12.0
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 13
+ - num_epochs: 15
  - mixed_precision_training: Native AMP
  
  ### Framework versions
  
- - Transformers 4.38.0.dev0
+ - Transformers 4.35.2
  - Pytorch 2.1.0+cu121
  - Datasets 2.16.1
  - Tokenizers 0.15.1
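For reference, the hyperparameters in the updated card can be collected into a single configuration. This is a sketch only: the card lists the values, but the actual `TrainingArguments` call is not part of this commit, and the key names below are illustrative.

```python
# Hyperparameters as listed in the updated model card (this commit).
# Assembled into a plain dict for reference; the real Trainer setup
# used for training is not shown in the diff.
config = {
    "learning_rate": 5e-5,
    "train_batch_size": 8,
    "eval_batch_size": 8,
    "seed": 42,
    "optimizer": "adam",           # betas=(0.9, 0.999), epsilon=1e-08
    "lr_scheduler_type": "linear",
    "warmup_ratio": 0.1,
    "num_train_epochs": 15,
    "mixed_precision": "native_amp",
}

# Sanity check against the eval block above: epoch 12.0 was reached at
# step 1356, which implies 1356 / 12 = 113 optimizer steps per epoch.
steps_per_epoch = 1356 / 12
print(steps_per_epoch)
```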
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:48a4ed474c7841ee1ce5940339a3055e885e6c00e8e36a93351da63430aa949c
+ oid sha256:73c34882471e2fdcebcb346c835dd1ffcab1a3deeb861af007dcae0ca8df3514
  size 94771728
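The weight and event files in this commit are stored as git-lfs pointers, where the `oid sha256:` line is simply the SHA-256 digest of the real file contents. A minimal sketch (the helper name `lfs_oid` is hypothetical) for checking a downloaded file against its pointer:

```python
import hashlib

def lfs_oid(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the sha256 hex digest that a git-lfs pointer file
    records as its `oid` for the stored object, reading in chunks
    so large weight files are not loaded into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Comparing `lfs_oid("model.safetensors")` with the `oid sha256:` value above verifies that a downloaded checkpoint matches this commit.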
runs/Jan30_14-25-21_1a33fb1dc461/events.out.tfevents.1706624823.1a33fb1dc461.373.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:f656cac10d7f1447cda799813d5df6c643fba2f6c99cdd6c7d1895cb5895556b
- size 51773
+ oid sha256:973a8e233334099e88d7280470f940ac5e7623c27bd2ad762c25b97ecfbf2cf3
+ size 53657