
hubert-base-ls960-finetuned-gtzan

This model is a fine-tuned version of facebook/hubert-base-ls960 on the GTZAN music-genre dataset. It achieves the following results on the evaluation set:

  • Loss: 1.1107
  • Accuracy: 0.85
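
The checkpoint is an audio-classification model, so it can be loaded with the Transformers pipeline API. Below is a minimal inference sketch; the repo id and the audio file path are placeholders, not values confirmed by this card.

```python
# Minimal inference sketch. Replace "<user>/hubert-base-ls960-finetuned-gtzan"
# with the actual Hub repo id (placeholder here).
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="<user>/hubert-base-ls960-finetuned-gtzan",
)

# Accepts a path to an audio file (or a raw waveform sampled at 16 kHz).
predictions = classifier("path/to/song.wav")
print(predictions)  # e.g. [{"label": "blues", "score": 0.91}, ...]
```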

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
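
The model name points to the GTZAN music-genre benchmark. For reference, here is a hedged sketch of loading GTZAN with the Datasets library; the `marsyas/gtzan` dataset id, the `"all"` config, and the 90/10 split are assumptions, since the card does not document the data preparation.

```python
# Hedged sketch: loading GTZAN for fine-tuning. The dataset id/config and the
# 90/10 split are assumptions; they are not documented in this card.
from datasets import load_dataset, Audio

gtzan = load_dataset("marsyas/gtzan", "all", split="train")
gtzan = gtzan.train_test_split(test_size=0.1, seed=42, shuffle=True)

# HuBERT expects 16 kHz audio, so resample the clips on the fly.
gtzan = gtzan.cast_column("audio", Audio(sampling_rate=16_000))

print(gtzan)  # DatasetDict with "train" and "test" splits
```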

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
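
These settings map directly onto Transformers `TrainingArguments`. The sketch below is a reconstruction from the list above, not the original training script; the `output_dir` and the per-epoch evaluation/logging strategies are assumptions.

```python
# Hedged reconstruction of the training configuration listed above.
# Model/feature-extractor setup and compute_metrics are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hubert-base-ls960-finetuned-gtzan",  # placeholder name
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,   # effective train batch size of 8
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=50,
    # Default AdamW optimizer with betas=(0.9, 0.999), epsilon=1e-08,
    # matching the hyperparameters listed above.
    evaluation_strategy="epoch",     # assumption: per-epoch eval, as in the results table
    logging_strategy="epoch",
)
```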

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------|:------|:-----|:----------------|:---------|
| 1.9975 | 1.0 | 225 | 1.8130 | 0.47 |
| 1.2415 | 2.0 | 450 | 1.3022 | 0.57 |
| 1.0225 | 3.0 | 675 | 1.1478 | 0.645 |
| 1.1012 | 4.0 | 900 | 0.8725 | 0.755 |
| 1.0753 | 5.0 | 1125 | 1.1645 | 0.67 |
| 0.5354 | 6.0 | 1350 | 1.3094 | 0.66 |
| 0.7805 | 7.0 | 1575 | 0.8406 | 0.795 |
| 0.3307 | 8.0 | 1800 | 0.9782 | 0.795 |
| 0.1861 | 9.0 | 2025 | 0.9140 | 0.79 |
| 0.2776 | 10.0 | 2250 | 1.1711 | 0.795 |
| 0.314 | 11.0 | 2475 | 0.9193 | 0.825 |
| 0.1785 | 12.0 | 2700 | 1.0272 | 0.82 |
| 0.1444 | 13.0 | 2925 | 0.9903 | 0.845 |
| 0.0122 | 14.0 | 3150 | 0.9974 | 0.835 |
| 0.0116 | 15.0 | 3375 | 0.9670 | 0.85 |
| 0.3403 | 31.0 | 3472 | 1.0085 | 0.85 |
| 0.3596 | 32.0 | 3585 | 1.3101 | 0.81 |
| 0.0242 | 33.0 | 3697 | 0.9612 | 0.86 |
| 0.1006 | 34.0 | 3810 | 1.1904 | 0.82 |
| 0.1034 | 35.0 | 3922 | 0.9582 | 0.86 |
| 0.195 | 36.0 | 4035 | 1.0223 | 0.84 |
| 0.0081 | 37.0 | 4147 | 1.2461 | 0.8 |
| 0.006 | 38.0 | 4260 | 0.9541 | 0.87 |
| 0.281 | 39.0 | 4372 | 0.9340 | 0.87 |
| 0.0491 | 40.0 | 4485 | 1.0942 | 0.85 |
| 0.0537 | 41.0 | 4597 | 1.1521 | 0.85 |
| 0.0017 | 42.0 | 4710 | 1.1738 | 0.85 |
| 0.0031 | 43.0 | 4822 | 1.1584 | 0.85 |
| 0.1107 | 44.0 | 4935 | 1.1503 | 0.86 |
| 0.0032 | 45.0 | 5047 | 1.0710 | 0.87 |
| 0.0027 | 46.0 | 5160 | 1.1310 | 0.86 |
| 0.0013 | 47.0 | 5272 | 1.1194 | 0.86 |
| 0.0023 | 48.0 | 5385 | 1.1173 | 0.85 |
| 0.0286 | 49.0 | 5497 | 1.1087 | 0.85 |
| 0.0133 | 49.91 | 5600 | 1.1107 | 0.85 |

Framework versions

  • Transformers 4.31.0
  • Pytorch 2.0.1
  • Datasets 2.15.0
  • Tokenizers 0.13.2