
hushem_1x_deit_tiny_adamax_lr0001_fold4

This model is a fine-tuned version of facebook/deit-tiny-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5599
  • Accuracy: 0.8333
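
Below is a minimal inference sketch using the Transformers auto classes. It is illustrative only: the repository id and the input image path are placeholders, not values documented by this card, so adjust them to wherever this checkpoint is actually hosted.

```python
# Minimal inference sketch. The repo id and image path below are assumptions;
# replace them with the actual checkpoint location and your own input image.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "hushem_1x_deit_tiny_adamax_lr0001_fold4"  # hypothetical repo id

processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its class name.
predicted_label = model.config.id2label[logits.argmax(-1).item()]
print(predicted_label)
```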

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
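
As a rough guide to reproducing this run, the sketch below maps the settings above onto `transformers.TrainingArguments`. It is a minimal sketch only: the `output_dir`, the per-epoch evaluation strategy, and everything omitted (model, datasets, metric function, `Trainer` call) are assumptions rather than details documented by this card.

```python
# Illustrative sketch: the hyperparameters listed above expressed as
# transformers.TrainingArguments. output_dir and evaluation_strategy are
# assumptions; the model, datasets, and Trainer setup are omitted.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deit-tiny-hushem-fold4",  # placeholder output directory
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,        # effective train batch size: 32 * 4 = 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",          # assumption: the metrics below are logged per epoch
)
```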

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.67  | 1    | 1.6749          | 0.3095   |
| No log        | 2.0   | 3    | 1.3545          | 0.3333   |
| No log        | 2.67  | 4    | 1.3451          | 0.2857   |
| No log        | 4.0   | 6    | 1.2535          | 0.5238   |
| No log        | 4.67  | 7    | 1.2290          | 0.4286   |
| No log        | 6.0   | 9    | 1.1555          | 0.5      |
| 1.2457        | 6.67  | 10   | 1.0938          | 0.5      |
| 1.2457        | 8.0   | 12   | 0.9608          | 0.4762   |
| 1.2457        | 8.67  | 13   | 0.8825          | 0.5952   |
| 1.2457        | 10.0  | 15   | 0.7678          | 0.7143   |
| 1.2457        | 10.67 | 16   | 0.7184          | 0.7857   |
| 1.2457        | 12.0  | 18   | 0.6658          | 0.7619   |
| 1.2457        | 12.67 | 19   | 0.6361          | 0.7619   |
| 0.4167        | 14.0  | 21   | 0.6247          | 0.8095   |
| 0.4167        | 14.67 | 22   | 0.6111          | 0.7857   |
| 0.4167        | 16.0  | 24   | 0.5896          | 0.7857   |
| 0.4167        | 16.67 | 25   | 0.5886          | 0.7381   |
| 0.4167        | 18.0  | 27   | 0.6107          | 0.7619   |
| 0.4167        | 18.67 | 28   | 0.6198          | 0.7619   |
| 0.0627        | 20.0  | 30   | 0.6194          | 0.7619   |
| 0.0627        | 20.67 | 31   | 0.6092          | 0.7619   |
| 0.0627        | 22.0  | 33   | 0.5917          | 0.7857   |
| 0.0627        | 22.67 | 34   | 0.5871          | 0.7857   |
| 0.0627        | 24.0  | 36   | 0.5872          | 0.8095   |
| 0.0627        | 24.67 | 37   | 0.5896          | 0.8095   |
| 0.0627        | 26.0  | 39   | 0.5921          | 0.8095   |
| 0.0081        | 26.67 | 40   | 0.5908          | 0.8095   |
| 0.0081        | 28.0  | 42   | 0.5818          | 0.8095   |
| 0.0081        | 28.67 | 43   | 0.5772          | 0.8095   |
| 0.0081        | 30.0  | 45   | 0.5685          | 0.8095   |
| 0.0081        | 30.67 | 46   | 0.5654          | 0.8095   |
| 0.0081        | 32.0  | 48   | 0.5614          | 0.8333   |
| 0.0081        | 32.67 | 49   | 0.5603          | 0.8333   |
| 0.0038        | 33.33 | 50   | 0.5599          | 0.8333   |
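
The last row corresponds to the loss and accuracy reported at the top of this card. A hedged sketch of re-computing those numbers with `Trainer.evaluate` follows; the repository id, the `imagefolder` data directory, the `validation` split layout, and the preprocessing are all assumptions made for illustration, not details documented here.

```python
# Sketch of re-running evaluation; not the exact script used for this card.
# repo_id, data_dir, and the validation/ split layout are assumptions.
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (AutoImageProcessor, AutoModelForImageClassification,
                          Trainer, TrainingArguments)

repo_id = "hushem_1x_deit_tiny_adamax_lr0001_fold4"  # hypothetical repo id
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

# Assumes data_dir contains a validation/ subfolder with one directory per class.
eval_ds = load_dataset("imagefolder", data_dir="path/to/fold4", split="validation")

def preprocess(batch):
    # Convert PIL images to pixel_values tensors expected by the model.
    batch["pixel_values"] = processor(images=batch["image"], return_tensors="pt")["pixel_values"]
    return batch

eval_ds = eval_ds.map(preprocess, batched=True, remove_columns=["image"])

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return accuracy.compute(predictions=np.argmax(logits, axis=-1), references=labels)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="eval-out", per_device_eval_batch_size=32),
    eval_dataset=eval_ds,
    compute_metrics=compute_metrics,
)
print(trainer.evaluate())  # reports eval_loss and eval_accuracy
```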

Framework versions

  • Transformers 4.35.0
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.6
  • Tokenizers 0.14.1