
# beit-base-patch16-224-hasta-65-fold1

This model is a fine-tuned version of [microsoft/beit-base-patch16-224](https://huggingface.co/microsoft/beit-base-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.8281
- Accuracy: 0.6944

## Model description

More information needed

## Intended uses & limitations

More information needed
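
Pending fuller documentation, the checkpoint can be loaded like any BEiT image-classification fine-tune. A minimal sketch, assuming the model is published under the repo id in the title (replace with the actual id) and that `example.jpg` is a local image; both names are placeholders, not taken from this card:

```python
from transformers import pipeline

# Hypothetical repo id and image path; substitute the actual ones.
classifier = pipeline("image-classification", model="beit-base-patch16-224-hasta-65-fold1")

print(classifier("example.jpg"))  # top predicted labels with confidence scores
```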

## Training and evaluation data

More information needed
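
The card only records that training used the `imagefolder` loader. As a sketch of how such a dataset is typically prepared, assuming a hypothetical `data/` directory with one subfolder per class:

```python
from datasets import load_dataset

# Hypothetical layout: data/train/<class_name>/*.jpg, data/validation/<class_name>/*.jpg
dataset = load_dataset("imagefolder", data_dir="data")

print(dataset)                                   # DatasetDict with the discovered splits
print(dataset["train"].features["label"].names)  # class names inferred from folder names
```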

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
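
As a sketch, the list above maps onto `transformers.TrainingArguments` roughly as follows; `output_dir` is a placeholder, and the Adam betas/epsilon shown in the list are the Trainer's defaults:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="beit-base-patch16-224-hasta-65-fold1",  # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are already the defaults
)
```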

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 0.5714  | 1    | 1.1951          | 0.3333   |
| No log        | 1.7143  | 3    | 1.2409          | 0.3056   |
| No log        | 2.8571  | 5    | 1.2181          | 0.3056   |
| No log        | 4.0     | 7    | 1.1308          | 0.5      |
| No log        | 4.5714  | 8    | 1.1858          | 0.5278   |
| 1.092         | 5.7143  | 10   | 1.2798          | 0.3056   |
| 1.092         | 6.8571  | 12   | 1.0892          | 0.3611   |
| 1.092         | 8.0     | 14   | 1.2658          | 0.3889   |
| 1.092         | 8.5714  | 15   | 1.1603          | 0.5      |
| 1.092         | 9.7143  | 17   | 1.0786          | 0.4167   |
| 1.092         | 10.8571 | 19   | 1.2400          | 0.4167   |
| 0.9727        | 12.0    | 21   | 1.2929          | 0.3889   |
| 0.9727        | 12.5714 | 22   | 1.1613          | 0.3889   |
| 0.9727        | 13.7143 | 24   | 0.9838          | 0.6111   |
| 0.9727        | 14.8571 | 26   | 1.2316          | 0.3889   |
| 0.9727        | 16.0    | 28   | 1.1520          | 0.4722   |
| 0.9727        | 16.5714 | 29   | 1.0345          | 0.5556   |
| 0.8844        | 17.7143 | 31   | 1.0000          | 0.5      |
| 0.8844        | 18.8571 | 33   | 0.9933          | 0.5278   |
| 0.8844        | 20.0    | 35   | 1.0512          | 0.5556   |
| 0.8844        | 20.5714 | 36   | 0.9950          | 0.5556   |
| 0.8844        | 21.7143 | 38   | 0.9621          | 0.4722   |
| 0.7447        | 22.8571 | 40   | 0.8812          | 0.5278   |
| 0.7447        | 24.0    | 42   | 1.0244          | 0.5833   |
| 0.7447        | 24.5714 | 43   | 1.0124          | 0.5833   |
| 0.7447        | 25.7143 | 45   | 0.8908          | 0.6389   |
| 0.7447        | 26.8571 | 47   | 0.8185          | 0.5833   |
| 0.7447        | 28.0    | 49   | 0.9409          | 0.5556   |
| 0.6176        | 28.5714 | 50   | 1.0401          | 0.5556   |
| 0.6176        | 29.7143 | 52   | 1.0989          | 0.5556   |
| 0.6176        | 30.8571 | 54   | 0.9102          | 0.5833   |
| 0.6176        | 32.0    | 56   | 0.8855          | 0.5833   |
| 0.6176        | 32.5714 | 57   | 0.8974          | 0.5556   |
| 0.6176        | 33.7143 | 59   | 0.9419          | 0.6111   |
| 0.4929        | 34.8571 | 61   | 0.9471          | 0.5833   |
| 0.4929        | 36.0    | 63   | 0.8609          | 0.5833   |
| 0.4929        | 36.5714 | 64   | 0.8558          | 0.5833   |
| 0.4929        | 37.7143 | 66   | 0.8449          | 0.5556   |
| 0.4929        | 38.8571 | 68   | 0.8136          | 0.6667   |
| 0.463         | 40.0    | 70   | 0.8281          | 0.6944   |
| 0.463         | 40.5714 | 71   | 0.8227          | 0.6944   |
| 0.463         | 41.7143 | 73   | 0.8323          | 0.5833   |
| 0.463         | 42.8571 | 75   | 0.8436          | 0.5833   |
| 0.463         | 44.0    | 77   | 0.8390          | 0.5833   |
| 0.463         | 44.5714 | 78   | 0.8580          | 0.6111   |
| 0.3995        | 45.7143 | 80   | 0.9375          | 0.6111   |
| 0.3995        | 46.8571 | 82   | 0.9897          | 0.5556   |
| 0.3995        | 48.0    | 84   | 0.9785          | 0.5556   |
| 0.3995        | 48.5714 | 85   | 0.9336          | 0.6389   |
| 0.3995        | 49.7143 | 87   | 0.8504          | 0.6389   |
| 0.3995        | 50.8571 | 89   | 0.8450          | 0.6667   |
| 0.3697        | 52.0    | 91   | 0.8531          | 0.6389   |
| 0.3697        | 52.5714 | 92   | 0.8728          | 0.6389   |
| 0.3697        | 53.7143 | 94   | 0.9076          | 0.6667   |
| 0.3697        | 54.8571 | 96   | 0.9175          | 0.6389   |
| 0.3697        | 56.0    | 98   | 0.9145          | 0.5833   |
| 0.3697        | 56.5714 | 99   | 0.9119          | 0.5833   |
| 0.3259        | 57.1429 | 100  | 0.9102          | 0.5833   |
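
The reported evaluation results (loss 0.8281, accuracy 0.6944) match the epoch-40 row rather than the final one, which suggests the best checkpoint was retained. The accuracy column is the kind of metric a `compute_metrics` callback produces; a minimal sketch using the `evaluate` library (the exact function used for this run is not documented):

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied by the Trainer
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```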

### Framework versions

- Transformers 4.41.0
- PyTorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1