resnet-50-finetuned-omars3

This model is a fine-tuned version of microsoft/resnet-50 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7345
  • Accuracy: 0.7436
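The checkpoint can be loaded for inference with the standard Transformers image-classification pipeline. A minimal sketch — the repo id prefix (`your-username/`) and the image path are placeholders, not values from this card:

```python
from transformers import pipeline

# Hypothetical Hub path — replace with the actual repo id of this checkpoint.
classifier = pipeline(
    "image-classification",
    model="your-username/resnet-50-finetuned-omars3",
)

# Accepts a local path, URL, or PIL.Image; returns labels sorted by score.
predictions = classifier("path/to/image.jpg")
for p in predictions:
    print(p["label"], p["score"])
```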

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0005
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 30
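The hyperparameters above map onto a `TrainingArguments` configuration roughly as follows. This is a reconstruction, not the original training script; `output_dir` and the per-epoch evaluation strategy are assumptions (the latter inferred from the one-row-per-epoch results table):

```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameter list above; values not listed
# there (output_dir, evaluation_strategy) are assumptions.
args = TrainingArguments(
    output_dir="resnet-50-finetuned-omars3",  # assumed
    learning_rate=5e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size: 8 * 4 = 32
    num_train_epochs=30,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    evaluation_strategy="epoch",  # assumed: results table reports one eval per epoch
)
```

Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Transformers default optimizer, so it needs no explicit argument here.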

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3928        | 1.0   | 11   | 1.3871          | 0.2308   |
| 1.3864        | 2.0   | 22   | 1.3820          | 0.3077   |
| 1.3694        | 3.0   | 33   | 1.3510          | 0.5385   |
| 1.3513        | 4.0   | 44   | 1.2942          | 0.4872   |
| 1.3067        | 5.0   | 55   | 1.1984          | 0.6154   |
| 1.2184        | 6.0   | 66   | 0.9974          | 0.6923   |
| 1.0967        | 7.0   | 77   | 0.7869          | 0.6667   |
| 0.9731        | 8.0   | 88   | 0.7923          | 0.7436   |
| 0.9506        | 9.0   | 99   | 0.7161          | 0.6667   |
| 0.7783        | 10.0  | 110  | 0.6736          | 0.6923   |
| 0.7072        | 11.0  | 121  | 0.6693          | 0.7436   |
| 0.6669        | 12.0  | 132  | 0.7203          | 0.6923   |
| 0.6579        | 13.0  | 143  | 0.6195          | 0.7949   |
| 0.6695        | 14.0  | 154  | 0.6395          | 0.7692   |
| 0.678         | 15.0  | 165  | 0.6870          | 0.7692   |
| 0.5919        | 16.0  | 176  | 0.6681          | 0.7692   |
| 0.5459        | 17.0  | 187  | 0.6895          | 0.7692   |
| 0.5635        | 18.0  | 198  | 0.6617          | 0.7692   |
| 0.5378        | 19.0  | 209  | 0.6401          | 0.7949   |
| 0.5105        | 20.0  | 220  | 0.7108          | 0.7692   |
| 0.4656        | 21.0  | 231  | 0.7267          | 0.7692   |
| 0.5338        | 22.0  | 242  | 0.7531          | 0.7436   |
| 0.4846        | 23.0  | 253  | 0.7103          | 0.7179   |
| 0.4212        | 24.0  | 264  | 0.7809          | 0.7436   |
| 0.4677        | 25.0  | 275  | 0.7825          | 0.7692   |
| 0.4496        | 26.0  | 286  | 0.8240          | 0.6923   |
| 0.3784        | 27.0  | 297  | 0.7563          | 0.7179   |
| 0.4949        | 28.0  | 308  | 0.6823          | 0.7692   |
| 0.4612        | 29.0  | 319  | 0.7542          | 0.6667   |
| 0.4491        | 30.0  | 330  | 0.7345          | 0.7436   |

Framework versions

  • Transformers 4.30.2
  • Pytorch 2.0.1+cu117
  • Datasets 2.13.0
  • Tokenizers 0.13.3