---
license: apache-2.0
base_model: microsoft/beit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_1x_beit_base_sgd_00001_fold3
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.2558139534883721
---

hushem_1x_beit_base_sgd_00001_fold3

This model is a fine-tuned version of microsoft/beit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set (a brief inference sketch follows the results):

  • Loss: 1.5773
  • Accuracy: 0.2558
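
As a quick way to try the checkpoint, here is a minimal inference sketch using the 🤗 Transformers pipeline. The repository id (derived from the author and model name) and the image path are assumptions, not values stated on this card.

```python
from transformers import pipeline

# Assumed Hub repo id for this checkpoint; adjust if the model lives elsewhere.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_1x_beit_base_sgd_00001_fold3",
)

# "example.jpg" is a placeholder path to a local image.
for prediction in classifier("example.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.4f}")
```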

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
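
The card's metadata lists an imagefolder-type dataset. Below is a sketch of how such a dataset is typically loaded with 🤗 Datasets; the directory layout is hypothetical, since the actual data behind this model is not documented here.

```python
from datasets import load_dataset

# Hypothetical layout: one sub-folder per class, e.g. data/<class_name>/<image>.jpg.
# The actual data used for this model is not documented on the card.
dataset = load_dataset("imagefolder", data_dir="data")

print(dataset)                                   # splits and sizes
print(dataset["train"].features["label"].names)  # class names inferred from folder names
```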

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch mirroring them follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
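
A hedged sketch of a TrainingArguments configuration mirroring the values above; the output directory and evaluation cadence are placeholders, not values taken from this card.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported above; output_dir and evaluation_strategy
# are placeholders, not values documented on this card.
training_args = TrainingArguments(
    output_dir="./hushem_1x_beit_base_sgd_00001_fold3",
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",
)
# The Trainer's default optimizer (AdamW with betas=(0.9, 0.999), eps=1e-8)
# matches the Adam settings listed above.
```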

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 6    | 1.5860          | 0.2558   |
| 1.5832        | 2.0   | 12   | 1.5856          | 0.2558   |
| 1.5832        | 3.0   | 18   | 1.5851          | 0.2558   |
| 1.5961        | 4.0   | 24   | 1.5847          | 0.2558   |
| 1.5221        | 5.0   | 30   | 1.5843          | 0.2558   |
| 1.5221        | 6.0   | 36   | 1.5839          | 0.2558   |
| 1.5495        | 7.0   | 42   | 1.5835          | 0.2558   |
| 1.5495        | 8.0   | 48   | 1.5831          | 0.2558   |
| 1.5657        | 9.0   | 54   | 1.5828          | 0.2558   |
| 1.5842        | 10.0  | 60   | 1.5824          | 0.2558   |
| 1.5842        | 11.0  | 66   | 1.5821          | 0.2558   |
| 1.5665        | 12.0  | 72   | 1.5818          | 0.2558   |
| 1.5665        | 13.0  | 78   | 1.5815          | 0.2558   |
| 1.536         | 14.0  | 84   | 1.5812          | 0.2558   |
| 1.572         | 15.0  | 90   | 1.5809          | 0.2558   |
| 1.572         | 16.0  | 96   | 1.5807          | 0.2558   |
| 1.5843        | 17.0  | 102  | 1.5804          | 0.2558   |
| 1.5843        | 18.0  | 108  | 1.5802          | 0.2558   |
| 1.5423        | 19.0  | 114  | 1.5799          | 0.2558   |
| 1.5549        | 20.0  | 120  | 1.5797          | 0.2558   |
| 1.5549        | 21.0  | 126  | 1.5794          | 0.2558   |
| 1.5883        | 22.0  | 132  | 1.5792          | 0.2558   |
| 1.5883        | 23.0  | 138  | 1.5791          | 0.2558   |
| 1.5691        | 24.0  | 144  | 1.5789          | 0.2558   |
| 1.5489        | 25.0  | 150  | 1.5787          | 0.2558   |
| 1.5489        | 26.0  | 156  | 1.5785          | 0.2558   |
| 1.5874        | 27.0  | 162  | 1.5784          | 0.2558   |
| 1.5874        | 28.0  | 168  | 1.5782          | 0.2558   |
| 1.6141        | 29.0  | 174  | 1.5781          | 0.2558   |
| 1.5647        | 30.0  | 180  | 1.5780          | 0.2558   |
| 1.5647        | 31.0  | 186  | 1.5779          | 0.2558   |
| 1.5987        | 32.0  | 192  | 1.5778          | 0.2558   |
| 1.5987        | 33.0  | 198  | 1.5777          | 0.2558   |
| 1.504         | 34.0  | 204  | 1.5776          | 0.2558   |
| 1.5743        | 35.0  | 210  | 1.5775          | 0.2558   |
| 1.5743        | 36.0  | 216  | 1.5775          | 0.2558   |
| 1.5471        | 37.0  | 222  | 1.5774          | 0.2558   |
| 1.5471        | 38.0  | 228  | 1.5774          | 0.2558   |
| 1.5808        | 39.0  | 234  | 1.5774          | 0.2558   |
| 1.5531        | 40.0  | 240  | 1.5774          | 0.2558   |
| 1.5531        | 41.0  | 246  | 1.5773          | 0.2558   |
| 1.5447        | 42.0  | 252  | 1.5773          | 0.2558   |
| 1.5447        | 43.0  | 258  | 1.5773          | 0.2558   |
| 1.5547        | 44.0  | 264  | 1.5773          | 0.2558   |
| 1.5706        | 45.0  | 270  | 1.5773          | 0.2558   |
| 1.5706        | 46.0  | 276  | 1.5773          | 0.2558   |
| 1.569         | 47.0  | 282  | 1.5773          | 0.2558   |
| 1.569         | 48.0  | 288  | 1.5773          | 0.2558   |
| 1.5551        | 49.0  | 294  | 1.5773          | 0.2558   |
| 1.5471        | 50.0  | 300  | 1.5773          | 0.2558   |
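
Accuracy is the metric tracked in the card's metadata. Below is a typical compute_metrics hook for the Trainer using the evaluate library; this is a sketch of the usual setup, not necessarily the exact code used for this run.

```python
import evaluate
import numpy as np

# Standard accuracy metric from the evaluate library.
accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair produced by the Trainer.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```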

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu118
  • Datasets 2.15.0
  • Tokenizers 0.15.0