---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: Chess_Images
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: train
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.9333333333333333
---

# Chess_Images

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.2460
- Accuracy: 0.9333
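
As a minimal usage sketch, the checkpoint can be loaded with the standard `image-classification` pipeline. The repo id and image path below are assumptions for illustration; adjust them to wherever this checkpoint actually lives.

```python
from transformers import pipeline

# Repo id and image path are assumptions, not confirmed by this card.
classifier = pipeline("image-classification", model="kuynzang/Chess_Images")

predictions = classifier("path/to/chess_piece.jpg")
print(predictions)  # e.g. [{"label": "...", "score": 0.97}, ...]
```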

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
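
A minimal sketch of an equivalent `transformers.TrainingArguments` setup is shown below. The `output_dir`, evaluation, and save strategies are assumptions (they are not recorded in this card); the remaining values mirror the list above, and the default AdamW optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08.

```python
from transformers import TrainingArguments

# Sketch only: output_dir and evaluation/save strategies are assumed;
# the other values mirror the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="Chess_Images",          # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,      # 16 * 4 = 64 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",        # assumed
    save_strategy="epoch",              # assumed
)
```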

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 2    | 0.3365          | 0.9333   |
| No log        | 2.0   | 4    | 0.3018          | 0.9333   |
| No log        | 3.0   | 6    | 0.3443          | 0.9667   |
| No log        | 4.0   | 8    | 0.2189          | 1.0      |
| 0.213         | 5.0   | 10   | 0.3188          | 0.9667   |
| 0.213         | 6.0   | 12   | 0.2903          | 0.9333   |
| 0.213         | 7.0   | 14   | 0.3398          | 0.9      |
| 0.213         | 8.0   | 16   | 0.3879          | 0.8667   |
| 0.213         | 9.0   | 18   | 0.3023          | 0.9333   |
| 0.2116        | 10.0  | 20   | 0.1857          | 1.0      |
| 0.2116        | 11.0  | 22   | 0.2737          | 0.9667   |
| 0.2116        | 12.0  | 24   | 0.2675          | 1.0      |
| 0.2116        | 13.0  | 26   | 0.2817          | 0.9333   |
| 0.2116        | 14.0  | 28   | 0.4394          | 0.8667   |
| 0.1837        | 15.0  | 30   | 0.3167          | 0.9      |
| 0.1837        | 16.0  | 32   | 0.2795          | 0.9333   |
| 0.1837        | 17.0  | 34   | 0.2315          | 0.9333   |
| 0.1837        | 18.0  | 36   | 0.2266          | 0.9667   |
| 0.1837        | 19.0  | 38   | 0.3199          | 0.9333   |
| 0.1726        | 20.0  | 40   | 0.2553          | 0.9667   |
| 0.1726        | 21.0  | 42   | 0.3804          | 0.9      |
| 0.1726        | 22.0  | 44   | 0.2118          | 0.9667   |
| 0.1726        | 23.0  | 46   | 0.1784          | 1.0      |
| 0.1726        | 24.0  | 48   | 0.2098          | 0.9667   |
| 0.1529        | 25.0  | 50   | 0.1676          | 1.0      |
| 0.1529        | 26.0  | 52   | 0.2980          | 0.9      |
| 0.1529        | 27.0  | 54   | 0.2726          | 0.9667   |
| 0.1529        | 28.0  | 56   | 0.1756          | 1.0      |
| 0.1529        | 29.0  | 58   | 0.2266          | 0.9667   |
| 0.1335        | 30.0  | 60   | 0.3161          | 0.9333   |
| 0.1335        | 31.0  | 62   | 0.2872          | 0.9333   |
| 0.1335        | 32.0  | 64   | 0.2030          | 1.0      |
| 0.1335        | 33.0  | 66   | 0.2297          | 0.9333   |
| 0.1335        | 34.0  | 68   | 0.2876          | 0.9333   |
| 0.1228        | 35.0  | 70   | 0.1432          | 1.0      |
| 0.1228        | 36.0  | 72   | 0.2194          | 0.9667   |
| 0.1228        | 37.0  | 74   | 0.1387          | 1.0      |
| 0.1228        | 38.0  | 76   | 0.1381          | 1.0      |
| 0.1228        | 39.0  | 78   | 0.1540          | 1.0      |
| 0.1324        | 40.0  | 80   | 0.3075          | 0.8667   |
| 0.1324        | 41.0  | 82   | 0.1892          | 1.0      |
| 0.1324        | 42.0  | 84   | 0.1487          | 1.0      |
| 0.1324        | 43.0  | 86   | 0.1515          | 1.0      |
| 0.1324        | 44.0  | 88   | 0.2617          | 0.9333   |
| 0.136         | 45.0  | 90   | 0.1719          | 0.9667   |
| 0.136         | 46.0  | 92   | 0.2501          | 0.9      |
| 0.136         | 47.0  | 94   | 0.1618          | 1.0      |
| 0.136         | 48.0  | 96   | 0.2175          | 0.9667   |
| 0.136         | 49.0  | 98   | 0.2039          | 0.9667   |
| 0.1226        | 50.0  | 100  | 0.2460          | 0.9333   |

### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
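
To confirm that a local environment matches these versions, a quick check (assuming all four libraries are installed) is:

```python
import datasets
import tokenizers
import torch
import transformers

# Print installed versions to compare against the ones listed above.
for name, module in [
    ("Transformers", transformers),
    ("Pytorch", torch),
    ("Datasets", datasets),
    ("Tokenizers", tokenizers),
]:
    print(f"{name}: {module.__version__}")
```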