---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
  - generated_from_trainer
datasets:
  - fair_face
metrics:
  - accuracy
model-index:
  - name: initial_ViT_model
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: fair_face
          type: fair_face
          config: '0.25'
          split: validation
          args: '0.25'
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.21252510498448055
---

# initial_ViT_model

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the fair_face dataset. It achieves the following results on the evaluation set (an inference sketch follows the list):

- Loss: 3.6347
- Accuracy: 0.2125
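
The checkpoint can be loaded for inference with the `transformers` image-classification pipeline. This is a minimal sketch: the repository id `dhruvilHV/initial_ViT_model` is an assumption based on the card title, and `example.jpg` is a placeholder input.

```python
from PIL import Image
from transformers import pipeline

# Assumed repository id; adjust to wherever the checkpoint is actually hosted.
classifier = pipeline("image-classification", model="dhruvilHV/initial_ViT_model")

image = Image.open("example.jpg")  # placeholder: any RGB image
predictions = classifier(image)

# Each prediction is a dict with a "label" and a "score".
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.4f}")
```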

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

Per the model-index metadata above, training used the fair_face dataset with the `0.25` configuration, and the reported accuracy was measured on its `validation` split.
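
A minimal sketch of loading that data with `datasets`, assuming the Hub dataset id `fair_face` and the `0.25` configuration listed in the metadata:

```python
from datasets import load_dataset

# "0.25" is the dataset configuration named in the model-index metadata.
dataset = load_dataset("fair_face", "0.25")

train_split = dataset["train"]
eval_split = dataset["validation"]  # split used for the reported accuracy

print(train_split.features)  # inspect the image and label columns
```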

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` reconstruction follows the list):

- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 1
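
A sketch reconstructing these settings as `transformers.TrainingArguments`, assuming a single device (64 per-device samples × 4 accumulation steps = 256 total train batch size); `output_dir` and the evaluation cadence are illustrative assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="initial_ViT_model",   # illustrative output path
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=4,    # 64 * 4 = 256 effective train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    num_train_epochs=1,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
    evaluation_strategy="steps",
    eval_steps=50,                    # matches the 50-step cadence in the results table
)
```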

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.7855        | 0.15  | 50   | 4.6444          | 0.0511   |
| 4.4242        | 0.29  | 100  | 4.2124          | 0.1418   |
| 4.0596        | 0.44  | 150  | 3.9402          | 0.1744   |
| 3.859         | 0.59  | 200  | 3.7823          | 0.1956   |
| 3.7392        | 0.74  | 250  | 3.6877          | 0.2105   |
| 3.6424        | 0.88  | 300  | 3.6347          | 0.2125   |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0