
vit_cifar

The Vision Transformer (ViT), which applies the transformer architecture to image classification, has outperformed convolutional neural networks. However, ViT's high performance comes from pre-training on a very large dataset such as JFT-300M, and this dependence on large data is attributed to its low locality inductive bias. The paper "Vision Transformers for Small-Size Datasets" proposes Shifted Patch Tokenization (SPT) and Locality Self-Attention (LSA), which compensate for the missing locality inductive bias and let a ViT be trained from scratch even on small datasets. In addition, I used a 2D sinusoidal positional embedding and global average pooling (no CLS token). This model is trained on the CIFAR-10 dataset and achieves the following results on the evaluation set (a rough PyTorch sketch of SPT and LSA follows the results):

  • eval_loss: 0.6702
  • eval_accuracy: 0.8603
  • eval_runtime: 64.5616
  • eval_samples_per_second: 154.891
  • eval_steps_per_second: 0.62
  • epoch: 5.0
  • step: 980
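
The repo ships its own custom modeling code; purely as an illustration, the sketch below shows the two ideas described above in plain PyTorch. The class names, embedding size (192), patch size (4), and head count (3) are illustrative assumptions, not values read from this checkpoint.

```python
# Minimal sketch of Shifted Patch Tokenization (SPT) and Locality Self-Attention (LSA).
# Names and hyperparameters are illustrative; they do not necessarily match this repo's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ShiftedPatchTokenization(nn.Module):
    """Concatenate the image with 4 diagonally half-patch-shifted copies, patchify, project."""

    def __init__(self, in_chans=3, embed_dim=192, patch_size=4):
        super().__init__()
        self.patch_size = patch_size
        patch_dim = (in_chans * 5) * patch_size * patch_size   # original + 4 shifted copies
        self.norm = nn.LayerNorm(patch_dim)
        self.proj = nn.Linear(patch_dim, embed_dim)

    def forward(self, x):                                       # x: (B, C, H, W)
        s = self.patch_size // 2
        # negative padding crops, so each shifted copy keeps the original H x W
        shifts = [(s, -s, s, -s), (-s, s, s, -s), (s, -s, -s, s), (-s, s, -s, s)]
        x = torch.cat([x] + [F.pad(x, sh) for sh in shifts], dim=1)              # (B, 5C, H, W)
        x = F.unfold(x, kernel_size=self.patch_size, stride=self.patch_size)     # (B, 5C*p*p, N)
        return self.proj(self.norm(x.transpose(1, 2)))          # (B, N, embed_dim)


class LocalitySelfAttention(nn.Module):
    """Self-attention with a learnable temperature and diagonal (self-token) masking."""

    def __init__(self, dim=192, num_heads=3):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.out = nn.Linear(dim, dim)
        # learnable temperature, initialised to sqrt(d_head) like the usual fixed scale
        self.temperature = nn.Parameter(torch.tensor(self.head_dim ** 0.5))

    def forward(self, x):                                       # x: (B, N, dim)
        B, N, _ = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)                    # each (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) / self.temperature
        # mask the diagonal so a token cannot attend to itself, sharpening the attention
        attn = attn.masked_fill(torch.eye(N, dtype=torch.bool, device=x.device), float("-inf"))
        out = attn.softmax(dim=-1) @ v                          # (B, heads, N, head_dim)
        return self.out(out.transpose(1, 2).reshape(B, N, -1))


# For 32x32 CIFAR-10 images with 4x4 patches this yields 8*8 = 64 tokens per image:
tokens = ShiftedPatchTokenization()(torch.randn(2, 3, 32, 32))   # (2, 64, 192)
attended = LocalitySelfAttention()(tokens)                       # (2, 64, 192)
```

In this model the token sequence then has the 2D sinusoidal positional embedding added and, after the transformer blocks, is averaged (global average pooling) before the classification head, as described above.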

Model description

A small Vision Transformer (about 2.72M parameters, float32 weights stored in safetensors format) that combines Shifted Patch Tokenization and Locality Self-Attention with a 2D sinusoidal positional embedding and global average pooling instead of a CLS token, trained from scratch on CIFAR-10.

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the corresponding TrainingArguments follows the list):

  • learning_rate: 0.0003
  • train_batch_size: 256
  • eval_batch_size: 256
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 5
  • num_epochs: 10
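
For reference, these values map onto a Hugging Face TrainingArguments configuration as sketched below. This is a hedged reconstruction, since the card does not include the actual training script; the output directory name is a placeholder, and model/dataset loading are omitted.

```python
# Hedged reconstruction of the training setup from the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit_cifar",     # placeholder, not necessarily the directory used
    learning_rate=3e-4,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=5,
    num_train_epochs=10,
    adam_beta1=0.9,             # Trainer's default optimizer with the betas/epsilon above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```

These arguments would then be passed to a Trainer together with the custom model and the CIFAR-10 train/test splits.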

Framework versions

  • Transformers 4.36.2
  • Pytorch 2.1.2+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.0
