
Model Architecture Details

Architecture Overview

  • Architecture: ViT Small

Configuration

  • Patch Size: 32
  • Image Size: 224
  • Num Layers: 3
  • Attention Heads: 4
  • Objective Function: CrossEntropy
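
For readers who want these hyperparameters in code, the sketch below restates them with the Hugging Face `transformers` `ViTConfig` purely for illustration; the model itself was built with ViT-Prisma (see Additional Resources), and the hidden size and MLP width below are assumed typical "ViT Small" values that this card does not state.

```python
# Sketch only: expresses the configuration above with the Hugging Face
# `transformers` ViTConfig for readability. ViT-Prisma defines its own config
# objects; hidden_size and intermediate_size are assumptions, not values
# stated in this model card.
from transformers import ViTConfig

config = ViTConfig(
    image_size=224,          # "Image Size" from the table above
    patch_size=32,           # "Patch Size"
    num_hidden_layers=3,     # "Num Layers"
    num_attention_heads=4,   # "Attention Heads"
    hidden_size=384,         # assumption: typical ViT-Small embedding width
    intermediate_size=1536,  # assumption: 4x the hidden size
)
print(config)
```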

Performance

  • Validation Accuracy (Top 1): 0.2148
  • Validation Accuracy (Top 5): 0.4179
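
The two numbers above are standard top-1 and top-5 classification accuracies on the validation set. As a minimal sketch (not the training code from this repository), they can be computed from model logits like this:

```python
# Sketch: computing top-1 and top-5 accuracy from classifier logits.
# `logits` and `labels` below are random placeholders; the reported numbers
# come from the model's validation set.
import torch

def topk_accuracy(logits: torch.Tensor, labels: torch.Tensor, ks=(1, 5)):
    """Return {k: accuracy} given [N, num_classes] logits and [N] integer labels."""
    maxk = max(ks)
    # Indices of the maxk highest-scoring classes per example: shape [N, maxk]
    topk_preds = logits.topk(maxk, dim=1).indices
    # Boolean matrix: True where a top-k prediction equals the true label
    correct = topk_preds.eq(labels.unsqueeze(1))
    return {k: correct[:, :k].any(dim=1).float().mean().item() for k in ks}

# Toy usage with random data
logits = torch.randn(8, 1000)
labels = torch.randint(0, 1000, (8,))
print(topk_accuracy(logits, labels))
```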

Additional Resources

The model was trained with the ViT-Prisma library.
For detailed metrics, plots, and further analysis of the model's training process, refer to the training report.
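
To inspect the raw checkpoint from the Hub, a minimal sketch using `huggingface_hub` and PyTorch follows. The checkpoint filename is an assumption (check the repository's file listing), and constructing the actual ViT-Prisma model around these weights should follow that library's documentation and the training report.

```python
# Sketch: download the checkpoint from the Hub and inspect its contents.
# The filename "model.pth" is an assumption; adjust it to the file actually
# present in the repository.
import torch
from huggingface_hub import hf_hub_download

ckpt_path = hf_hub_download(
    repo_id="PraneetNeuro/ImageNet-Small-Attention-and-MLP-Patch32",
    filename="model.pth",  # assumption: replace with the real checkpoint name
)
checkpoint = torch.load(ckpt_path, map_location="cpu")

# Print entry names and shapes (useful for verifying the 3-layer, 4-head setup)
for name, value in checkpoint.items():
    shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
    print(name, shape)
```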

