
Model Architecture Details

Architecture Overview

  • Architecture: ViT Tiny

Configuration

  • Patch Size: 16
  • Image Size: 224
  • Num Layers: 1
  • Attention Heads: 4
  • Objective Function: CrossEntropy
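
As a rough illustration of what this configuration implies for tensor shapes, here is a minimal sketch (not the authors' training code). The embedding width of 192 and the ImageNet-1k class count are assumptions based on the standard ViT-Tiny recipe; the card states only the values listed above.

```python
import torch

# Values from the model card
image_size = 224
patch_size = 16
num_layers = 1
num_heads = 4

# Assumed values (not stated on the card)
embed_dim = 192        # standard ViT-Tiny hidden width
num_classes = 1000     # ImageNet-1k label space

patches_per_side = image_size // patch_size       # 224 / 16 = 14
num_patches = patches_per_side ** 2               # 14 * 14 = 196
seq_len = num_patches + 1                         # +1 for the [CLS] token -> 197

# Patch embedding as a strided convolution, as in standard ViT implementations.
patch_embed = torch.nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)

# A single pre-norm transformer block stands in for the one ViT layer.
block = torch.nn.TransformerEncoderLayer(
    d_model=embed_dim, nhead=num_heads, dim_feedforward=4 * embed_dim,
    batch_first=True, norm_first=True,
)

x = torch.randn(1, 3, image_size, image_size)
tokens = patch_embed(x).flatten(2).transpose(1, 2)    # (1, 196, 192)
print(tokens.shape, seq_len)                          # torch.Size([1, 196, 192]) 197
print(block(tokens).shape)                            # torch.Size([1, 196, 192])
```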

Performance

  • Validation Accuracy (Top 1): 0.16
  • Validation Accuracy (Top 5): 0.33

Additional Resources

The model was trained with the ViT-Prisma library.
For detailed metrics, plots, and further analysis of the training process, refer to the training report.
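
One way to examine the checkpoint directly is to download it from the Hugging Face Hub and inspect the state dict, as in the hedged sketch below. The filename "model.safetensors" is an assumption, so check the repository's file listing first; for interpretability workflows, ViT-Prisma's own loading utilities are the intended entry point.

```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

repo_id = "Prisma-Multimodal/ImageNet-Tiny-Attention-and-MLP-Patch16"

# Assumed filename: the checkpoint may instead be a PyTorch .pth/.bin file,
# in which case use torch.load on the downloaded path.
ckpt_path = hf_hub_download(repo_id=repo_id, filename="model.safetensors")

state_dict = load_file(ckpt_path)
for name, tensor in state_dict.items():
    print(f"{name}: {tuple(tensor.shape)}")
```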
