CLIP ViT Base Patch32 Fine-tuned on Oxford Pets

This model is a fine-tuned version of OpenAI's CLIP model on the Oxford Pets dataset, intended for pet breed classification.

Training Information

  • Model Name: openai/clip-vit-base-patch32
  • Dataset: oxford-pets
  • Training Epochs: 4
  • Batch Size: 256
  • Learning Rate: 3e-6
  • Test Accuracy: 93.74%
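The fine-tuned checkpoint can be used like any CLIP model for zero-shot-style classification over the Oxford Pets breed names. A minimal sketch, assuming the `transformers` library is installed and the checkpoint is available on the Hub as `DGurgurov/clip-vit-base-patch32-oxford-pets`; the breed names below are a small illustrative subset of the dataset's 37 classes:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Repo id taken from this model card; assumes network access to the Hub.
MODEL_ID = "DGurgurov/clip-vit-base-patch32-oxford-pets"

model = CLIPModel.from_pretrained(MODEL_ID)
processor = CLIPProcessor.from_pretrained(MODEL_ID)
model.eval()

# A few illustrative Oxford Pets breed names; the full dataset has 37 classes.
labels = ["Abyssinian", "Bengal", "beagle", "pug"]
prompts = [f"a photo of a {name}, a type of pet" for name in labels]

image = Image.new("RGB", (224, 224))  # placeholder; use a real pet photo
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity scores -> probabilities over the candidate labels
probs = outputs.logits_per_image.softmax(dim=-1)
predicted = labels[probs.argmax(dim=-1).item()]
```

Prompting with a template like "a photo of a {name}, a type of pet" follows the zero-shot recipe from the original CLIP work; plain breed names also work but templated prompts usually score slightly higher.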

Parameter Information

Trainable params: 151.2773M || All params: 151.2773M || Trainable%: 100.00%
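Since this is a full fine-tune (no frozen layers or adapters), trainable and total parameter counts coincide. The tally above can be reproduced with a standard PyTorch parameter count; a small sketch, where the helper name `count_params` is ours and the demo uses a toy module rather than the full 151M-parameter CLIP model:

```python
import torch.nn as nn

def count_params(model: nn.Module) -> tuple[int, int]:
    """Return (trainable, total) parameter counts for a module."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable, total

# Toy example: a Linear layer has in_features * out_features weights plus
# out_features biases, so Linear(10, 5) holds 50 + 5 = 55 parameters.
layer = nn.Linear(10, 5)
trainable, total = count_params(layer)
print(f"Trainable params: {trainable} || All params: {total} "
      f"|| Trainable%: {100.0 * trainable / total:.2f}%")
# For this fully fine-tuned CLIP model, both counts are ~151.28M
# and the trainable share is 100%.
```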

Bias, Risks, and Limitations

Refer to the original CLIP repository.

License

MIT
