---
license: mit
datasets:
- pcuenq/oxford-pets
metrics:
- accuracy
---

# CLIP ViT Base Patch32 Fine-tuned on Oxford Pets

This model is a fine-tuned version of OpenAI's CLIP model on the Oxford Pets dataset.

## Training Information

- **Model Name**: openai/clip-vit-base-patch32
- **Dataset**: oxford-pets
- **Training Epochs**: 4
- **Batch Size**: 256
- **Learning Rate**: 3e-6
- **Accuracy**: 93.74%

## License

MIT
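
## Usage

A minimal zero-shot classification sketch with 🤗 Transformers is shown below. The repository ID is a placeholder (the exact Hub ID of this fine-tuned checkpoint is not specified in this card), and the candidate labels are illustrative Oxford Pets breed prompts.

```python
# Minimal usage sketch. The repo ID below is a placeholder; substitute the
# actual Hub ID of this fine-tuned checkpoint.
from PIL import Image
import requests
import torch
from transformers import CLIPModel, CLIPProcessor

model_id = "your-username/clip-vit-base-patch32-oxford-pets"  # placeholder
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Example image and a few illustrative Oxford Pets breed prompts
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
labels = ["a photo of a Siamese cat", "a photo of a pug", "a photo of a beagle"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity scores, converted to probabilities over the labels
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```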