---
license: mit
datasets:
  - pcuenq/oxford-pets
metrics:
  - accuracy
pipeline_tag: image-classification
---

# CLIP ViT Base Patch32 Fine-tuned on Oxford Pets

This model is a fine-tuned version of OpenAI's CLIP (`clip-vit-base-patch32`) on the Oxford Pets dataset, intended for pet breed classification.
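CLIP classifies an image by embedding it and each candidate label prompt into a shared space, then picking the label with the highest scaled cosine similarity. A minimal sketch of that scoring step using stand-in NumPy embeddings (in practice both embeddings would come from this checkpoint's image and text encoders via `transformers`; the label prompts below are illustrative):

```python
import numpy as np

def clip_classify(image_emb, text_embs, logit_scale=100.0):
    """Score candidate labels the way CLIP does: cosine similarity
    between L2-normalized embeddings, scaled, then softmax."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = logit_scale * txt @ img      # one logit per candidate label
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

# Stand-in embeddings; real ones come from the CLIP encoders (dim 512).
rng = np.random.default_rng(0)
labels = ["a photo of a Bengal cat", "a photo of a pug", "a photo of a beagle"]
image_emb = rng.normal(size=512)
text_embs = rng.normal(size=(len(labels), 512))

probs = clip_classify(image_emb, text_embs)
print(labels[int(probs.argmax())], probs.round(3))
```

The prompt template ("a photo of a …") matters in practice; fine-tuning on Oxford Pets sharpens these similarities for the 37 breed classes.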

## Training Information

- **Model Name:** openai/clip-vit-base-patch32
- **Dataset:** oxford-pets
- **Training Epochs:** 4
- **Batch Size:** 256
- **Learning Rate:** 3e-6
- **Test Accuracy:** 93.74%

## Parameters Information

`Trainable params: 151.2773M || All params: 151.2773M || Trainable%: 100.00%`
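A 100% trainable figure means full fine-tuning: no encoder layers were frozen. A sketch of how such a summary line is computed, using a small stand-in parameter dict rather than the actual ~151.28M-parameter checkpoint:

```python
import numpy as np

def param_summary(params, frozen=()):
    """Count trainable vs. total parameters, mirroring the
    'Trainable params || All params || Trainable%' summary line."""
    total = sum(p.size for p in params.values())
    trainable = sum(p.size for name, p in params.items() if name not in frozen)
    return trainable, total, 100.0 * trainable / total

# Stand-in weights (hypothetical names); the real model is far larger.
params = {
    "vision.proj": np.zeros((768, 512)),
    "text.proj": np.zeros((512, 512)),
    "logit_scale": np.zeros(1),
}

trainable, total, pct = param_summary(params)  # nothing frozen -> 100%
print(f"Trainable params: {trainable} || All params: {total} || Trainable%: {pct:.2f}%")
```

Passing a `frozen` set (e.g. all vision-tower names) would model partial fine-tuning, where the trainable percentage drops below 100%.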

## Bias, Risks, and Limitations

Refer to the original CLIP model card and repository for a discussion of bias, risks, and limitations.

## License

MIT