# CLIP ViT Base Patch32 Fine-tuned on Oxford Pets

This model is a fine-tuned version of OpenAI's CLIP (`openai/clip-vit-base-patch32`) on the Oxford-IIIT Pet dataset for pet-breed image classification.
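
A minimal inference sketch is below, using the standard `transformers` CLIP classes. The repository id and image path are placeholders (the actual model id for this card is not stated above), and only two of the 37 Oxford-IIIT Pet prompts are shown for brevity.

```python
# Minimal inference sketch; model_id and pet.jpg are hypothetical placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "your-username/clip-vit-base-patch32-oxford-pets"  # hypothetical repo id
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("pet.jpg")  # any pet photo
# The full Oxford-IIIT Pet label set would go here; two prompts shown for brevity.
labels = ["a photo of a Beagle", "a photo of a Persian cat"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
probs = logits.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.2%}")
```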

## Training Information

- **Base Model**: openai/clip-vit-base-patch32
- **Dataset**: oxford-pets
- **Training Epochs**: 4
- **Batch Size**: 256
- **Learning Rate**: 3e-6
- **Accuracy**: 93.74%
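
The settings above map naturally onto a standard contrastive fine-tuning loop. The sketch below shows one plausible setup; the data pipeline (torchvision's `OxfordIIITPet` loader and the caption template) is an assumption, not a description of the actual training script.

```python
# A plausible fine-tuning sketch wiring in the hyperparameters listed above.
# The dataset and caption handling are assumptions; the real script may differ.
import torch
from torch.utils.data import DataLoader
from torchvision.datasets import OxfordIIITPet
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-6)  # learning rate from the list above

train_set = OxfordIIITPet(root="data", split="trainval", download=True)

def collate(batch):
    # Pair each image with a caption built from its breed name.
    images, targets = zip(*batch)
    texts = [f"a photo of a {train_set.classes[t]}" for t in targets]
    return processor(text=texts, images=list(images), return_tensors="pt", padding=True)

loader = DataLoader(train_set, batch_size=256, shuffle=True, collate_fn=collate)

model.train()
for epoch in range(4):  # epochs from the list above
    for inputs in loader:
        loss = model(**inputs, return_loss=True).loss  # CLIP's built-in contrastive loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```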


## License
MIT