---
license: mit
datasets:
- pcuenq/oxford-pets
metrics:
- accuracy
pipeline_tag: image-classification
---

# CLIP ViT Base Patch32 Fine-tuned on Oxford Pets

This model is a fine-tuned version of OpenAI's CLIP ViT-B/32 on the Oxford Pets dataset, intended for pet breed classification.
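
## How to Use

A minimal inference sketch, assuming the checkpoint keeps the standard CLIP dual-encoder architecture so images can be scored against candidate breed prompts with `CLIPModel` and `CLIPProcessor`. The repository id and image path below are placeholders.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Placeholder repository id; substitute the actual checkpoint.
model_id = "your-username/clip-vit-base-patch32-oxford-pets"

model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Oxford Pets covers 37 cat and dog breeds; a few are shown here.
labels = ["a photo of a Beagle", "a photo of a Bengal", "a photo of a Pug"]

image = Image.open("pet.jpg")  # placeholder image path
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# Image-text similarity scores, normalized over the candidate labels.
probs = outputs.logits_per_image.softmax(dim=-1)
print(labels[probs.argmax().item()])
```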

## Training Information

- **Base Model**: [openai/clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32)
- **Dataset**: [pcuenq/oxford-pets](https://huggingface.co/datasets/pcuenq/oxford-pets)
- **Training Epochs**: 4
- **Batch Size**: 256
- **Learning Rate**: 3e-6
- **Test Accuracy**: 93.74%
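
As a rough sketch, the hyperparameters above would map onto a `transformers.TrainingArguments` configuration like the following; the output directory is an assumption, and the actual training script may differ.

```python
from transformers import TrainingArguments

# Hypothetical mapping of the reported hyperparameters onto TrainingArguments;
# output_dir is a placeholder, not the name of the actual run.
training_args = TrainingArguments(
    output_dir="clip-vit-base-patch32-oxford-pets",
    num_train_epochs=4,
    per_device_train_batch_size=256,
    learning_rate=3e-6,
)
```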

## Parameters Information

Trainable params: 151.2773M || All params: 151.2773M || Trainable%: 100.00%
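
These counts indicate a full fine-tune (no frozen parameters). They can be reproduced for any checkpoint with a sketch like this; the printout format mirrors the line above.

```python
from transformers import CLIPModel

# The base checkpoint is used here for illustration; any CLIP checkpoint works.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")

total = sum(p.numel() for p in model.parameters())
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(
    f"Trainable params: {trainable / 1e6:.4f}M || "
    f"All params: {total / 1e6:.4f}M || "
    f"Trainable%: {100 * trainable / total:.2f}%"
)
```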

## Bias, Risks, and Limitations

For known biases, risks, and limitations, refer to the original [CLIP model card](https://huggingface.co/openai/clip-vit-base-patch32).

## License

This model is released under the MIT license.