---
base_model:
- openai/clip-vit-base-patch32
datasets:
- mnist
metrics:
- accuracy
---
# Model Card: tanganke/clip-vit-base-patch32_mnist
## Model Details
- Architecture: ViT-Base with patch size 32
- Training Data: MNIST dataset
## Training Details
Fine-tuned with the Adam optimizer at a constant learning rate of 1e-5 for 4000 steps (batch size 32).
Only the vision encoder is fine-tuned.
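A minimal sketch of this setup, assuming a frozen classification head built from class-name text prompts (the card does not document the head, the prompt template, or the preprocessing; the template `a photo of the number: '{i}'` is an assumption):
```python
import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Train only the vision encoder; freeze the text tower and projections.
model.requires_grad_(False)
model.vision_model.requires_grad_(True)

# Frozen text-side "classifier": one embedded prompt per digit (assumed template).
prompts = [f"a photo of the number: '{i}'" for i in range(10)]
with torch.no_grad():
    text_features = model.get_text_features(
        **processor(text=prompts, return_tensors="pt", padding=True)
    )
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

def collate(examples):
    images = [ex["image"].convert("RGB") for ex in examples]  # MNIST is grayscale
    pixel_values = processor(images=images, return_tensors="pt")["pixel_values"]
    return pixel_values, torch.tensor([ex["label"] for ex in examples])

loader = DataLoader(load_dataset("mnist", split="train"),
                    batch_size=32, shuffle=True, collate_fn=collate)

optimizer = torch.optim.Adam(model.vision_model.parameters(), lr=1e-5)
steps = 0
while steps < 4000:  # constant LR, 4000 steps, batch size 32
    for pixel_values, labels in loader:
        image_features = model.get_image_features(pixel_values=pixel_values)
        image_features = image_features / image_features.norm(dim=-1, keepdim=True)
        logits = model.logit_scale.exp() * image_features @ text_features.t()
        loss = torch.nn.functional.cross_entropy(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        steps += 1
        if steps == 4000:
            break
```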
## Evaluation Results
Accuracy on MNIST, before and after fine-tuning the vision encoder:
- pre-trained: 0.4759
- fine-tuned: 0.9957
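Such numbers can be reproduced with zero-shot-style classification over per-digit prompts; a sketch, under the same assumed prompt template as above (for the fine-tuned number, first swap in the fine-tuned vision encoder as shown in Usage below):
```python
import torch
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
test_set = load_dataset("mnist", split="test")

prompts = [f"a photo of the number: '{i}'" for i in range(10)]  # assumed template
with torch.no_grad():
    text_features = model.get_text_features(
        **processor(text=prompts, return_tensors="pt", padding=True)
    )
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)

correct = 0
with torch.no_grad():
    for i in range(0, len(test_set), 256):
        batch = test_set[i : i + 256]
        inputs = processor(images=[img.convert("RGB") for img in batch["image"]],
                           return_tensors="pt")
        image_features = model.get_image_features(**inputs)
        image_features = image_features / image_features.norm(dim=-1, keepdim=True)
        preds = (image_features @ text_features.t()).argmax(dim=-1)
        correct += (preds == torch.tensor(batch["label"])).sum().item()

print(f"accuracy: {correct / len(test_set):.4f}")
```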
## Usage
Load the vision model:
```python
from transformers import CLIPVisionModel
vision_model = CLIPVisionModel.from_pretrained('tanganke/clip-vit-base-patch32_mnist')
```
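Continuing from the snippet above, features can be extracted with the image processor of the base checkpoint (an assumption; the fine-tuned repo may not ship its own processor config):
```python
import torch
from PIL import Image
from transformers import CLIPImageProcessor

processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
image = Image.open("digit.png").convert("RGB")  # hypothetical MNIST-style image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = vision_model(**inputs)
features = outputs.pooler_output  # shape (1, 768) for ViT-B/32
```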
Substitute the vision encoder of CLIP:
```python
from transformers import CLIPModel
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
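# Overwrite the base vision encoder with the fine-tuned weights: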
clip_model.vision_model.load_state_dict(vision_model.vision_model.state_dict())
```
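Continuing from the two snippets above, the patched `clip_model` can then run standard zero-shot classification (the prompt wording is again an assumption):
```python
from PIL import Image
from transformers import CLIPProcessor

processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
image = Image.open("digit.png").convert("RGB")  # hypothetical MNIST-style image
inputs = processor(
    text=[f"a photo of the number: '{i}'" for i in range(10)],
    images=image,
    return_tensors="pt",
    padding=True,
)
outputs = clip_model(**inputs)
predicted_digit = outputs.logits_per_image.argmax(dim=-1).item()
```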