A CLIP model fine-tuned on a subset of the DPC dataset.
### Usage instructions
```python
from transformers import AutoTokenizer, AutoModel, CLIPProcessor

# Load the tokenizer, model, and processor from the Hugging Face Hub.
# The checkpoint was saved as Flax weights, hence from_flax=True.
tokenizer = AutoTokenizer.from_pretrained("vicgalle/clip-vit-base-patch16-photo-critique")
model = AutoModel.from_pretrained("vicgalle/clip-vit-base-patch16-photo-critique", from_flax=True)
processor = CLIPProcessor.from_pretrained("vicgalle/clip-vit-base-patch16-photo-critique")
```
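Once loaded, the model can score how well candidate critiques match a photo via the standard CLIP similarity API. A minimal sketch, assuming the checkpoint exposes the usual `logits_per_image` output; the blank placeholder image and the critique strings below are illustrative only, not part of the model card:

```python
import torch
from PIL import Image
from transformers import AutoModel, CLIPProcessor

model = AutoModel.from_pretrained("vicgalle/clip-vit-base-patch16-photo-critique", from_flax=True)
processor = CLIPProcessor.from_pretrained("vicgalle/clip-vit-base-patch16-photo-critique")

# Placeholder image; replace with a real photograph in practice.
image = Image.new("RGB", (224, 224))
# Hypothetical candidate critiques to rank against the image.
critiques = ["The composition is well balanced.", "The image is badly overexposed."]

inputs = processor(text=critiques, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Softmax over the critiques gives a probability per candidate.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```

Higher probability indicates the critique the model considers a better match for the photo.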