[Paper] [GitHub]
FARE CLIP ViT-L/14 model.
Unsupervised adversarial fine-tuning of an OpenAI CLIP initialization on ImageNet, using the ℓ∞ threat model with radius 2/255.
```python
import open_clip

model, _, image_processor = open_clip.create_model_and_transforms('hf-hub:chs20/fare2-clip')
```