
[Paper] [GitHub]

FARE CLIP ViT-L/14 model.

Unsupervised adversarial fine-tuning (FARE) from an OpenAI CLIP initialization on ImageNet, using an ℓ∞ threat model with radius 2/255.

The model can be loaded via `open_clip`:

```python
import open_clip

model, _, image_processor = open_clip.create_model_and_transforms('hf-hub:chs20/fare2-clip')
```