This model is part of the Robust CLIP collection. Code: https://github.com/chs20/RobustVLM
FARE CLIP ViT-L/14 model, obtained by unsupervised adversarial fine-tuning of the OpenAI CLIP vision encoder on ImageNet, using an ℓ∞ threat model with radius 2/255.
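For intuition, below is a minimal sketch of what one such unsupervised adversarial fine-tuning step can look like. It assumes PGD in the ℓ∞ ball of radius 2/255, inputs in [0, 1], and an ℓ2 embedding-matching loss against a frozen copy of the original encoder; the names (`model`, `frozen`, `images`) are illustrative and this is not the authors' training code.

```python
import torch

def fare_step(model, frozen, images, eps=2/255, alpha=1/255, steps=10):
    """One sketched fine-tuning step: find an l_inf adversary for the current
    encoder, then pull its embedding back toward the original (frozen) embedding."""
    with torch.no_grad():
        target = frozen.encode_image(images)  # clean embeddings of the original CLIP

    # PGD inner maximization of the embedding distance (assumes images in [0, 1])
    delta = torch.zeros_like(images).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        adv_emb = model.encode_image((images + delta).clamp(0, 1))
        loss = (adv_emb - target).pow(2).sum(dim=-1).mean()
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)

    # Outer minimization objective: caller backpropagates this into the encoder weights
    adv_emb = model.encode_image((images + delta).clamp(0, 1))
    return (adv_emb - target).pow(2).sum(dim=-1).mean()
```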
```python
import open_clip

model, _, image_processor = open_clip.create_model_and_transforms('hf-hub:chs20/fare2-clip')
```
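A minimal zero-shot inference sketch, continuing from the snippet above (standard open_clip usage; the image path and prompts are placeholders):

```python
import torch
from PIL import Image

tokenizer = open_clip.get_tokenizer('hf-hub:chs20/fare2-clip')
model.eval()

image = image_processor(Image.open('example.jpg')).unsqueeze(0)
text = tokenizer(['a photo of a dog', 'a photo of a cat'])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # zero-shot probabilities over the two prompts
```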
If you find this model useful, please consider citing our paper:
```bibtex
@inproceedings{schlarmann2024robustclip,
  title={Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models},
  author={Christian Schlarmann and Naman Deep Singh and Francesco Croce and Matthias Hein},
  booktitle={ICML},
  year={2024}
}
```