---
license: cc-by-4.0
datasets:
- UCSC-VLAA/Recap-DataComp-1B
- mlfoundations/datacomp_1b
library_name: open_clip
---

[[Paper]](https://arxiv.org/abs/2501.09446) [[github]](https://github.com/zw615/Double_Visual_Defense)

A DeltaCLIP-H/14-336 model that is adversarially pre-trained on web-scale image-text data. It matches the helpfulness of non-robust VLMs on clean data while remaining robust under adversarial attack.

## Model Usage

### With OpenCLIP

```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

model, preprocess = create_model_from_pretrained('hf-hub:zw123/delta_clip_h14_336')
tokenizer = get_tokenizer('hf-hub:zw123/delta_clip_h14_336')

image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)

text = tokenizer(["a diagram", "a dog", "a cat", "a beignet"], context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)  # prints: [[0., 0., 0., 1.0]]
```

## Release

These models are released under the Creative Commons Attribution 4.0 license.

LLNL-DATA-2003001

## Citation

If you find this model useful, please consider citing our paper:

```bibtex
@article{wang2025double,
  title={Double Visual Defense: Adversarial Pre-training and Instruction Tuning for Improving Vision-Language Model Robustness},
  author={Wang, Zeyu and Xie, Cihang and Bartoldson, Brian and Kailkhura, Bhavya},
  journal={arXiv preprint arXiv:2501.09446},
  year={2025}
}
```
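
## Robustness Sanity Check (Sketch)

To illustrate the adversarial-robustness claim above, the sketch below runs an untargeted L∞ PGD attack against the zero-shot logits, reusing `model`, `image`, and `text` from the OpenCLIP example. The attack budget (`eps`), step size (`alpha`), number of steps, and the choice to perturb the already-preprocessed tensor are illustrative assumptions for demonstration only, not the evaluation protocol from the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, image, text_features, label_idx, eps=4/255, alpha=1/255, steps=10):
    """Untargeted L_inf PGD on the zero-shot logits (illustrative settings only).

    Note: `image` is the preprocessed (normalized) tensor, so `eps` is not an exact
    pixel-space budget; a faithful evaluation would perturb the raw pixels instead.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    target = torch.tensor([label_idx])
    for _ in range(steps):
        image_features = F.normalize(model.encode_image(image + delta), dim=-1)
        logits = 100.0 * image_features @ text_features.T
        loss = F.cross_entropy(logits, target)
        loss.backward()
        # Ascend the loss and project back onto the L_inf ball of radius eps.
        delta.data = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
        delta.grad.zero_()
    return (image + delta).detach()

# Reuse `model`, `image`, and `text` from the snippet above.
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the perturbation should receive gradients

with torch.no_grad():
    text_features = F.normalize(model.encode_text(text), dim=-1)

adv_image = pgd_attack(model, image, text_features, label_idx=3)  # 3 == "a beignet"
with torch.no_grad():
    adv_features = F.normalize(model.encode_image(adv_image), dim=-1)
    adv_probs = (100.0 * adv_features @ text_features.T).softmax(dim=-1)
print("Label probs under attack:", adv_probs)
```

A robust model is expected to keep most of the probability mass on the correct label under such small perturbations, whereas a non-robust CLIP encoder typically flips its prediction.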