
Model Card: Safe-CLIP

Safe-CLIP, introduced in the paper Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models, is an enhanced vision-and-language model designed to mitigate the risks associated with NSFW (Not Safe For Work) content in AI applications.

Based on the CLIP model, Safe-CLIP is fine-tuned to sever the association between linguistic and visual concepts of NSFW content, ensuring safer outputs in text-to-image and image-to-text retrieval and generation tasks.

NSFW Definition

In our work, taking inspiration from this paper, we define NSFW as a finite and fixed set of concepts that are considered inappropriate, offensive, or harmful to individuals. These concepts are divided into seven categories: hate, harassment, violence, self-harm, sexual, shocking, and illegal activities.

Use with Transformers

See the snippet below for usage with Transformers:

>>> from transformers import CLIPModel

>>> model_id = "aimagelab/safeclip_vit-l_14_336"
>>> model = CLIPModel.from_pretrained(model_id)
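
Beyond loading the model, embeddings for retrieval can be extracted with the matching CLIPProcessor. The snippet below is a minimal sketch that reuses the model loaded above; the processor checkpoint is the one used in the zero-shot example further down, and "example.jpg" is a placeholder for any local image.

>>> from transformers import CLIPProcessor
>>> from PIL import Image

>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14-336")

>>> # "example.jpg" is a placeholder path for any local image
>>> image = Image.open("example.jpg")
>>> text_inputs = processor(text=["a photo of a dog"], return_tensors="pt", padding=True)
>>> image_inputs = processor(images=image, return_tensors="pt")

>>> text_emb = model.get_text_features(**text_inputs)     # shape (1, 768) for ViT-L/14
>>> image_emb = model.get_image_features(**image_inputs)  # shape (1, 768)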

Model Details

Safe-CLIP is a fine-tuned version of the CLIP model. Fine-tuning is performed on the ViSU (Visual Safe and Unsafe) Dataset, introduced in the same paper.

ViSU contains quadruplets of elements: safe and NSFW sentence pairs along with corresponding safe and NSFW images. The text portion of the ViSU Dataset is publicly released on the HuggingFace ViSU-Text page. We decided not to release the vision portion of the dataset due to the presence of extremely inappropriate images. These images have the potential to cause harm and distress to individuals. Consequently, releasing this part of the dataset would be irresponsible and contrary to the principles of ensuring the safe and ethical use of AI technology.

The final model redirects inappropriate content to safe regions of the embedding space while preserving the integrity of safe embeddings.
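
One way to inspect this redirection behavior is to compare where a safe and an unsafe prompt land in the text embedding space. The sketch below is purely illustrative: the two captions are placeholders (not taken from ViSU), and the checkpoints are the ones shown elsewhere on this card.

>>> import torch
>>> from transformers import CLIPModel, CLIPProcessor

>>> model = CLIPModel.from_pretrained("aimagelab/safeclip_vit-l_14_336")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14-336")

>>> # Placeholder captions (not from ViSU): a safe sentence and a mildly unsafe one
>>> prompts = ["a person relaxing on a beach", "a graphic description of a violent street fight"]

>>> inputs = processor(text=prompts, return_tensors="pt", padding=True)
>>> with torch.no_grad():
...     embs = model.get_text_features(**inputs)
>>> embs = embs / embs.norm(dim=-1, keepdim=True)  # L2-normalize for cosine similarity

>>> cos_sim = (embs[0] @ embs[1]).item()           # how close the unsafe prompt lands to the safe one

The expectation with Safe-CLIP is that unsafe text is embedded closer to safe content than it would be with the original CLIP encoder.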

Variations Safe-CLIP comes in four versions to improve compatibility with some of the most popular vision-and-language models employed for I2T and T2I generation tasks. More details are reported in the table below.

| Model | StableDiffusion compatibility | LLaVA compatibility |
|---|---|---|
| safe-CLIP ViT-L-14 | 1.4 | llama-2-13b-chat-lightning-preview |
| safe-CLIP ViT-L-14-336px | - | 1.5 - 1.6 |
| safe-CLIP ViT-H-14 | - | - |
| safe-CLIP SD 2.0 | 2.0 | - |
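
As an example of the Stable Diffusion compatibility listed above, the safe text encoder can be swapped into a diffusers pipeline. This is only a sketch under assumptions: it assumes the diffusers library, the CompVis/stable-diffusion-v1-4 checkpoint, and that the ViT-L-14 variant is available as aimagelab/safeclip_vit-l_14; see the official repository for the exact usage.

>>> from diffusers import StableDiffusionPipeline
>>> from transformers import CLIPTextModel

>>> # Assumption: the ViT-L-14 variant pairs with Stable Diffusion 1.4 (see table above)
>>> safe_text_encoder = CLIPTextModel.from_pretrained("aimagelab/safeclip_vit-l_14")

>>> pipe = StableDiffusionPipeline.from_pretrained(
...     "CompVis/stable-diffusion-v1-4",
...     text_encoder=safe_text_encoder,
... )
>>> image = pipe("a photo of a sunny beach").images[0]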

Model Release Date: 9 July 2024.

For more information about the model, training details, dataset, and evaluation, please refer to the paper. You can also find the downstream-task example code in the repository of the paper here.

Applications

Safe-CLIP can be employed in various applications where safety and appropriateness are critical, including cross-modal retrieval, text-to-image, and image-to-text generation. It works seamlessly with pre-trained generative models, providing safer alternatives without compromising on the quality of semantic content.
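
For instance, text-to-image retrieval with Safe-CLIP reduces to ranking image embeddings by cosine similarity with a text query. The following sketch illustrates this; the gallery file names and the query are placeholders.

>>> import torch
>>> from PIL import Image
>>> from transformers import CLIPModel, CLIPProcessor

>>> model = CLIPModel.from_pretrained("aimagelab/safeclip_vit-l_14_336")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14-336")

>>> # Placeholder gallery of local images and a placeholder text query
>>> gallery = [Image.open(p) for p in ["img1.jpg", "img2.jpg", "img3.jpg"]]
>>> query = "a group of friends playing football in a park"

>>> text_inputs = processor(text=[query], return_tensors="pt", padding=True)
>>> image_inputs = processor(images=gallery, return_tensors="pt")

>>> with torch.no_grad():
...     text_emb = model.get_text_features(**text_inputs)
...     image_embs = model.get_image_features(**image_inputs)

>>> text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
>>> image_embs = image_embs / image_embs.norm(dim=-1, keepdim=True)
>>> ranking = (image_embs @ text_emb.T).squeeze(1).argsort(descending=True)  # indices of best matches first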

Downstream Use

More example code is available in the official Safe-CLIP repo.

Zero-shot classification example

>>> import requests
>>> from PIL import Image
>>> from transformers import CLIPModel, CLIPProcessor

>>> model_id = "aimagelab/safeclip_vit-l_14_336"

>>> model = CLIPModel.from_pretrained(model_id)
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14-336")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True)

>>> outputs = model(**inputs)
>>> logits_per_image = outputs.logits_per_image  # image-text similarity scores
>>> probs = logits_per_image.softmax(dim=1)  # softmax over the candidate captions gives label probabilities

Citation

Please cite with the following BibTeX:

@article{poppi2024removing,
  title={{Safe-CLIP: Removing NSFW Concepts from Vision-and-Language Models}},
  author={Poppi, Samuele and Poppi, Tobia and Cocchi, Federico and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
  journal={arXiv preprint arXiv:2311.16254},
  year={2024}
}