CLIP

Contrastive Language-Image Pre-training (CLIP) model pre-trained on LAION-2B at resolution 224x224. The approach was introduced in the paper Learning Transferable Visual Models From Natural Language Supervision and further reproduced at scale in the follow-up paper Reproducible scaling laws for contrastive language-image learning. The weights were converted from laion/CLIP-ViT-bigG-14-laion2B-39B-b160k, part of the OpenCLIP LAION-2B collection.
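
A minimal zero-shot classification sketch, assuming the converted weights load with the Hugging Face transformers CLIPModel and that a compatible CLIPProcessor is available under the same repo id (if not, the processor from the original laion/CLIP-ViT-bigG-14-laion2B-39B-b160k checkpoint should work):

```python
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Repo id taken from this model card; the processor is assumed to be
# bundled here as well -- otherwise use the one from the original
# OpenCLIP checkpoint laion/CLIP-ViT-bigG-14-laion2B-39B-b160k.
model_id = "cs-giung/clip-vit-gigantic-patch14-laion2b"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Zero-shot classification: score an image against candidate captions.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = ["a photo of a cat", "a photo of a dog"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)  # image-text similarities as probabilities
print(dict(zip(texts, probs[0].tolist())))
```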
