OpenCLIP (LAION-2B)
A Contrastive Language-Image Pre-training (CLIP) model pre-trained on LAION-2B at resolution 224x224. CLIP was introduced in the paper *Learning Transferable Visual Models From Natural Language Supervision*, and its training was reproduced at scale in the follow-up paper *Reproducible scaling laws for contrastive language-image learning*.
The weights were converted from the laion/CLIP-ViT-bigG-14-laion2B-39B-b160k checkpoint presented in the original OpenCLIP LAION-2B collection.
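
Below is a minimal zero-shot classification sketch, assuming the converted weights load through the Hugging Face `transformers` CLIP classes; the checkpoint ID is the original laion/CLIP-ViT-bigG-14-laion2B-39B-b160k referenced above, and the image URL and candidate labels are placeholder examples:

```python
import requests
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumed checkpoint ID: the original laion repo mentioned above.
model_id = "laion/CLIP-ViT-bigG-14-laion2B-39B-b160k"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Placeholder example image (COCO validation sample).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Candidate labels for zero-shot classification.
labels = ["a photo of a cat", "a photo of a dog"]
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns
# them into label probabilities for this image.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```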