---
license: mit
---

# CLIP

A Contrastive Language-Image Pre-training (CLIP) model pre-trained on LAION-2B at resolution 224x224. The CLIP approach was introduced in the paper *Learning Transferable Visual Models From Natural Language Supervision*, and this model was reproduced in the follow-up paper *Reproducible scaling laws for contrastive language-image learning*. The weights were converted from the laion/CLIP-ViT-H-14-laion2B-s32B-b79K checkpoint released in the OpenCLIP LAION-2B collection.
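
## Usage

Below is a minimal zero-shot classification sketch, assuming the converted weights load through Hugging Face transformers' `CLIPModel` and `CLIPProcessor` (if this checkpoint targets a different loader, adapt accordingly). The repository id is a placeholder to be replaced with this model's id, and the image URL is the standard COCO example from the transformers documentation.

```python
from PIL import Image
import requests
from transformers import CLIPModel, CLIPProcessor

model_id = "path/to/this-checkpoint"  # placeholder: replace with this repository's id
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Standard COCO example image used in the transformers documentation.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Encode candidate captions and the image together.
inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True,
)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax gives label probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
print(probs)
```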