---
tags:
- pytorch_model_hub_mixin
- model_hub_mixin
license: mit
datasets:
- mlfoundations/datacomp_1b
---

## Official implementation of pre-trained ViT-B/16 ProLIP on DataComp 1B

- This checkpoint contains pre-trained ViT-B/16 ProLIP weights.
- Pre-training dataset: DataComp 1B (12.8B seen samples)

### Overview
- Paper: https://arxiv.org/abs/2410.18857
- GitHub: https://github.com/naver-ai/prolip
- More models are available at https://huggingface.co/collections/SanghyukChun/prolip-6712595dfc87fd8597350291

### Performance overview
- Zero-shot ImageNet-1k top-1 accuracy: 74.6%
- Zero-shot accuracy on ImageNet distribution shifts: 63.0%
- Zero-shot VTAB performance: 63.7%
- Zero-shot retrieval performance: 59.6%
- Average zero-shot performance across 38 tasks: 63.3%
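
For intuition, the zero-shot classification protocol behind these numbers can be sketched as follows. This is an illustrative sketch, not the official evaluation code: the label set and the single prompt template are placeholder assumptions, and the reported scores come from the full evaluation suite with prompt ensembles.

```python
# Illustrative zero-shot classification sketch (not the official evaluation code).
import requests
import torch
from PIL import Image

from prolip.model import ProLIPHF
from prolip.tokenizer import HFTokenizer
from transformers import CLIPProcessor

processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")
model = ProLIPHF.from_pretrained("SanghyukChun/ProLIP-ViT-B-16-DC-1B-12_8M")
tokenizer = HFTokenizer("timm/ViT-B-16-SigLIP", context_length=64, clean="canonicalize")

class_names = ["cat", "dog", "car"]  # placeholder label set
prompts = tokenizer([f"a photo of a {name}" for name in class_names])

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
pixel_values = processor(images=image, return_tensors="pt")["pixel_values"]

with torch.no_grad():
    outputs = model(image=pixel_values, text=prompts)

# Score classes by mean-embedding similarity, penalized by each prompt's
# text uncertainty (the CSD scoring used in the usage example below).
logits = outputs["image_features"]["mean"] @ outputs["text_features"]["mean"].T
logits = logits - 0.5 * torch.exp(outputs["text_features"]["std"]).sum(dim=-1)
print("Predicted class:", class_names[logits.argmax(dim=-1).item()])
```

The full usage example below additionally shows the retrieval-style logits and the per-modality uncertainty values.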

```python
import requests
from PIL import Image

import torch
from prolip.model import ProLIPHF
from transformers import CLIPProcessor
from prolip.tokenizer import HFTokenizer

import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)

# Image preprocessing reuses the OpenAI CLIP processor; text is tokenized with
# the SigLIP tokenizer (64-token context, canonicalized text).
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch16")
model = ProLIPHF.from_pretrained("SanghyukChun/ProLIP-ViT-B-16-DC-1B-12_8M")
tokenizer = HFTokenizer("timm/ViT-B-16-SigLIP", context_length=64, clean="canonicalize")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt", padding=True)
texts = [
    "A couple of cats laying on top of a pink blanket.",
    "A man walks through a flooded road during a rainstorm",
    "photo",
]
texts = tokenizer(texts)

outputs = model(image=inputs["pixel_values"], text=texts)

# ProLIP embeddings are Gaussian: a "mean" vector plus a log-variance ("std") vector.
# Mean-only similarity; equivalent to ranking by L2 distance for normalized means.
l2_logit = outputs["image_features"]["mean"] @ outputs["text_features"]["mean"].T
# Total variance per embedding, used as a scalar uncertainty measure.
i_unc = torch.exp(outputs["image_features"]["std"]).sum(dim=-1)
t_unc = torch.exp(outputs["text_features"]["std"]).sum(dim=-1)
# CSD logits: penalize each candidate by half of its total variance.
csd_logit = l2_logit - 0.5 * t_unc
csd_logit2 = l2_logit.T - 0.5 * i_unc
print("Mean-only image-to-text logits (by L2 distance):", l2_logit)
print("Uncertainty-aware image-to-text logits (by CSD):", csd_logit)
print("Uncertainty-aware text-to-image logits (by CSD):", csd_logit2.T)
print("Image uncertainty: ", i_unc)
print("Text uncertainty: ", t_unc)
```
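
For reference, the CSD scoring in the snippet matches the closed-form expected squared distance between the two Gaussian embeddings. For independent $Z_v \sim \mathcal{N}(\mu_v, \mathrm{diag}(\sigma_v^2))$ and $Z_t \sim \mathcal{N}(\mu_t, \mathrm{diag}(\sigma_t^2))$:

$$
\mathbb{E}\left[\lVert Z_v - Z_t \rVert_2^2\right] = \lVert \mu_v - \mu_t \rVert_2^2 + \lVert \sigma_v \rVert_2^2 + \lVert \sigma_t \rVert_2^2 .
$$

Assuming unit-normalized mean embeddings (as labeling the inner product an "L2 distance" logit suggests), $\lVert \mu_v - \mu_t \rVert_2^2 = 2 - 2\,\mu_v^\top \mu_t$; when ranking texts for a fixed image, $\lVert \sigma_v \rVert_2^2$ is also constant, so $-\tfrac{1}{2}\,\mathbb{E}\lVert Z_v - Z_t \rVert_2^2$ reduces, up to constants, to $\mu_v^\top \mu_t - \tfrac{1}{2}\lVert \sigma_t \rVert_2^2$, i.e. `l2_logit - 0.5 * t_unc` above (the snippet treats the `std` output as a log-variance, hence the `torch.exp`). The text-to-image case instead drops the constant text term, giving `csd_logit2`.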

### Citation

```bibtex
@article{chun2024prolip,
  title={Probabilistic Language-Image Pre-Training},
  author={Chun, Sanghyuk and Kim, Wonjae and Park, Song and Yun, Sangdoo},
  journal={arXiv preprint arXiv:2410.18857},
  year={2024}
}
```