yael-vinker committed on
Commit 253b0de
1 parent: 3f0a2ea
CLIP_/LICENSE ADDED
@@ -0,0 +1,22 @@
MIT License

Copyright (c) 2021 OpenAI

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
CLIP_/MANIFEST.in ADDED
@@ -0,0 +1 @@
include clip/bpe_simple_vocab_16e6.txt.gz
CLIP_/README.md ADDED
@@ -0,0 +1,193 @@
# CLIP

[[Blog]](https://openai.com/blog/clip/) [[Paper]](https://arxiv.org/abs/2103.00020) [[Model Card]](model-card.md) [[Colab]](https://colab.research.google.com/github/openai/clip/blob/master/notebooks/Interacting_with_CLIP.ipynb)

CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet for a given image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and GPT-3. We found that CLIP matches the performance of the original ResNet50 on ImageNet "zero-shot", without using any of the original 1.28M labeled examples, overcoming several major challenges in computer vision.

## Approach

![CLIP](CLIP.png)

## Usage

First, [install PyTorch 1.7.1](https://pytorch.org/get-started/locally/) and torchvision, as well as a few small additional dependencies, and then install this repo as a Python package. On a CUDA GPU machine, the following will do the trick:

```bash
$ conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=11.0
$ pip install ftfy regex tqdm
$ pip install git+https://github.com/openai/CLIP.git
```

Replace `cudatoolkit=11.0` above with the appropriate CUDA version for your machine, or with `cpuonly` when installing on a machine without a GPU.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)  # prints: [[0.9927937 0.00421068 0.00299572]]
```

## API

The CLIP module `clip` provides the following methods:

#### `clip.available_models()`

Returns the names of the available CLIP models.

#### `clip.load(name, device=..., jit=True)`

Returns the model and the TorchVision transform needed by the model, as specified by a model name returned by `clip.available_models()`. It will download the model as necessary. The `name` argument can also be a path to a local checkpoint.

The device to run the model on can be optionally specified; the default is to use the first CUDA device if there is one, otherwise the CPU. When `jit` is `False`, a non-JIT version of the model is loaded.
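For example, a minimal sketch of loading the more hackable non-JIT model (reusing the `ViT-B/32` name from above) could look like this:

```python
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"

# jit=False skips the TorchScript archive and builds a regular nn.Module,
# which is easier to inspect, modify, or fine-tune.
model, preprocess = clip.load("ViT-B/32", device=device, jit=False)
print(type(model))  # a plain torch.nn.Module subclass
```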
#### `clip.tokenize(text: Union[str, List[str]], context_length=77)`

Returns a LongTensor containing tokenized sequences of the given text input(s). This can be used as the input to the model.
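As a quick illustration (a minimal sketch), the tokenizer pads every sequence to the context length, so the result always has shape `[number of inputs, 77]`:

```python
import clip

# Prompts of different lengths are padded to the same context_length of 77.
tokens = clip.tokenize(["a diagram", "a photo of a dog sitting on a couch"])
print(tokens.shape)  # torch.Size([2, 77])
print(tokens.dtype)  # torch.int64 (a LongTensor)
```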
---

The model returned by `clip.load()` supports the following methods:

#### `model.encode_image(image: Tensor)`

Given a batch of images, returns the image features encoded by the vision portion of the CLIP model.

#### `model.encode_text(text: Tensor)`

Given a batch of text tokens, returns the text features encoded by the language portion of the CLIP model.

#### `model(image: Tensor, text: Tensor)`

Given a batch of images and a batch of text tokens, returns two Tensors containing the logit scores corresponding to each image and text input. The values are cosine similarities between the corresponding image and text features, times 100.
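Since the logits are just scaled cosine similarities, the same scores can be recovered from the encoded features directly. The sketch below (reusing `model`, `image`, and `text` from the Usage example above) makes that relationship explicit:

```python
import torch

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

    # Cosine similarity is the dot product of the L2-normalized features.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    cosine_sim = image_features @ text_features.T

    logits_per_image, _ = model(image, text)

print(100.0 * cosine_sim)  # approximately equal to logits_per_image
print(logits_per_image)
```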
## More Examples

### Zero-Shot Prediction

The code below performs zero-shot prediction using CLIP, as shown in Appendix B of the paper. This example takes an image from the [CIFAR-100 dataset](https://www.cs.toronto.edu/~kriz/cifar.html) and predicts the most likely labels among the 100 textual labels from the dataset.

```python
import os
import clip
import torch
from torchvision.datasets import CIFAR100

# Load the model
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load('ViT-B/32', device)

# Download the dataset
cifar100 = CIFAR100(root=os.path.expanduser("~/.cache"), download=True, train=False)

# Prepare the inputs
image, class_id = cifar100[3637]
image_input = preprocess(image).unsqueeze(0).to(device)
text_inputs = torch.cat([clip.tokenize(f"a photo of a {c}") for c in cifar100.classes]).to(device)

# Calculate features
with torch.no_grad():
    image_features = model.encode_image(image_input)
    text_features = model.encode_text(text_inputs)

# Pick the top 5 most similar labels for the image
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
similarity = (100.0 * image_features @ text_features.T).softmax(dim=-1)
values, indices = similarity[0].topk(5)

# Print the result
print("\nTop predictions:\n")
for value, index in zip(values, indices):
    print(f"{cifar100.classes[index]:>16s}: {100 * value.item():.2f}%")
```

The output will look like the following (the exact numbers may be slightly different depending on the compute device):

```
Top predictions:

           snake: 65.31%
          turtle: 12.29%
    sweet_pepper: 3.83%
          lizard: 1.88%
       crocodile: 1.75%
```

Note that this example uses the `encode_image()` and `encode_text()` methods, which return the encoded features of the given inputs.


### Linear-probe evaluation

The example below uses [scikit-learn](https://scikit-learn.org/) to perform logistic regression on image features.

```python
import os
import clip
import torch

import numpy as np
from sklearn.linear_model import LogisticRegression
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR100
from tqdm import tqdm

# Load the model
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load('ViT-B/32', device)

# Load the dataset
root = os.path.expanduser("~/.cache")
train = CIFAR100(root, download=True, train=True, transform=preprocess)
test = CIFAR100(root, download=True, train=False, transform=preprocess)


def get_features(dataset):
    all_features = []
    all_labels = []

    with torch.no_grad():
        for images, labels in tqdm(DataLoader(dataset, batch_size=100)):
            features = model.encode_image(images.to(device))

            all_features.append(features)
            all_labels.append(labels)

    return torch.cat(all_features).cpu().numpy(), torch.cat(all_labels).cpu().numpy()

# Calculate the image features
train_features, train_labels = get_features(train)
test_features, test_labels = get_features(test)

# Perform logistic regression
classifier = LogisticRegression(random_state=0, C=0.316, max_iter=1000, verbose=1)
classifier.fit(train_features, train_labels)

# Evaluate using the logistic regression classifier
predictions = classifier.predict(test_features)
accuracy = np.mean((test_labels == predictions).astype(float)) * 100.
print(f"Accuracy = {accuracy:.3f}")
```

Note that the `C` value should be determined via a hyperparameter sweep using a validation split.
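One possible way to run such a sweep (a rough sketch, not part of the original example; the grid and split size are arbitrary choices) is to hold out part of the training features and pick the `C` that maximizes validation accuracy:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hold out 10% of the training set as a validation split (arbitrary choice).
tr_x, val_x, tr_y, val_y = train_test_split(
    train_features, train_labels, test_size=0.1, random_state=0)

best_c, best_acc = None, -1.0
for c in np.logspace(-3, 3, 7):  # coarse logarithmic grid over C
    clf = LogisticRegression(random_state=0, C=c, max_iter=1000)
    clf.fit(tr_x, tr_y)
    acc = clf.score(val_x, val_y)
    if acc > best_acc:
        best_c, best_acc = c, acc

print(f"Best C = {best_c:.3f} (validation accuracy {100 * best_acc:.2f}%)")
```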
CLIP_/astronaut.png ADDED
CLIP_/clip/__init__.py ADDED
@@ -0,0 +1 @@
from .clip import *
CLIP_/clip/auxilary.py ADDED
@@ -0,0 +1,422 @@
1
+ import torch
2
+ import warnings
3
+ from typing import Tuple, Optional
4
+
5
+ import torch
6
+ from torch import Tensor
7
+ from torch.nn.init import xavier_uniform_
8
+ from torch.nn.init import constant_
9
+ from torch.nn.init import xavier_normal_
10
+ from torch.nn.parameter import Parameter
11
+ from torch.nn import functional as F
12
+
13
+ # We define this function as _pad because it takes an argument
14
+ # named pad, which clobbers the recursive reference to the pad
15
+ # function needed for __torch_function__ support
16
+ pad = F._pad
17
+
18
+ # This class exists solely for Transformer; it has an annotation stating
19
+ # that bias is never None, which appeases TorchScript
20
+ class _LinearWithBias(torch.nn.Linear):
21
+ bias: Tensor
22
+
23
+ def __init__(self, in_features: int, out_features: int) -> None:
24
+ super().__init__(in_features, out_features, bias=True)
25
+
26
+ def multi_head_attention_forward(query: Tensor,
27
+ key: Tensor,
28
+ value: Tensor,
29
+ embed_dim_to_check: int,
30
+ num_heads: int,
31
+ in_proj_weight: Tensor,
32
+ in_proj_bias: Tensor,
33
+ bias_k: Optional[Tensor],
34
+ bias_v: Optional[Tensor],
35
+ add_zero_attn: bool,
36
+ dropout_p: float,
37
+ out_proj_weight: Tensor,
38
+ out_proj_bias: Tensor,
39
+ training: bool = True,
40
+ key_padding_mask: Optional[Tensor] = None,
41
+ need_weights: bool = True,
42
+ attn_mask: Optional[Tensor] = None,
43
+ use_separate_proj_weight: bool = False,
44
+ q_proj_weight: Optional[Tensor] = None,
45
+ k_proj_weight: Optional[Tensor] = None,
46
+ v_proj_weight: Optional[Tensor] = None,
47
+ static_k: Optional[Tensor] = None,
48
+ static_v: Optional[Tensor] = None,
49
+ attention_probs_forward_hook = None,
50
+ attention_probs_backwards_hook = None,
51
+ ) -> Tuple[Tensor, Optional[Tensor]]:
52
+ if not torch.jit.is_scripting():
53
+ tens_ops = (query, key, value, in_proj_weight, in_proj_bias, bias_k, bias_v,
54
+ out_proj_weight, out_proj_bias)
55
+ if any([type(t) is not Tensor for t in tens_ops]) and F.has_torch_function(tens_ops):
56
+ return F.handle_torch_function(
57
+ multi_head_attention_forward, tens_ops, query, key, value,
58
+ embed_dim_to_check, num_heads, in_proj_weight, in_proj_bias,
59
+ bias_k, bias_v, add_zero_attn, dropout_p, out_proj_weight,
60
+ out_proj_bias, training=training, key_padding_mask=key_padding_mask,
61
+ need_weights=need_weights, attn_mask=attn_mask,
62
+ use_separate_proj_weight=use_separate_proj_weight,
63
+ q_proj_weight=q_proj_weight, k_proj_weight=k_proj_weight,
64
+ v_proj_weight=v_proj_weight, static_k=static_k, static_v=static_v)
65
+ tgt_len, bsz, embed_dim = query.size()
66
+ assert embed_dim == embed_dim_to_check
67
+ # allow MHA to have different sizes for the feature dimension
68
+ assert key.size(0) == value.size(0) and key.size(1) == value.size(1)
69
+
70
+ head_dim = embed_dim // num_heads
71
+ assert head_dim * num_heads == embed_dim, "embed_dim must be divisible by num_heads"
72
+ scaling = float(head_dim) ** -0.5
73
+
74
+ if not use_separate_proj_weight:
75
+ if torch.equal(query, key) and torch.equal(key, value):
76
+ # self-attention
77
+ q, k, v = F.linear(query, in_proj_weight, in_proj_bias).chunk(3, dim=-1)
78
+
79
+ elif torch.equal(key, value):
80
+ # encoder-decoder attention
81
+ # This is inline in_proj function with in_proj_weight and in_proj_bias
82
+ _b = in_proj_bias
83
+ _start = 0
84
+ _end = embed_dim
85
+ _w = in_proj_weight[_start:_end, :]
86
+ if _b is not None:
87
+ _b = _b[_start:_end]
88
+ q = F.linear(query, _w, _b)
89
+
90
+ if key is None:
91
+ assert value is None
92
+ k = None
93
+ v = None
94
+ else:
95
+
96
+ # This is inline in_proj function with in_proj_weight and in_proj_bias
97
+ _b = in_proj_bias
98
+ _start = embed_dim
99
+ _end = None
100
+ _w = in_proj_weight[_start:, :]
101
+ if _b is not None:
102
+ _b = _b[_start:]
103
+ k, v = F.linear(key, _w, _b).chunk(2, dim=-1)
104
+
105
+ else:
106
+ # This is inline in_proj function with in_proj_weight and in_proj_bias
107
+ _b = in_proj_bias
108
+ _start = 0
109
+ _end = embed_dim
110
+ _w = in_proj_weight[_start:_end, :]
111
+ if _b is not None:
112
+ _b = _b[_start:_end]
113
+ q = F.linear(query, _w, _b)
114
+
115
+ # This is inline in_proj function with in_proj_weight and in_proj_bias
116
+ _b = in_proj_bias
117
+ _start = embed_dim
118
+ _end = embed_dim * 2
119
+ _w = in_proj_weight[_start:_end, :]
120
+ if _b is not None:
121
+ _b = _b[_start:_end]
122
+ k = F.linear(key, _w, _b)
123
+
124
+ # This is inline in_proj function with in_proj_weight and in_proj_bias
125
+ _b = in_proj_bias
126
+ _start = embed_dim * 2
127
+ _end = None
128
+ _w = in_proj_weight[_start:, :]
129
+ if _b is not None:
130
+ _b = _b[_start:]
131
+ v = F.linear(value, _w, _b)
132
+ else:
133
+ q_proj_weight_non_opt = torch.jit._unwrap_optional(q_proj_weight)
134
+ len1, len2 = q_proj_weight_non_opt.size()
135
+ assert len1 == embed_dim and len2 == query.size(-1)
136
+
137
+ k_proj_weight_non_opt = torch.jit._unwrap_optional(k_proj_weight)
138
+ len1, len2 = k_proj_weight_non_opt.size()
139
+ assert len1 == embed_dim and len2 == key.size(-1)
140
+
141
+ v_proj_weight_non_opt = torch.jit._unwrap_optional(v_proj_weight)
142
+ len1, len2 = v_proj_weight_non_opt.size()
143
+ assert len1 == embed_dim and len2 == value.size(-1)
144
+
145
+ if in_proj_bias is not None:
146
+ q = F.linear(query, q_proj_weight_non_opt, in_proj_bias[0:embed_dim])
147
+ k = F.linear(key, k_proj_weight_non_opt, in_proj_bias[embed_dim:(embed_dim * 2)])
148
+ v = F.linear(value, v_proj_weight_non_opt, in_proj_bias[(embed_dim * 2):])
149
+ else:
150
+ q = F.linear(query, q_proj_weight_non_opt, in_proj_bias)
151
+ k = F.linear(key, k_proj_weight_non_opt, in_proj_bias)
152
+ v = F.linear(value, v_proj_weight_non_opt, in_proj_bias)
153
+ q = q * scaling
154
+
155
+ if attn_mask is not None:
156
+ assert attn_mask.dtype == torch.float32 or attn_mask.dtype == torch.float64 or \
157
+ attn_mask.dtype == torch.float16 or attn_mask.dtype == torch.uint8 or attn_mask.dtype == torch.bool, \
158
+ 'Only float, byte, and bool types are supported for attn_mask, not {}'.format(attn_mask.dtype)
159
+ if attn_mask.dtype == torch.uint8:
160
+ warnings.warn("Byte tensor for attn_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead.")
161
+ attn_mask = attn_mask.to(torch.bool)
162
+
163
+ if attn_mask.dim() == 2:
164
+ attn_mask = attn_mask.unsqueeze(0)
165
+ if list(attn_mask.size()) != [1, query.size(0), key.size(0)]:
166
+ raise RuntimeError('The size of the 2D attn_mask is not correct.')
167
+ elif attn_mask.dim() == 3:
168
+ if list(attn_mask.size()) != [bsz * num_heads, query.size(0), key.size(0)]:
169
+ raise RuntimeError('The size of the 3D attn_mask is not correct.')
170
+ else:
171
+ raise RuntimeError("attn_mask's dimension {} is not supported".format(attn_mask.dim()))
172
+ # attn_mask's dim is 3 now.
173
+
174
+ # convert ByteTensor key_padding_mask to bool
175
+ if key_padding_mask is not None and key_padding_mask.dtype == torch.uint8:
176
+ warnings.warn("Byte tensor for key_padding_mask in nn.MultiheadAttention is deprecated. Use bool tensor instead.")
177
+ key_padding_mask = key_padding_mask.to(torch.bool)
178
+
179
+ if bias_k is not None and bias_v is not None:
180
+ if static_k is None and static_v is None:
181
+ k = torch.cat([k, bias_k.repeat(1, bsz, 1)])
182
+ v = torch.cat([v, bias_v.repeat(1, bsz, 1)])
183
+ if attn_mask is not None:
184
+ attn_mask = pad(attn_mask, (0, 1))
185
+ if key_padding_mask is not None:
186
+ key_padding_mask = pad(key_padding_mask, (0, 1))
187
+ else:
188
+ assert static_k is None, "bias cannot be added to static key."
189
+ assert static_v is None, "bias cannot be added to static value."
190
+ else:
191
+ assert bias_k is None
192
+ assert bias_v is None
193
+
194
+ q = q.contiguous().view(tgt_len, bsz * num_heads, head_dim).transpose(0, 1)
195
+ if k is not None:
196
+ k = k.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1)
197
+ if v is not None:
198
+ v = v.contiguous().view(-1, bsz * num_heads, head_dim).transpose(0, 1)
199
+
200
+ if static_k is not None:
201
+ assert static_k.size(0) == bsz * num_heads
202
+ assert static_k.size(2) == head_dim
203
+ k = static_k
204
+
205
+ if static_v is not None:
206
+ assert static_v.size(0) == bsz * num_heads
207
+ assert static_v.size(2) == head_dim
208
+ v = static_v
209
+
210
+ src_len = k.size(1)
211
+
212
+ if key_padding_mask is not None:
213
+ assert key_padding_mask.size(0) == bsz
214
+ assert key_padding_mask.size(1) == src_len
215
+
216
+ if add_zero_attn:
217
+ src_len += 1
218
+ k = torch.cat([k, torch.zeros((k.size(0), 1) + k.size()[2:], dtype=k.dtype, device=k.device)], dim=1)
219
+ v = torch.cat([v, torch.zeros((v.size(0), 1) + v.size()[2:], dtype=v.dtype, device=v.device)], dim=1)
220
+ if attn_mask is not None:
221
+ attn_mask = pad(attn_mask, (0, 1))
222
+ if key_padding_mask is not None:
223
+ key_padding_mask = pad(key_padding_mask, (0, 1))
224
+
225
+ attn_output_weights = torch.bmm(q, k.transpose(1, 2))
226
+ assert list(attn_output_weights.size()) == [bsz * num_heads, tgt_len, src_len]
227
+
228
+ if attn_mask is not None:
229
+ if attn_mask.dtype == torch.bool:
230
+ attn_output_weights.masked_fill_(attn_mask, float('-inf'))
231
+ else:
232
+ attn_output_weights += attn_mask
233
+
234
+
235
+ if key_padding_mask is not None:
236
+ attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)
237
+ attn_output_weights = attn_output_weights.masked_fill(
238
+ key_padding_mask.unsqueeze(1).unsqueeze(2),
239
+ float('-inf'),
240
+ )
241
+ attn_output_weights = attn_output_weights.view(bsz * num_heads, tgt_len, src_len)
242
+
243
+ attn_output_weights = F.softmax(
244
+ attn_output_weights, dim=-1)
245
+ attn_output_weights = F.dropout(attn_output_weights, p=dropout_p, training=training)
246
+
247
+ # use hooks for the attention weights if necessary
248
+ if attention_probs_forward_hook is not None and attention_probs_backwards_hook is not None:
249
+ attention_probs_forward_hook(attn_output_weights)
250
+ attn_output_weights.register_hook(attention_probs_backwards_hook)
251
+
252
+ attn_output = torch.bmm(attn_output_weights, v)
253
+ assert list(attn_output.size()) == [bsz * num_heads, tgt_len, head_dim]
254
+ attn_output = attn_output.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim)
255
+ attn_output = F.linear(attn_output, out_proj_weight, out_proj_bias)
256
+
257
+ if need_weights:
258
+ # average attention weights over heads
259
+ attn_output_weights = attn_output_weights.view(bsz, num_heads, tgt_len, src_len)
260
+ return attn_output, attn_output_weights.sum(dim=1) / num_heads
261
+ else:
262
+ return attn_output, None
263
+
264
+
265
+ class MultiheadAttention(torch.nn.Module):
266
+ r"""Allows the model to jointly attend to information
267
+ from different representation subspaces.
268
+ See reference: Attention Is All You Need
269
+
270
+ .. math::
271
+ \text{MultiHead}(Q, K, V) = \text{Concat}(head_1,\dots,head_h)W^O
272
+ \text{where} head_i = \text{Attention}(QW_i^Q, KW_i^K, VW_i^V)
273
+
274
+ Args:
275
+ embed_dim: total dimension of the model.
276
+ num_heads: parallel attention heads.
277
+ dropout: a Dropout layer on attn_output_weights. Default: 0.0.
278
+ bias: add bias as module parameter. Default: True.
279
+ add_bias_kv: add bias to the key and value sequences at dim=0.
280
+ add_zero_attn: add a new batch of zeros to the key and
281
+ value sequences at dim=1.
282
+ kdim: total number of features in key. Default: None.
283
+ vdim: total number of features in value. Default: None.
284
+
285
+ Note: if kdim and vdim are None, they will be set to embed_dim such that
286
+ query, key, and value have the same number of features.
287
+
288
+ Examples::
289
+
290
+ >>> multihead_attn = nn.MultiheadAttention(embed_dim, num_heads)
291
+ >>> attn_output, attn_output_weights = multihead_attn(query, key, value)
292
+ """
293
+ bias_k: Optional[torch.Tensor]
294
+ bias_v: Optional[torch.Tensor]
295
+
296
+ def __init__(self, embed_dim, num_heads, dropout=0., bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None):
297
+ super(MultiheadAttention, self).__init__()
298
+ self.embed_dim = embed_dim
299
+ self.kdim = kdim if kdim is not None else embed_dim
300
+ self.vdim = vdim if vdim is not None else embed_dim
301
+ self._qkv_same_embed_dim = self.kdim == embed_dim and self.vdim == embed_dim
302
+
303
+ self.num_heads = num_heads
304
+ self.dropout = dropout
305
+ self.head_dim = embed_dim // num_heads
306
+ assert self.head_dim * num_heads == self.embed_dim, "embed_dim must be divisible by num_heads"
307
+
308
+ if self._qkv_same_embed_dim is False:
309
+ self.q_proj_weight = Parameter(torch.Tensor(embed_dim, embed_dim))
310
+ self.k_proj_weight = Parameter(torch.Tensor(embed_dim, self.kdim))
311
+ self.v_proj_weight = Parameter(torch.Tensor(embed_dim, self.vdim))
312
+ self.register_parameter('in_proj_weight', None)
313
+ else:
314
+ self.in_proj_weight = Parameter(torch.empty(3 * embed_dim, embed_dim))
315
+ self.register_parameter('q_proj_weight', None)
316
+ self.register_parameter('k_proj_weight', None)
317
+ self.register_parameter('v_proj_weight', None)
318
+
319
+ if bias:
320
+ self.in_proj_bias = Parameter(torch.empty(3 * embed_dim))
321
+ else:
322
+ self.register_parameter('in_proj_bias', None)
323
+ self.out_proj = _LinearWithBias(embed_dim, embed_dim)
324
+
325
+ if add_bias_kv:
326
+ self.bias_k = Parameter(torch.empty(1, 1, embed_dim))
327
+ self.bias_v = Parameter(torch.empty(1, 1, embed_dim))
328
+ else:
329
+ self.bias_k = self.bias_v = None
330
+
331
+ self.add_zero_attn = add_zero_attn
332
+
333
+ self._reset_parameters()
334
+
335
+ def _reset_parameters(self):
336
+ if self._qkv_same_embed_dim:
337
+ xavier_uniform_(self.in_proj_weight)
338
+ else:
339
+ xavier_uniform_(self.q_proj_weight)
340
+ xavier_uniform_(self.k_proj_weight)
341
+ xavier_uniform_(self.v_proj_weight)
342
+
343
+ if self.in_proj_bias is not None:
344
+ constant_(self.in_proj_bias, 0.)
345
+ constant_(self.out_proj.bias, 0.)
346
+ if self.bias_k is not None:
347
+ xavier_normal_(self.bias_k)
348
+ if self.bias_v is not None:
349
+ xavier_normal_(self.bias_v)
350
+
351
+ def __setstate__(self, state):
352
+ # Support loading old MultiheadAttention checkpoints generated by v1.1.0
353
+ if '_qkv_same_embed_dim' not in state:
354
+ state['_qkv_same_embed_dim'] = True
355
+
356
+ super(MultiheadAttention, self).__setstate__(state)
357
+
358
+ def forward(self, query, key, value, key_padding_mask=None,
359
+ need_weights=True, attn_mask=None, attention_probs_forward_hook=None, attention_probs_backwards_hook=None):
360
+ r"""
361
+ Args:
362
+ query, key, value: map a query and a set of key-value pairs to an output.
363
+ See "Attention Is All You Need" for more details.
364
+ key_padding_mask: if provided, specified padding elements in the key will
365
+ be ignored by the attention. When given a binary mask and a value is True,
366
+ the corresponding value on the attention layer will be ignored. When given
367
+ a byte mask and a value is non-zero, the corresponding value on the attention
368
+ layer will be ignored
369
+ need_weights: output attn_output_weights.
370
+ attn_mask: 2D or 3D mask that prevents attention to certain positions. A 2D mask will be broadcasted for all
371
+ the batches while a 3D mask allows to specify a different mask for the entries of each batch.
372
+
373
+ Shape:
374
+ - Inputs:
375
+ - query: :math:`(L, N, E)` where L is the target sequence length, N is the batch size, E is
376
+ the embedding dimension.
377
+ - key: :math:`(S, N, E)`, where S is the source sequence length, N is the batch size, E is
378
+ the embedding dimension.
379
+ - value: :math:`(S, N, E)` where S is the source sequence length, N is the batch size, E is
380
+ the embedding dimension.
381
+ - key_padding_mask: :math:`(N, S)` where N is the batch size, S is the source sequence length.
382
+ If a ByteTensor is provided, the non-zero positions will be ignored while the position
383
+ with the zero positions will be unchanged. If a BoolTensor is provided, the positions with the
384
+ value of ``True`` will be ignored while the position with the value of ``False`` will be unchanged.
385
+ - attn_mask: 2D mask :math:`(L, S)` where L is the target sequence length, S is the source sequence length.
386
+ 3D mask :math:`(N*num_heads, L, S)` where N is the batch size, L is the target sequence length,
387
+ S is the source sequence length. attn_mask ensure that position i is allowed to attend the unmasked
388
+ positions. If a ByteTensor is provided, the non-zero positions are not allowed to attend
389
+ while the zero positions will be unchanged. If a BoolTensor is provided, positions with ``True``
390
+ is not allowed to attend while ``False`` values will be unchanged. If a FloatTensor
391
+ is provided, it will be added to the attention weight.
392
+
393
+ - Outputs:
394
+ - attn_output: :math:`(L, N, E)` where L is the target sequence length, N is the batch size,
395
+ E is the embedding dimension.
396
+ - attn_output_weights: :math:`(N, L, S)` where N is the batch size,
397
+ L is the target sequence length, S is the source sequence length.
398
+ """
399
+ if not self._qkv_same_embed_dim:
400
+ return multi_head_attention_forward(
401
+ query, key, value, self.embed_dim, self.num_heads,
402
+ self.in_proj_weight, self.in_proj_bias,
403
+ self.bias_k, self.bias_v, self.add_zero_attn,
404
+ self.dropout, self.out_proj.weight, self.out_proj.bias,
405
+ training=self.training,
406
+ key_padding_mask=key_padding_mask, need_weights=need_weights,
407
+ attn_mask=attn_mask, use_separate_proj_weight=True,
408
+ q_proj_weight=self.q_proj_weight, k_proj_weight=self.k_proj_weight,
409
+ v_proj_weight=self.v_proj_weight,
410
+ attention_probs_forward_hook=attention_probs_forward_hook,
411
+ attention_probs_backwards_hook=attention_probs_backwards_hook)
412
+ else:
413
+ return multi_head_attention_forward(
414
+ query, key, value, self.embed_dim, self.num_heads,
415
+ self.in_proj_weight, self.in_proj_bias,
416
+ self.bias_k, self.bias_v, self.add_zero_attn,
417
+ self.dropout, self.out_proj.weight, self.out_proj.bias,
418
+ training=self.training,
419
+ key_padding_mask=key_padding_mask, need_weights=need_weights,
420
+ attn_mask=attn_mask,
421
+ attention_probs_forward_hook=attention_probs_forward_hook,
422
+ attention_probs_backwards_hook=attention_probs_backwards_hook)
CLIP_/clip/clip.py ADDED
@@ -0,0 +1,193 @@
1
+ import hashlib
2
+ import os
3
+ import urllib
4
+ import warnings
5
+ from typing import Union, List
6
+
7
+ import torch
8
+ from PIL import Image
9
+ from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize
10
+ from tqdm import tqdm
11
+
12
+ from .model import build_model
13
+ from .simple_tokenizer import SimpleTokenizer as _Tokenizer
14
+
15
+ __all__ = ["available_models", "load", "tokenize"]
16
+ _tokenizer = _Tokenizer()
17
+
18
+ _MODELS = {
19
+ "RN50": "https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt",
20
+ "RN101": "https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt",
21
+ "RN50x4": "https://openaipublic.azureedge.net/clip/models/7e526bd135e493cef0776de27d5f42653e6b4c8bf9e0f653bb11773263205fdd/RN50x4.pt",
22
+ "ViT-B/32": "https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt",
23
+ }
24
+
25
+
26
+ def _download(url: str, root: str = os.path.expanduser("~/.cache/clip")):
27
+ os.makedirs(root, exist_ok=True)
28
+ filename = os.path.basename(url)
29
+
30
+ expected_sha256 = url.split("/")[-2]
31
+ download_target = os.path.join(root, filename)
32
+
33
+ if os.path.exists(download_target) and not os.path.isfile(download_target):
34
+ raise RuntimeError(f"{download_target} exists and is not a regular file")
35
+
36
+ if os.path.isfile(download_target):
37
+ if hashlib.sha256(open(download_target, "rb").read()).hexdigest() == expected_sha256:
38
+ return download_target
39
+ else:
40
+ warnings.warn(f"{download_target} exists, but the SHA256 checksum does not match; re-downloading the file")
41
+
42
+ with urllib.request.urlopen(url) as source, open(download_target, "wb") as output:
43
+ with tqdm(total=int(source.info().get("Content-Length")), ncols=80, unit='iB', unit_scale=True) as loop:
44
+ while True:
45
+ buffer = source.read(8192)
46
+ if not buffer:
47
+ break
48
+
49
+ output.write(buffer)
50
+ loop.update(len(buffer))
51
+
52
+ if hashlib.sha256(open(download_target, "rb").read()).hexdigest() != expected_sha256:
53
+ raise RuntimeError(f"Model has been downloaded but the SHA256 checksum does not not match")
54
+
55
+ return download_target
56
+
57
+
58
+ def _transform(n_px):
59
+ return Compose([
60
+ Resize(n_px, interpolation=Image.BICUBIC),
61
+ CenterCrop(n_px),
62
+ lambda image: image.convert("RGB"),
63
+ ToTensor(),
64
+ Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)),
65
+ ])
66
+
67
+
68
+ def available_models() -> List[str]:
69
+ """Returns the names of available CLIP models"""
70
+ return list(_MODELS.keys())
71
+
72
+
73
+ def load(name: str, device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", jit=True):
74
+ """Load a CLIP model
75
+
76
+ Parameters
77
+ ----------
78
+ name : str
79
+ A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict
80
+
81
+ device : Union[str, torch.device]
82
+ The device to put the loaded model
83
+
84
+ jit : bool
85
+ Whether to load the optimized JIT model (default) or more hackable non-JIT model.
86
+
87
+ Returns
88
+ -------
89
+ model : torch.nn.Module
90
+ The CLIP model
91
+
92
+ preprocess : Callable[[PIL.Image], torch.Tensor]
93
+ A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input
94
+ """
95
+ if name in _MODELS:
96
+ model_path = _download(_MODELS[name])
97
+ elif os.path.isfile(name):
98
+ model_path = name
99
+ else:
100
+ raise RuntimeError(f"Model {name} not found; available models = {available_models()}")
101
+
102
+ try:
103
+ # loading JIT archive
104
+ model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval()
105
+ state_dict = None
106
+ except RuntimeError:
107
+ # loading saved state dict
108
+ if jit:
109
+ warnings.warn(f"File {model_path} is not a JIT archive. Loading as a state dict instead")
110
+ jit = False
111
+ state_dict = torch.load(model_path, map_location="cpu")
112
+
113
+ if not jit:
114
+ model = build_model(state_dict or model.state_dict()).to(device)
115
+ if str(device) == "cpu":
116
+ model.float()
117
+ return model, _transform(model.visual.input_resolution)
118
+
119
+ # patch the device names
120
+ device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[])
121
+ device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1]
122
+
123
+ def patch_device(module):
124
+ graphs = [module.graph] if hasattr(module, "graph") else []
125
+ if hasattr(module, "forward1"):
126
+ graphs.append(module.forward1.graph)
127
+
128
+ for graph in graphs:
129
+ for node in graph.findAllNodes("prim::Constant"):
130
+ if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"):
131
+ node.copyAttributes(device_node)
132
+
133
+ model.apply(patch_device)
134
+ patch_device(model.encode_image)
135
+ patch_device(model.encode_text)
136
+
137
+ # patch dtype to float32 on CPU
138
+ if str(device) == "cpu":
139
+ float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[])
140
+ float_input = list(float_holder.graph.findNode("aten::to").inputs())[1]
141
+ float_node = float_input.node()
142
+
143
+ def patch_float(module):
144
+ graphs = [module.graph] if hasattr(module, "graph") else []
145
+ if hasattr(module, "forward1"):
146
+ graphs.append(module.forward1.graph)
147
+
148
+ for graph in graphs:
149
+ for node in graph.findAllNodes("aten::to"):
150
+ inputs = list(node.inputs())
151
+ for i in [1, 2]: # dtype can be the second or third argument to aten::to()
152
+ if inputs[i].node()["value"] == 5:
153
+ inputs[i].node().copyAttributes(float_node)
154
+
155
+ model.apply(patch_float)
156
+ patch_float(model.encode_image)
157
+ patch_float(model.encode_text)
158
+
159
+ model.float()
160
+
161
+ return model, _transform(model.input_resolution.item())
162
+
163
+
164
+ def tokenize(texts: Union[str, List[str]], context_length: int = 77) -> torch.LongTensor:
165
+ """
166
+ Returns the tokenized representation of given input string(s)
167
+
168
+ Parameters
169
+ ----------
170
+ texts : Union[str, List[str]]
171
+ An input string or a list of input strings to tokenize
172
+
173
+ context_length : int
174
+ The context length to use; all CLIP models use 77 as the context length
175
+
176
+ Returns
177
+ -------
178
+ A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length]
179
+ """
180
+ if isinstance(texts, str):
181
+ texts = [texts]
182
+
183
+ sot_token = _tokenizer.encoder["<|startoftext|>"]
184
+ eot_token = _tokenizer.encoder["<|endoftext|>"]
185
+ all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts]
186
+ result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
187
+
188
+ for i, tokens in enumerate(all_tokens):
189
+ if len(tokens) > context_length:
190
+ raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
191
+ result[i, :len(tokens)] = torch.tensor(tokens)
192
+
193
+ return result
CLIP_/clip/model.py ADDED
@@ -0,0 +1,442 @@
1
+ from collections import OrderedDict
2
+ from typing import Tuple, Union
3
+
4
+ import numpy as np
5
+ import torch
6
+ import torch.nn.functional as F
7
+ from torch import nn
8
+ from .auxilary import *
9
+
10
+ class Bottleneck(nn.Module):
11
+ expansion = 4
12
+
13
+ def __init__(self, inplanes, planes, stride=1):
14
+ super().__init__()
15
+
16
+ # all conv layers have stride 1. an avgpool is performed after the second convolution when stride > 1
17
+ self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False)
18
+ self.bn1 = nn.BatchNorm2d(planes)
19
+
20
+ self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False)
21
+ self.bn2 = nn.BatchNorm2d(planes)
22
+
23
+ self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity()
24
+
25
+ self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False)
26
+ self.bn3 = nn.BatchNorm2d(planes * self.expansion)
27
+
28
+ self.relu = nn.ReLU(inplace=True)
29
+ self.downsample = None
30
+ self.stride = stride
31
+
32
+ if stride > 1 or inplanes != planes * Bottleneck.expansion:
33
+ # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1
34
+ self.downsample = nn.Sequential(OrderedDict([
35
+ ("-1", nn.AvgPool2d(stride)),
36
+ ("0", nn.Conv2d(inplanes, planes * self.expansion, 1, stride=1, bias=False)),
37
+ ("1", nn.BatchNorm2d(planes * self.expansion))
38
+ ]))
39
+
40
+ def forward(self, x: torch.Tensor):
41
+ identity = x
42
+
43
+ out = self.relu(self.bn1(self.conv1(x)))
44
+ out = self.relu(self.bn2(self.conv2(out)))
45
+ out = self.avgpool(out)
46
+ out = self.bn3(self.conv3(out))
47
+
48
+ if self.downsample is not None:
49
+ identity = self.downsample(x)
50
+
51
+ out += identity
52
+ out = self.relu(out)
53
+ return out
54
+
55
+
56
+ class AttentionPool2d(nn.Module):
57
+ def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None):
58
+ super().__init__()
59
+ self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5)
60
+ self.k_proj = nn.Linear(embed_dim, embed_dim)
61
+ self.q_proj = nn.Linear(embed_dim, embed_dim)
62
+ self.v_proj = nn.Linear(embed_dim, embed_dim)
63
+ self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim)
64
+ self.num_heads = num_heads
65
+
66
+ def forward(self, x):
67
+ x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3]).permute(2, 0, 1) # NCHW -> (HW)NC
68
+ x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC
69
+ x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC
70
+ x, _ = multi_head_attention_forward(
71
+ query=x, key=x, value=x,
72
+ embed_dim_to_check=x.shape[-1],
73
+ num_heads=self.num_heads,
74
+ q_proj_weight=self.q_proj.weight,
75
+ k_proj_weight=self.k_proj.weight,
76
+ v_proj_weight=self.v_proj.weight,
77
+ in_proj_weight=None,
78
+ in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]),
79
+ bias_k=None,
80
+ bias_v=None,
81
+ add_zero_attn=False,
82
+ dropout_p=0,
83
+ out_proj_weight=self.c_proj.weight,
84
+ out_proj_bias=self.c_proj.bias,
85
+ use_separate_proj_weight=True,
86
+ training=self.training,
87
+ need_weights=False
88
+ )
89
+
90
+ return x[0]
91
+
92
+
93
+ class ModifiedResNet(nn.Module):
94
+ """
95
+ A ResNet class that is similar to torchvision's but contains the following changes:
96
+ - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool.
97
+ - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1
98
+ - The final pooling layer is a QKV attention instead of an average pool
99
+ """
100
+
101
+ def __init__(self, layers, output_dim, heads, input_resolution=224, width=64):
102
+ super().__init__()
103
+ self.output_dim = output_dim
104
+ self.input_resolution = input_resolution
105
+
106
+ # the 3-layer stem
107
+ self.conv1 = nn.Conv2d(3, width // 2, kernel_size=3, stride=2, padding=1, bias=False)
108
+ self.bn1 = nn.BatchNorm2d(width // 2)
109
+ self.conv2 = nn.Conv2d(width // 2, width // 2, kernel_size=3, padding=1, bias=False)
110
+ self.bn2 = nn.BatchNorm2d(width // 2)
111
+ self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False)
112
+ self.bn3 = nn.BatchNorm2d(width)
113
+ self.avgpool = nn.AvgPool2d(2)
114
+ self.relu = nn.ReLU(inplace=True)
115
+
116
+ # residual layers
117
+ self._inplanes = width # this is a *mutable* variable used during construction
118
+ self.layer1 = self._make_layer(width, layers[0])
119
+ self.layer2 = self._make_layer(width * 2, layers[1], stride=2)
120
+ self.layer3 = self._make_layer(width * 4, layers[2], stride=2)
121
+ self.layer4 = self._make_layer(width * 8, layers[3], stride=2)
122
+
123
+ embed_dim = width * 32 # the ResNet feature dimension
124
+ self.attnpool = AttentionPool2d(input_resolution // 32, embed_dim, heads, output_dim)
125
+
126
+ def _make_layer(self, planes, blocks, stride=1):
127
+ layers = [Bottleneck(self._inplanes, planes, stride)]
128
+
129
+ self._inplanes = planes * Bottleneck.expansion
130
+ for _ in range(1, blocks):
131
+ layers.append(Bottleneck(self._inplanes, planes))
132
+
133
+ return nn.Sequential(*layers)
134
+
135
+ def forward(self, x):
136
+ def stem(x):
137
+ for conv, bn in [(self.conv1, self.bn1), (self.conv2, self.bn2), (self.conv3, self.bn3)]:
138
+ x = self.relu(bn(conv(x)))
139
+ x = self.avgpool(x)
140
+ return x
141
+
142
+ x = x.type(self.conv1.weight.dtype)
143
+ x = stem(x)
144
+ x = self.layer1(x)
145
+ x = self.layer2(x)
146
+ x = self.layer3(x)
147
+ x = self.layer4(x)
148
+ x = self.attnpool(x)
149
+
150
+ return x
151
+
152
+
153
+ class LayerNorm(nn.LayerNorm):
154
+ """Subclass torch's LayerNorm to handle fp16."""
155
+
156
+ def forward(self, x: torch.Tensor):
157
+ orig_type = x.dtype
158
+ ret = super().forward(x.type(torch.float32))
159
+ return ret.type(orig_type)
160
+
161
+
162
+ class QuickGELU(nn.Module):
163
+ def forward(self, x: torch.Tensor):
164
+ return x * torch.sigmoid(1.702 * x)
165
+
166
+
167
+ class ResidualAttentionBlock(nn.Module):
168
+ def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None):
169
+ super().__init__()
170
+
171
+ self.attn = MultiheadAttention(d_model, n_head)
172
+ self.ln_1 = LayerNorm(d_model)
173
+ self.mlp = nn.Sequential(OrderedDict([
174
+ ("c_fc", nn.Linear(d_model, d_model * 4)),
175
+ ("gelu", QuickGELU()),
176
+ ("c_proj", nn.Linear(d_model * 4, d_model))
177
+ ]))
178
+ self.ln_2 = LayerNorm(d_model)
179
+ self.attn_mask = attn_mask
180
+
181
+ self.attn_probs = None
182
+ self.attn_grad = None
183
+
184
+ def set_attn_probs(self, attn_probs):
185
+ self.attn_probs = attn_probs
186
+
187
+ def set_attn_grad(self, attn_grad):
188
+ self.attn_grad = attn_grad
189
+
190
+ def attention(self, x: torch.Tensor):
191
+ self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None
192
+ return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask, attention_probs_forward_hook=self.set_attn_probs,
193
+ attention_probs_backwards_hook=self.set_attn_grad)[0]
194
+
195
+ def forward(self, x: torch.Tensor):
196
+ x = x + self.attention(self.ln_1(x))
197
+ x = x + self.mlp(self.ln_2(x))
198
+ return x
199
+
200
+
201
+ class Transformer(nn.Module):
202
+ def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None):
203
+ super().__init__()
204
+ self.width = width
205
+ self.layers = layers
206
+ self.resblocks = nn.Sequential(*[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)])
207
+
208
+ def forward(self, x: torch.Tensor):
209
+ return self.resblocks(x)
210
+
211
+
212
+ class VisualTransformer(nn.Module):
213
+ def __init__(self, input_resolution: int, patch_size: int, width: int, layers: int, heads: int, output_dim: int):
214
+ super().__init__()
215
+ self.input_resolution = input_resolution
216
+ self.output_dim = output_dim
217
+ self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False)
218
+
219
+ scale = width ** -0.5
220
+ self.class_embedding = nn.Parameter(scale * torch.randn(width))
221
+ self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width))
222
+ self.ln_pre = LayerNorm(width)
223
+
224
+ self.transformer = Transformer(width, layers, heads)
225
+
226
+ self.ln_post = LayerNorm(width)
227
+ self.proj = nn.Parameter(scale * torch.randn(width, output_dim))
228
+
229
+ def forward(self, x: torch.Tensor):
230
+ x = self.conv1(x) # shape = [*, width, grid, grid]
231
+ x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2]
232
+ x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width]
233
+ x = torch.cat([self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width]
234
+ x = x + self.positional_embedding.to(x.dtype)
235
+ x = self.ln_pre(x)
236
+
237
+ x = x.permute(1, 0, 2) # NLD -> LND
238
+ x = self.transformer(x)
239
+ x = x.permute(1, 0, 2) # LND -> NLD
240
+
241
+ x = self.ln_post(x[:, 0, :])
242
+
243
+ if self.proj is not None:
244
+ x = x @ self.proj
245
+
246
+ return x
247
+
248
+
249
+ class CLIP(nn.Module):
250
+ def __init__(self,
251
+ embed_dim: int,
252
+ # vision
253
+ image_resolution: int,
254
+ vision_layers: Union[Tuple[int, int, int, int], int],
255
+ vision_width: int,
256
+ vision_patch_size: int,
257
+ # text
258
+ context_length: int,
259
+ vocab_size: int,
260
+ transformer_width: int,
261
+ transformer_heads: int,
262
+ transformer_layers: int
263
+ ):
264
+ super().__init__()
265
+
266
+ self.context_length = context_length
267
+
268
+ if isinstance(vision_layers, (tuple, list)):
269
+ vision_heads = vision_width * 32 // 64
270
+ self.visual = ModifiedResNet(
271
+ layers=vision_layers,
272
+ output_dim=embed_dim,
273
+ heads=vision_heads,
274
+ input_resolution=image_resolution,
275
+ width=vision_width
276
+ )
277
+ else:
278
+ vision_heads = vision_width // 64
279
+ self.visual = VisualTransformer(
280
+ input_resolution=image_resolution,
281
+ patch_size=vision_patch_size,
282
+ width=vision_width,
283
+ layers=vision_layers,
284
+ heads=vision_heads,
285
+ output_dim=embed_dim
286
+ )
287
+
288
+ self.transformer = Transformer(
289
+ width=transformer_width,
290
+ layers=transformer_layers,
291
+ heads=transformer_heads,
292
+ attn_mask=self.build_attention_mask()
293
+ )
294
+
295
+ self.vocab_size = vocab_size
296
+ self.token_embedding = nn.Embedding(vocab_size, transformer_width)
297
+ self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width))
298
+ self.ln_final = LayerNorm(transformer_width)
299
+
300
+ self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim))
301
+ self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07))
302
+
303
+ self.initialize_parameters()
304
+
305
+ def initialize_parameters(self):
306
+ nn.init.normal_(self.token_embedding.weight, std=0.02)
307
+ nn.init.normal_(self.positional_embedding, std=0.01)
308
+
309
+ if isinstance(self.visual, ModifiedResNet):
310
+ if self.visual.attnpool is not None:
311
+ std = self.visual.attnpool.c_proj.in_features ** -0.5
312
+ nn.init.normal_(self.visual.attnpool.q_proj.weight, std=std)
313
+ nn.init.normal_(self.visual.attnpool.k_proj.weight, std=std)
314
+ nn.init.normal_(self.visual.attnpool.v_proj.weight, std=std)
315
+ nn.init.normal_(self.visual.attnpool.c_proj.weight, std=std)
316
+
317
+ for resnet_block in [self.visual.layer1, self.visual.layer2, self.visual.layer3, self.visual.layer4]:
318
+ for name, param in resnet_block.named_parameters():
319
+ if name.endswith("bn3.weight"):
320
+ nn.init.zeros_(param)
321
+
322
+ proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5)
323
+ attn_std = self.transformer.width ** -0.5
324
+ fc_std = (2 * self.transformer.width) ** -0.5
325
+ for block in self.transformer.resblocks:
326
+ nn.init.normal_(block.attn.in_proj_weight, std=attn_std)
327
+ nn.init.normal_(block.attn.out_proj.weight, std=proj_std)
328
+ nn.init.normal_(block.mlp.c_fc.weight, std=fc_std)
329
+ nn.init.normal_(block.mlp.c_proj.weight, std=proj_std)
330
+
331
+ if self.text_projection is not None:
332
+ nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5)
333
+
334
+ def build_attention_mask(self):
335
+ # lazily create causal attention mask, with full attention between the vision tokens
336
+ # pytorch uses additive attention mask; fill with -inf
337
+ mask = torch.empty(self.context_length, self.context_length)
338
+ mask.fill_(float("-inf"))
339
+ mask.triu_(1) # zero out the lower diagonal
340
+ return mask
341
+
342
+ @property
343
+ def dtype(self):
344
+ return self.visual.conv1.weight.dtype
345
+
346
+ def encode_image(self, image):
347
+ return self.visual(image.type(self.dtype))
348
+
349
+ def encode_text(self, text):
350
+ x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model]
351
+
352
+ x = x + self.positional_embedding.type(self.dtype)
353
+ x = x.permute(1, 0, 2) # NLD -> LND
354
+ x = self.transformer(x)
355
+ x = x.permute(1, 0, 2) # LND -> NLD
356
+ x = self.ln_final(x).type(self.dtype)
357
+
358
+ # x.shape = [batch_size, n_ctx, transformer.width]
359
+ # take features from the eot embedding (eot_token is the highest number in each sequence)
360
+ x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection
361
+
362
+ return x
363
+
364
+ def forward(self, image, text):
365
+ image_features = self.encode_image(image)
366
+ text_features = self.encode_text(text)
367
+
368
+ # normalized features
369
+ image_features = image_features / image_features.norm(dim=-1, keepdim=True)
370
+ text_features = text_features / text_features.norm(dim=-1, keepdim=True)
371
+
372
+ # cosine similarity as logits
373
+ logit_scale = self.logit_scale.exp()
374
+ logits_per_image = logit_scale * image_features @ text_features.t()
375
+ logits_per_text = logit_scale * text_features @ image_features.t()
376
+
377
+ # shape = [global_batch_size, global_batch_size]
378
+ return logits_per_image, logits_per_text
379
+
380
+
381
+ def convert_weights(model: nn.Module):
382
+ """Convert applicable model parameters to fp16"""
383
+
384
+ def _convert_weights_to_fp16(l):
385
+ if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)):
386
+ l.weight.data = l.weight.data.half()
387
+ if l.bias is not None:
388
+ l.bias.data = l.bias.data.half()
389
+
390
+ if isinstance(l, MultiheadAttention):
391
+ for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]:
392
+ tensor = getattr(l, attr)
393
+ if tensor is not None:
394
+ tensor.data = tensor.data.half()
395
+
396
+ for name in ["text_projection", "proj"]:
397
+ if hasattr(l, name):
398
+ attr = getattr(l, name)
399
+ if attr is not None:
400
+ attr.data = attr.data.half()
401
+
402
+ model.apply(_convert_weights_to_fp16)
403
+
404
+
405
+ def build_model(state_dict: dict):
406
+ vit = "visual.proj" in state_dict
407
+
408
+ if vit:
409
+ vision_width = state_dict["visual.conv1.weight"].shape[0]
410
+ vision_layers = len([k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")])
411
+ vision_patch_size = state_dict["visual.conv1.weight"].shape[-1]
412
+ grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5)
413
+ image_resolution = vision_patch_size * grid_size
414
+ else:
415
+ counts: list = [len(set(k.split(".")[2] for k in state_dict if k.startswith(f"visual.layer{b}"))) for b in [1, 2, 3, 4]]
416
+ vision_layers = tuple(counts)
417
+ vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0]
418
+ output_width = round((state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5)
419
+ vision_patch_size = None
420
+ assert output_width ** 2 + 1 == state_dict["visual.attnpool.positional_embedding"].shape[0]
421
+ image_resolution = output_width * 32
422
+
423
+ embed_dim = state_dict["text_projection"].shape[1]
424
+ context_length = state_dict["positional_embedding"].shape[0]
425
+ vocab_size = state_dict["token_embedding.weight"].shape[0]
426
+ transformer_width = state_dict["ln_final.weight"].shape[0]
427
+ transformer_heads = transformer_width // 64
428
+ transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith(f"transformer.resblocks")))
429
+
430
+ model = CLIP(
431
+ embed_dim,
432
+ image_resolution, vision_layers, vision_width, vision_patch_size,
433
+ context_length, vocab_size, transformer_width, transformer_heads, transformer_layers
434
+ )
435
+
436
+ for key in ["input_resolution", "context_length", "vocab_size"]:
437
+ if key in state_dict:
438
+ del state_dict[key]
439
+
440
+ convert_weights(model)
441
+ model.load_state_dict(state_dict)
442
+ return model.eval()
CLIP_/clip/simple_tokenizer.py ADDED
@@ -0,0 +1,132 @@
1
+ import gzip
2
+ import html
3
+ import os
4
+ from functools import lru_cache
5
+
6
+ import ftfy
7
+ import regex as re
8
+
9
+
10
+ @lru_cache()
11
+ def default_bpe():
12
+ return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz")
13
+
14
+
15
+ @lru_cache()
16
+ def bytes_to_unicode():
17
+ """
18
+ Returns list of utf-8 byte and a corresponding list of unicode strings.
19
+ The reversible bpe codes work on unicode strings.
20
+ This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
21
+ When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
22
+ This is a signficant percentage of your normal, say, 32K bpe vocab.
23
+ To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
24
+ And avoids mapping to whitespace/control characters the bpe code barfs on.
25
+ """
26
+ bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
27
+ cs = bs[:]
28
+ n = 0
29
+ for b in range(2**8):
30
+ if b not in bs:
31
+ bs.append(b)
32
+ cs.append(2**8+n)
33
+ n += 1
34
+ cs = [chr(n) for n in cs]
35
+ return dict(zip(bs, cs))
36
+
37
+
38
+ def get_pairs(word):
39
+ """Return set of symbol pairs in a word.
40
+ Word is represented as tuple of symbols (symbols being variable-length strings).
41
+ """
42
+ pairs = set()
43
+ prev_char = word[0]
44
+ for char in word[1:]:
45
+ pairs.add((prev_char, char))
46
+ prev_char = char
47
+ return pairs
48
+
49
+
50
+ def basic_clean(text):
51
+ text = ftfy.fix_text(text)
52
+ text = html.unescape(html.unescape(text))
53
+ return text.strip()
54
+
55
+
56
+ def whitespace_clean(text):
57
+ text = re.sub(r'\s+', ' ', text)
58
+ text = text.strip()
59
+ return text
60
+
61
+
62
+ class SimpleTokenizer(object):
63
+ def __init__(self, bpe_path: str = default_bpe()):
64
+ self.byte_encoder = bytes_to_unicode()
65
+ self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
66
+ merges = gzip.open(bpe_path).read().decode("utf-8").split('\n')
67
+ merges = merges[1:49152-256-2+1]
68
+ merges = [tuple(merge.split()) for merge in merges]
69
+ vocab = list(bytes_to_unicode().values())
70
+ vocab = vocab + [v+'</w>' for v in vocab]
71
+ for merge in merges:
72
+ vocab.append(''.join(merge))
73
+ vocab.extend(['<|startoftext|>', '<|endoftext|>'])
74
+ self.encoder = dict(zip(vocab, range(len(vocab))))
75
+ self.decoder = {v: k for k, v in self.encoder.items()}
76
+ self.bpe_ranks = dict(zip(merges, range(len(merges))))
77
+ self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
78
+ self.pat = re.compile(r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE)
79
+
80
+ def bpe(self, token):
81
+ if token in self.cache:
82
+ return self.cache[token]
83
+ word = tuple(token[:-1]) + ( token[-1] + '</w>',)
84
+ pairs = get_pairs(word)
85
+
86
+ if not pairs:
87
+ return token+'</w>'
88
+
89
+ while True:
90
+ bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf')))
91
+ if bigram not in self.bpe_ranks:
92
+ break
93
+ first, second = bigram
94
+ new_word = []
95
+ i = 0
96
+ while i < len(word):
97
+ try:
98
+ j = word.index(first, i)
99
+ new_word.extend(word[i:j])
100
+ i = j
101
+ except ValueError:
102
+ new_word.extend(word[i:])
103
+ break
104
+
105
+ if word[i] == first and i < len(word)-1 and word[i+1] == second:
106
+ new_word.append(first+second)
107
+ i += 2
108
+ else:
109
+ new_word.append(word[i])
110
+ i += 1
111
+ new_word = tuple(new_word)
112
+ word = new_word
113
+ if len(word) == 1:
114
+ break
115
+ else:
116
+ pairs = get_pairs(word)
117
+ word = ' '.join(word)
118
+ self.cache[token] = word
119
+ return word
120
+
121
+ def encode(self, text):
122
+ bpe_tokens = []
123
+ text = whitespace_clean(basic_clean(text)).lower()
124
+ for token in re.findall(self.pat, text):
125
+ token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
126
+ bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
127
+ return bpe_tokens
128
+
129
+ def decode(self, tokens):
130
+ text = ''.join([self.decoder[token] for token in tokens])
131
+ text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
132
+ return text
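
A minimal usage sketch of the tokenizer above, assuming the `clip` package from this repo is importable so the default BPE vocabulary file can be found (note that `encode` does not add the start/end-of-text tokens; `clip.tokenize` handles that):

```python
from clip.simple_tokenizer import SimpleTokenizer

tokenizer = SimpleTokenizer()                  # uses bpe_simple_vocab_16e6.txt.gz by default
ids = tokenizer.encode("a photo of a cat")     # list of BPE token ids
print(ids)
print(tokenizer.decode(ids))                   # roughly "a photo of a cat " (decode re-inserts word-final spaces)
```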
CLIP_/example.py ADDED
@@ -0,0 +1,94 @@
1
+ import torch
2
+ import clip
3
+ from PIL import Image
4
+ import numpy as np
5
+ import cv2
6
+ import matplotlib.pyplot as plt
7
+
8
+ def interpret(image, text, model, device, index=None):
9
+ logits_per_image, logits_per_text = model(image, text)
10
+ probs = logits_per_image.softmax(dim=-1).detach().cpu().numpy()
11
+ if index is None:
12
+ index = np.argmax(logits_per_image.cpu().data.numpy(), axis=-1)
13
+ one_hot = np.zeros((1, logits_per_image.size()[-1]), dtype=np.float32)
14
+ one_hot[0, index] = 1
15
+ one_hot = torch.from_numpy(one_hot).requires_grad_(True)
16
+ one_hot = torch.sum(one_hot.cuda() * logits_per_image)
17
+ model.zero_grad()
18
+ one_hot.backward(retain_graph=True)
19
+
20
+ image_attn_blocks = list(dict(model.visual.transformer.resblocks.named_children()).values())
21
+ num_tokens = image_attn_blocks[0].attn_probs.shape[-1]
22
+ R = torch.eye(num_tokens, num_tokens, dtype=image_attn_blocks[0].attn_probs.dtype).to(device)
23
+ for blk in image_attn_blocks:
24
+ grad = blk.attn_grad
25
+ cam = blk.attn_probs
26
+ cam = cam.reshape(-1, cam.shape[-1], cam.shape[-1])
27
+ grad = grad.reshape(-1, grad.shape[-1], grad.shape[-1])
28
+ cam = grad * cam
29
+ cam = cam.clamp(min=0).mean(dim=0)
30
+ R += torch.matmul(cam, R)
31
+ R[0, 0] = 0
32
+ image_relevance = R[0, 1:]
33
+
34
+ # create heatmap from mask on image
35
+ def show_cam_on_image(img, mask):
36
+ heatmap = cv2.applyColorMap(np.uint8(255 * mask), cv2.COLORMAP_JET)
37
+ heatmap = np.float32(heatmap) / 255
38
+ cam = heatmap + np.float32(img)
39
+ cam = cam / np.max(cam)
40
+ return cam
41
+
42
+ image_relevance = image_relevance.reshape(1, 1, 7, 7)
43
+ image_relevance = torch.nn.functional.interpolate(image_relevance, size=224, mode='bilinear')
44
+ image_relevance = image_relevance.reshape(224, 224).detach().cpu().numpy()
45
+ image_relevance = (image_relevance - image_relevance.min()) / (image_relevance.max() - image_relevance.min())
46
+ image = image[0].permute(1, 2, 0).data.cpu().numpy()
47
+ image = (image - image.min()) / (image.max() - image.min())
48
+ vis = show_cam_on_image(image, image_relevance)
49
+ vis = np.uint8(255 * vis)
50
+ vis = cv2.cvtColor(np.array(vis), cv2.COLOR_RGB2BGR)
51
+
52
+ plt.imshow(vis)
53
+ plt.show()
54
+
55
+ print("Label probs:", probs)
56
+
57
+ def main():
58
+ device = "cuda" if torch.cuda.is_available() else "cpu"
59
+ model, preprocess = clip.load("ViT-B/32", device=device, jit=False)
60
+
61
+ image = preprocess(Image.open("catdog.png")).unsqueeze(0).to(device)
62
+ text = clip.tokenize(["a dog", "a cat"]).to(device)
63
+ interpret(model=model, image=image, text=text, device=device, index=0)
64
+ interpret(model=model, image=image, text=text, device=device, index=1)
65
+
66
+ image = preprocess(Image.open("el1.png")).unsqueeze(0).to(device)
67
+ text = clip.tokenize(["an elephant", "a zebra"]).to(device)
68
+ interpret(model=model, image=image, text=text, device=device, index=0)
69
+ interpret(model=model, image=image, text=text, device=device, index=1)
70
+
71
+ image = preprocess(Image.open("el2.png")).unsqueeze(0).to(device)
72
+ text = clip.tokenize(["an elephant", "a zebra"]).to(device)
73
+ interpret(model=model, image=image, text=text, device=device, index=0)
74
+ interpret(model=model, image=image, text=text, device=device, index=1)
75
+
76
+ image = preprocess(Image.open("el3.png")).unsqueeze(0).to(device)
77
+ text = clip.tokenize(["an elephant", "a zebra"]).to(device)
78
+ interpret(model=model, image=image, text=text, device=device, index=0)
79
+ interpret(model=model, image=image, text=text, device=device, index=1)
80
+
81
+ image = preprocess(Image.open("el4.png")).unsqueeze(0).to(device)
82
+ text = clip.tokenize(["an elephant", "a zebra"]).to(device)
83
+ interpret(model=model, image=image, text=text, device=device, index=0)
84
+ interpret(model=model, image=image, text=text, device=device, index=1)
85
+
86
+ image = preprocess(Image.open("dogbird.png")).unsqueeze(0).to(device)
87
+ text = clip.tokenize(["a basset hound", "a parrot"]).to(device)
88
+ interpret(model=model, image=image, text=text, device=device, index=0)
89
+ interpret(model=model, image=image, text=text, device=device, index=1)
90
+
91
+
92
+ if __name__ == "__main__":
93
+ main()
94
+
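
The `interpret` function above reads `attn_probs` and `attn_grad` off each residual attention block; those attributes are not part of the stock attention module, so the accompanying modified model code is assumed to record them during the forward and backward passes. A rough sketch of that bookkeeping, with hypothetical names, might look like this:

```python
import torch.nn as nn

class AttentionWithRecord(nn.Module):
    """Hypothetical wrapper: stores attention probabilities and their gradients
    so a relevance map like the one in interpret() can be accumulated later."""

    def __init__(self, attn: nn.MultiheadAttention):
        super().__init__()
        self.attn = attn
        self.attn_probs = None   # attention weights from the last forward pass
        self.attn_grad = None    # gradient of the loss w.r.t. those weights

    def save_attn_grad(self, grad):
        self.attn_grad = grad

    def forward(self, x):
        out, weights = self.attn(x, x, x, need_weights=True)
        self.attn_probs = weights
        if weights.requires_grad:
            weights.register_hook(self.save_attn_grad)
        return out
```

Whether the weights are averaged over heads or kept per head changes the shapes that `interpret` reshapes, so this sketch only conveys the idea rather than matching the repo's exact tensors.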
CLIP_/model-card.md ADDED
@@ -0,0 +1,120 @@
1
+ # Model Card: CLIP
2
+
3
+ Inspired by [Model Cards for Model Reporting (Mitchell et al.)](https://arxiv.org/abs/1810.03993) and [Lessons from Archives (Jo & Gebru)](https://arxiv.org/pdf/1912.10389.pdf), we’re providing some accompanying information about the multimodal model.
4
+
5
+ ## Model Details
6
+
7
+ The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.
8
+
9
+ ### Model Date
10
+
11
+ January 2021
12
+
13
+ ### Model Type
14
+
15
+ The base model uses a ResNet50 with several modifications as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. There is also a variant of the model where the ResNet image encoder is replaced with a Vision Transformer.
16
+
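
As a rough illustration of that contrastive objective (the batch size, embedding dimension, and temperature value below are assumptions for the sake of the example, not the exact training code):

```python
import torch
import torch.nn.functional as F

batch, dim = 8, 512
image_features = F.normalize(torch.randn(batch, dim), dim=-1)  # unit-norm image embeddings
text_features = F.normalize(torch.randn(batch, dim), dim=-1)   # unit-norm text embeddings

logit_scale = 100.0                                   # stand-in for the learned temperature
logits_per_image = logit_scale * image_features @ text_features.t()
labels = torch.arange(batch)                          # matching pairs lie on the diagonal

loss = (F.cross_entropy(logits_per_image, labels) +
        F.cross_entropy(logits_per_image.t(), labels)) / 2
```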
17
+ ### Model Version
18
+
19
+ Initially, we released one CLIP model based on the Vision Transformer architecture equivalent to ViT-B/32, along with the RN50 model, which uses an architecture equivalent to ResNet-50.
20
+
21
+ As part of the staged release process, we have also released the RN101 model, as well as RN50x4, a RN50 scaled up 4x according to the [EfficientNet](https://arxiv.org/abs/1905.11946) scaling rule.
22
+
23
+ Please see the paper linked below for further details about their specification.
24
+
25
+ ### Documents
26
+
27
+ - [Blog Post](https://openai.com/blog/clip/)
28
+ - [CLIP Paper](https://arxiv.org/abs/2103.00020)
29
+
30
+
31
+
32
+ ## Model Use
33
+
34
+ ### Intended Use
35
+
36
+ The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
37
+
38
+ #### Primary intended uses
39
+
40
+ The primary intended users of these models are AI researchers.
41
+
42
+ We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
43
+
44
+ ### Out-of-Scope Use Cases
45
+
46
+ **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases, such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task-specific testing, especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
47
+
48
+ Certain use cases that would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of the performance of the model. This is because the use of artificial intelligence for tasks such as these is currently premature given the lack of testing norms and checks to ensure its fair use.
49
+
50
+ Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
51
+
52
+
53
+
54
+ ## Data
55
+
56
+ The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet, which tend to skew towards more developed nations and younger, male users.
57
+
58
+ ### Data Mission Statement
59
+
60
+ Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.
61
+
62
+
63
+
64
+ ## Performance and Limitations
65
+
66
+ ### Performance
67
+
68
+ We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision tasks, from OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets:
69
+
70
+ - Food101
71
+ - CIFAR10
72
+ - CIFAR100
73
+ - Birdsnap
74
+ - SUN397
75
+ - Stanford Cars
76
+ - FGVC Aircraft
77
+ - VOC2007
78
+ - DTD
79
+ - Oxford-IIIT Pet dataset
80
+ - Caltech101
81
+ - Flowers102
82
+ - MNIST
83
+ - SVHN
84
+ - IIIT5K
85
+ - Hateful Memes
86
+ - SST-2
87
+ - UCF101
88
+ - Kinetics700
89
+ - Country211
90
+ - CLEVR Counting
91
+ - KITTI Distance
92
+ - STL-10
93
+ - RareAct
94
+ - Flickr30
95
+ - MSCOCO
96
+ - ImageNet
97
+ - ImageNet-A
98
+ - ImageNet-R
99
+ - ImageNet Sketch
100
+ - ObjectNet (ImageNet Overlap)
101
+ - Youtube-BB
102
+ - ImageNet-Vid
103
+
104
+ ## Limitations
105
+
106
+ CLIP and our analysis of it have a number of limitations. CLIP currently struggles with certain tasks such as fine-grained classification and counting objects. CLIP also poses issues with regard to fairness and bias, which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP has an important limitation: in many cases we have used linear probes to evaluate the performance of CLIP, and there is evidence suggesting that linear probes can underestimate model performance.
107
+
108
+ ### Bias and Fairness
109
+
110
+ We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper).
111
+
112
+ We also tested the performance of CLIP on gender, race, and age classification using the Fairface dataset (we default to the race categories as they are constructed in the Fairface dataset) in order to assess the quality of performance across different demographics. We found accuracy >96% across all races for gender classification, with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race, and age classification, as well as denigration harms, is simply to evaluate the performance of the model across people and surface potential risks, not to demonstrate endorsement of or enthusiasm for such tasks.
113
+
114
+
115
+
116
+ ## Feedback
117
+
118
+ ### Where to send questions or comments about the model
119
+
120
+ Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)
CLIP_/notebooks/Interacting_with_CLIP.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
CLIP_/notebooks/Prompt_Engineering_for_ImageNet.ipynb ADDED
@@ -0,0 +1,1188 @@
1
+ {
2
+ "nbformat": 4,
3
+ "nbformat_minor": 0,
4
+ "metadata": {
5
+ "colab": {
6
+ "name": "Prompt Engineering for ImageNet.ipynb",
7
+ "provenance": [],
8
+ "collapsed_sections": []
9
+ },
10
+ "kernelspec": {
11
+ "name": "python3",
12
+ "display_name": "Python 3"
13
+ },
14
+ "accelerator": "GPU",
15
+ "widgets": {
16
+ "application/vnd.jupyter.widget-state+json": {
17
+ "4e3a3f83649f45f8bef3434980634664": {
18
+ "model_module": "@jupyter-widgets/controls",
19
+ "model_name": "HBoxModel",
20
+ "state": {
21
+ "_view_name": "HBoxView",
22
+ "_dom_classes": [],
23
+ "_model_name": "HBoxModel",
24
+ "_view_module": "@jupyter-widgets/controls",
25
+ "_model_module_version": "1.5.0",
26
+ "_view_count": null,
27
+ "_view_module_version": "1.5.0",
28
+ "box_style": "",
29
+ "layout": "IPY_MODEL_f066bdb766664c788ba1e9de8d311e22",
30
+ "_model_module": "@jupyter-widgets/controls",
31
+ "children": [
32
+ "IPY_MODEL_4e7a7427d28a4ae684e0be4548eb9944",
33
+ "IPY_MODEL_cc9dc019c1334a46b2558ffa6c0dd6e6"
34
+ ]
35
+ }
36
+ },
37
+ "f066bdb766664c788ba1e9de8d311e22": {
38
+ "model_module": "@jupyter-widgets/base",
39
+ "model_name": "LayoutModel",
40
+ "state": {
41
+ "_view_name": "LayoutView",
42
+ "grid_template_rows": null,
43
+ "right": null,
44
+ "justify_content": null,
45
+ "_view_module": "@jupyter-widgets/base",
46
+ "overflow": null,
47
+ "_model_module_version": "1.2.0",
48
+ "_view_count": null,
49
+ "flex_flow": null,
50
+ "width": null,
51
+ "min_width": null,
52
+ "border": null,
53
+ "align_items": null,
54
+ "bottom": null,
55
+ "_model_module": "@jupyter-widgets/base",
56
+ "top": null,
57
+ "grid_column": null,
58
+ "overflow_y": null,
59
+ "overflow_x": null,
60
+ "grid_auto_flow": null,
61
+ "grid_area": null,
62
+ "grid_template_columns": null,
63
+ "flex": null,
64
+ "_model_name": "LayoutModel",
65
+ "justify_items": null,
66
+ "grid_row": null,
67
+ "max_height": null,
68
+ "align_content": null,
69
+ "visibility": null,
70
+ "align_self": null,
71
+ "height": null,
72
+ "min_height": null,
73
+ "padding": null,
74
+ "grid_auto_rows": null,
75
+ "grid_gap": null,
76
+ "max_width": null,
77
+ "order": null,
78
+ "_view_module_version": "1.2.0",
79
+ "grid_template_areas": null,
80
+ "object_position": null,
81
+ "object_fit": null,
82
+ "grid_auto_columns": null,
83
+ "margin": null,
84
+ "display": null,
85
+ "left": null
86
+ }
87
+ },
88
+ "4e7a7427d28a4ae684e0be4548eb9944": {
89
+ "model_module": "@jupyter-widgets/controls",
90
+ "model_name": "FloatProgressModel",
91
+ "state": {
92
+ "_view_name": "ProgressView",
93
+ "style": "IPY_MODEL_285c877d4f644f3a8a58c4eb5948101c",
94
+ "_dom_classes": [],
95
+ "description": "100%",
96
+ "_model_name": "FloatProgressModel",
97
+ "bar_style": "success",
98
+ "max": 1000,
99
+ "_view_module": "@jupyter-widgets/controls",
100
+ "_model_module_version": "1.5.0",
101
+ "value": 1000,
102
+ "_view_count": null,
103
+ "_view_module_version": "1.5.0",
104
+ "orientation": "horizontal",
105
+ "min": 0,
106
+ "description_tooltip": null,
107
+ "_model_module": "@jupyter-widgets/controls",
108
+ "layout": "IPY_MODEL_075d6545e02e419ca565589eb5ffc318"
109
+ }
110
+ },
111
+ "cc9dc019c1334a46b2558ffa6c0dd6e6": {
112
+ "model_module": "@jupyter-widgets/controls",
113
+ "model_name": "HTMLModel",
114
+ "state": {
115
+ "_view_name": "HTMLView",
116
+ "style": "IPY_MODEL_53f9106c80e84d5b8c3ec96162d1db98",
117
+ "_dom_classes": [],
118
+ "description": "",
119
+ "_model_name": "HTMLModel",
120
+ "placeholder": "​",
121
+ "_view_module": "@jupyter-widgets/controls",
122
+ "_model_module_version": "1.5.0",
123
+ "value": " 1000/1000 [01:09&lt;00:00, 14.35it/s]",
124
+ "_view_count": null,
125
+ "_view_module_version": "1.5.0",
126
+ "description_tooltip": null,
127
+ "_model_module": "@jupyter-widgets/controls",
128
+ "layout": "IPY_MODEL_19c57d99e7c44cbda508ce558fde435d"
129
+ }
130
+ },
131
+ "285c877d4f644f3a8a58c4eb5948101c": {
132
+ "model_module": "@jupyter-widgets/controls",
133
+ "model_name": "ProgressStyleModel",
134
+ "state": {
135
+ "_view_name": "StyleView",
136
+ "_model_name": "ProgressStyleModel",
137
+ "description_width": "initial",
138
+ "_view_module": "@jupyter-widgets/base",
139
+ "_model_module_version": "1.5.0",
140
+ "_view_count": null,
141
+ "_view_module_version": "1.2.0",
142
+ "bar_color": null,
143
+ "_model_module": "@jupyter-widgets/controls"
144
+ }
145
+ },
146
+ "075d6545e02e419ca565589eb5ffc318": {
147
+ "model_module": "@jupyter-widgets/base",
148
+ "model_name": "LayoutModel",
149
+ "state": {
150
+ "_view_name": "LayoutView",
151
+ "grid_template_rows": null,
152
+ "right": null,
153
+ "justify_content": null,
154
+ "_view_module": "@jupyter-widgets/base",
155
+ "overflow": null,
156
+ "_model_module_version": "1.2.0",
157
+ "_view_count": null,
158
+ "flex_flow": null,
159
+ "width": null,
160
+ "min_width": null,
161
+ "border": null,
162
+ "align_items": null,
163
+ "bottom": null,
164
+ "_model_module": "@jupyter-widgets/base",
165
+ "top": null,
166
+ "grid_column": null,
167
+ "overflow_y": null,
168
+ "overflow_x": null,
169
+ "grid_auto_flow": null,
170
+ "grid_area": null,
171
+ "grid_template_columns": null,
172
+ "flex": null,
173
+ "_model_name": "LayoutModel",
174
+ "justify_items": null,
175
+ "grid_row": null,
176
+ "max_height": null,
177
+ "align_content": null,
178
+ "visibility": null,
179
+ "align_self": null,
180
+ "height": null,
181
+ "min_height": null,
182
+ "padding": null,
183
+ "grid_auto_rows": null,
184
+ "grid_gap": null,
185
+ "max_width": null,
186
+ "order": null,
187
+ "_view_module_version": "1.2.0",
188
+ "grid_template_areas": null,
189
+ "object_position": null,
190
+ "object_fit": null,
191
+ "grid_auto_columns": null,
192
+ "margin": null,
193
+ "display": null,
194
+ "left": null
195
+ }
196
+ },
197
+ "53f9106c80e84d5b8c3ec96162d1db98": {
198
+ "model_module": "@jupyter-widgets/controls",
199
+ "model_name": "DescriptionStyleModel",
200
+ "state": {
201
+ "_view_name": "StyleView",
202
+ "_model_name": "DescriptionStyleModel",
203
+ "description_width": "",
204
+ "_view_module": "@jupyter-widgets/base",
205
+ "_model_module_version": "1.5.0",
206
+ "_view_count": null,
207
+ "_view_module_version": "1.2.0",
208
+ "_model_module": "@jupyter-widgets/controls"
209
+ }
210
+ },
211
+ "19c57d99e7c44cbda508ce558fde435d": {
212
+ "model_module": "@jupyter-widgets/base",
213
+ "model_name": "LayoutModel",
214
+ "state": {
215
+ "_view_name": "LayoutView",
216
+ "grid_template_rows": null,
217
+ "right": null,
218
+ "justify_content": null,
219
+ "_view_module": "@jupyter-widgets/base",
220
+ "overflow": null,
221
+ "_model_module_version": "1.2.0",
222
+ "_view_count": null,
223
+ "flex_flow": null,
224
+ "width": null,
225
+ "min_width": null,
226
+ "border": null,
227
+ "align_items": null,
228
+ "bottom": null,
229
+ "_model_module": "@jupyter-widgets/base",
230
+ "top": null,
231
+ "grid_column": null,
232
+ "overflow_y": null,
233
+ "overflow_x": null,
234
+ "grid_auto_flow": null,
235
+ "grid_area": null,
236
+ "grid_template_columns": null,
237
+ "flex": null,
238
+ "_model_name": "LayoutModel",
239
+ "justify_items": null,
240
+ "grid_row": null,
241
+ "max_height": null,
242
+ "align_content": null,
243
+ "visibility": null,
244
+ "align_self": null,
245
+ "height": null,
246
+ "min_height": null,
247
+ "padding": null,
248
+ "grid_auto_rows": null,
249
+ "grid_gap": null,
250
+ "max_width": null,
251
+ "order": null,
252
+ "_view_module_version": "1.2.0",
253
+ "grid_template_areas": null,
254
+ "object_position": null,
255
+ "object_fit": null,
256
+ "grid_auto_columns": null,
257
+ "margin": null,
258
+ "display": null,
259
+ "left": null
260
+ }
261
+ },
262
+ "fbb2b937b22049f5987f39f48c652a86": {
263
+ "model_module": "@jupyter-widgets/controls",
264
+ "model_name": "HBoxModel",
265
+ "state": {
266
+ "_view_name": "HBoxView",
267
+ "_dom_classes": [],
268
+ "_model_name": "HBoxModel",
269
+ "_view_module": "@jupyter-widgets/controls",
270
+ "_model_module_version": "1.5.0",
271
+ "_view_count": null,
272
+ "_view_module_version": "1.5.0",
273
+ "box_style": "",
274
+ "layout": "IPY_MODEL_0a1b6b76984349ccb36ca2fc4a4a0208",
275
+ "_model_module": "@jupyter-widgets/controls",
276
+ "children": [
277
+ "IPY_MODEL_c136afb47aa14ac2832093ee415c6f3e",
278
+ "IPY_MODEL_467a151e73744eccb199fe72aa352e5b"
279
+ ]
280
+ }
281
+ },
282
+ "0a1b6b76984349ccb36ca2fc4a4a0208": {
283
+ "model_module": "@jupyter-widgets/base",
284
+ "model_name": "LayoutModel",
285
+ "state": {
286
+ "_view_name": "LayoutView",
287
+ "grid_template_rows": null,
288
+ "right": null,
289
+ "justify_content": null,
290
+ "_view_module": "@jupyter-widgets/base",
291
+ "overflow": null,
292
+ "_model_module_version": "1.2.0",
293
+ "_view_count": null,
294
+ "flex_flow": null,
295
+ "width": null,
296
+ "min_width": null,
297
+ "border": null,
298
+ "align_items": null,
299
+ "bottom": null,
300
+ "_model_module": "@jupyter-widgets/base",
301
+ "top": null,
302
+ "grid_column": null,
303
+ "overflow_y": null,
304
+ "overflow_x": null,
305
+ "grid_auto_flow": null,
306
+ "grid_area": null,
307
+ "grid_template_columns": null,
308
+ "flex": null,
309
+ "_model_name": "LayoutModel",
310
+ "justify_items": null,
311
+ "grid_row": null,
312
+ "max_height": null,
313
+ "align_content": null,
314
+ "visibility": null,
315
+ "align_self": null,
316
+ "height": null,
317
+ "min_height": null,
318
+ "padding": null,
319
+ "grid_auto_rows": null,
320
+ "grid_gap": null,
321
+ "max_width": null,
322
+ "order": null,
323
+ "_view_module_version": "1.2.0",
324
+ "grid_template_areas": null,
325
+ "object_position": null,
326
+ "object_fit": null,
327
+ "grid_auto_columns": null,
328
+ "margin": null,
329
+ "display": null,
330
+ "left": null
331
+ }
332
+ },
333
+ "c136afb47aa14ac2832093ee415c6f3e": {
334
+ "model_module": "@jupyter-widgets/controls",
335
+ "model_name": "FloatProgressModel",
336
+ "state": {
337
+ "_view_name": "ProgressView",
338
+ "style": "IPY_MODEL_f6d637c3fc3c46928d023441227130e5",
339
+ "_dom_classes": [],
340
+ "description": "100%",
341
+ "_model_name": "FloatProgressModel",
342
+ "bar_style": "success",
343
+ "max": 313,
344
+ "_view_module": "@jupyter-widgets/controls",
345
+ "_model_module_version": "1.5.0",
346
+ "value": 313,
347
+ "_view_count": null,
348
+ "_view_module_version": "1.5.0",
349
+ "orientation": "horizontal",
350
+ "min": 0,
351
+ "description_tooltip": null,
352
+ "_model_module": "@jupyter-widgets/controls",
353
+ "layout": "IPY_MODEL_029e6eadacb8480193aab52ff073be8f"
354
+ }
355
+ },
356
+ "467a151e73744eccb199fe72aa352e5b": {
357
+ "model_module": "@jupyter-widgets/controls",
358
+ "model_name": "HTMLModel",
359
+ "state": {
360
+ "_view_name": "HTMLView",
361
+ "style": "IPY_MODEL_30178355f76742898d37966b3875ef0a",
362
+ "_dom_classes": [],
363
+ "description": "",
364
+ "_model_name": "HTMLModel",
365
+ "placeholder": "​",
366
+ "_view_module": "@jupyter-widgets/controls",
367
+ "_model_module_version": "1.5.0",
368
+ "value": " 313/313 [01:26&lt;00:00, 3.62it/s]",
369
+ "_view_count": null,
370
+ "_view_module_version": "1.5.0",
371
+ "description_tooltip": null,
372
+ "_model_module": "@jupyter-widgets/controls",
373
+ "layout": "IPY_MODEL_2e62544c03d64d6d92b94fcfaca2fc90"
374
+ }
375
+ },
376
+ "f6d637c3fc3c46928d023441227130e5": {
377
+ "model_module": "@jupyter-widgets/controls",
378
+ "model_name": "ProgressStyleModel",
379
+ "state": {
380
+ "_view_name": "StyleView",
381
+ "_model_name": "ProgressStyleModel",
382
+ "description_width": "initial",
383
+ "_view_module": "@jupyter-widgets/base",
384
+ "_model_module_version": "1.5.0",
385
+ "_view_count": null,
386
+ "_view_module_version": "1.2.0",
387
+ "bar_color": null,
388
+ "_model_module": "@jupyter-widgets/controls"
389
+ }
390
+ },
391
+ "029e6eadacb8480193aab52ff073be8f": {
392
+ "model_module": "@jupyter-widgets/base",
393
+ "model_name": "LayoutModel",
394
+ "state": {
395
+ "_view_name": "LayoutView",
396
+ "grid_template_rows": null,
397
+ "right": null,
398
+ "justify_content": null,
399
+ "_view_module": "@jupyter-widgets/base",
400
+ "overflow": null,
401
+ "_model_module_version": "1.2.0",
402
+ "_view_count": null,
403
+ "flex_flow": null,
404
+ "width": null,
405
+ "min_width": null,
406
+ "border": null,
407
+ "align_items": null,
408
+ "bottom": null,
409
+ "_model_module": "@jupyter-widgets/base",
410
+ "top": null,
411
+ "grid_column": null,
412
+ "overflow_y": null,
413
+ "overflow_x": null,
414
+ "grid_auto_flow": null,
415
+ "grid_area": null,
416
+ "grid_template_columns": null,
417
+ "flex": null,
418
+ "_model_name": "LayoutModel",
419
+ "justify_items": null,
420
+ "grid_row": null,
421
+ "max_height": null,
422
+ "align_content": null,
423
+ "visibility": null,
424
+ "align_self": null,
425
+ "height": null,
426
+ "min_height": null,
427
+ "padding": null,
428
+ "grid_auto_rows": null,
429
+ "grid_gap": null,
430
+ "max_width": null,
431
+ "order": null,
432
+ "_view_module_version": "1.2.0",
433
+ "grid_template_areas": null,
434
+ "object_position": null,
435
+ "object_fit": null,
436
+ "grid_auto_columns": null,
437
+ "margin": null,
438
+ "display": null,
439
+ "left": null
440
+ }
441
+ },
442
+ "30178355f76742898d37966b3875ef0a": {
443
+ "model_module": "@jupyter-widgets/controls",
444
+ "model_name": "DescriptionStyleModel",
445
+ "state": {
446
+ "_view_name": "StyleView",
447
+ "_model_name": "DescriptionStyleModel",
448
+ "description_width": "",
449
+ "_view_module": "@jupyter-widgets/base",
450
+ "_model_module_version": "1.5.0",
451
+ "_view_count": null,
452
+ "_view_module_version": "1.2.0",
453
+ "_model_module": "@jupyter-widgets/controls"
454
+ }
455
+ },
456
+ "2e62544c03d64d6d92b94fcfaca2fc90": {
457
+ "model_module": "@jupyter-widgets/base",
458
+ "model_name": "LayoutModel",
459
+ "state": {
460
+ "_view_name": "LayoutView",
461
+ "grid_template_rows": null,
462
+ "right": null,
463
+ "justify_content": null,
464
+ "_view_module": "@jupyter-widgets/base",
465
+ "overflow": null,
466
+ "_model_module_version": "1.2.0",
467
+ "_view_count": null,
468
+ "flex_flow": null,
469
+ "width": null,
470
+ "min_width": null,
471
+ "border": null,
472
+ "align_items": null,
473
+ "bottom": null,
474
+ "_model_module": "@jupyter-widgets/base",
475
+ "top": null,
476
+ "grid_column": null,
477
+ "overflow_y": null,
478
+ "overflow_x": null,
479
+ "grid_auto_flow": null,
480
+ "grid_area": null,
481
+ "grid_template_columns": null,
482
+ "flex": null,
483
+ "_model_name": "LayoutModel",
484
+ "justify_items": null,
485
+ "grid_row": null,
486
+ "max_height": null,
487
+ "align_content": null,
488
+ "visibility": null,
489
+ "align_self": null,
490
+ "height": null,
491
+ "min_height": null,
492
+ "padding": null,
493
+ "grid_auto_rows": null,
494
+ "grid_gap": null,
495
+ "max_width": null,
496
+ "order": null,
497
+ "_view_module_version": "1.2.0",
498
+ "grid_template_areas": null,
499
+ "object_position": null,
500
+ "object_fit": null,
501
+ "grid_auto_columns": null,
502
+ "margin": null,
503
+ "display": null,
504
+ "left": null
505
+ }
506
+ }
507
+ }
508
+ }
509
+ },
510
+ "cells": [
511
+ {
512
+ "cell_type": "markdown",
513
+ "metadata": {
514
+ "id": "53N4k0pj_9qL"
515
+ },
516
+ "source": [
517
+ "# Preparation for Colab\n",
518
+ "\n",
519
+ "Make sure you're running a GPU runtime; if not, select \"GPU\" as the hardware accelerator in Runtime > Change Runtime Type in the menu. The next cells will print the CUDA version of the runtime if it has a GPU, and install PyTorch 1.7.1."
520
+ ]
521
+ },
522
+ {
523
+ "cell_type": "code",
524
+ "metadata": {
525
+ "colab": {
526
+ "base_uri": "https://localhost:8080/"
527
+ },
528
+ "id": "0BpdJkdBssk9",
529
+ "outputId": "dc75b5f9-17c7-4856-ac79-8047fa609500"
530
+ },
531
+ "source": [
532
+ "import subprocess\n",
533
+ "\n",
534
+ "CUDA_version = [s for s in subprocess.check_output([\"nvcc\", \"--version\"]).decode(\"UTF-8\").split(\", \") if s.startswith(\"release\")][0].split(\" \")[-1]\n",
535
+ "print(\"CUDA version:\", CUDA_version)\n",
536
+ "\n",
537
+ "if CUDA_version == \"10.0\":\n",
538
+ " torch_version_suffix = \"+cu100\"\n",
539
+ "elif CUDA_version == \"10.1\":\n",
540
+ " torch_version_suffix = \"+cu101\"\n",
541
+ "elif CUDA_version == \"10.2\":\n",
542
+ " torch_version_suffix = \"\"\n",
543
+ "else:\n",
544
+ " torch_version_suffix = \"+cu110\""
545
+ ],
546
+ "execution_count": 1,
547
+ "outputs": [
548
+ {
549
+ "output_type": "stream",
550
+ "text": [
551
+ "CUDA version: 10.1\n"
552
+ ],
553
+ "name": "stdout"
554
+ }
555
+ ]
556
+ },
557
+ {
558
+ "cell_type": "code",
559
+ "metadata": {
560
+ "colab": {
561
+ "base_uri": "https://localhost:8080/"
562
+ },
563
+ "id": "RBVr18E5tse8",
564
+ "outputId": "404230c1-0f78-451d-8816-19d4109d579e"
565
+ },
566
+ "source": [
567
+ "! pip install torch==1.7.1{torch_version_suffix} torchvision==0.8.2{torch_version_suffix} -f https://download.pytorch.org/whl/torch_stable.html ftfy regex"
568
+ ],
569
+ "execution_count": 2,
570
+ "outputs": [
571
+ {
572
+ "output_type": "stream",
573
+ "text": [
574
+ "Looking in links: https://download.pytorch.org/whl/torch_stable.html\n",
575
+ "Collecting torch==1.7.1+cu101\n",
576
+ "\u001b[?25l Downloading https://download.pytorch.org/whl/cu101/torch-1.7.1%2Bcu101-cp36-cp36m-linux_x86_64.whl (735.4MB)\n",
577
+ "\u001b[K |████████████████████████████████| 735.4MB 25kB/s \n",
578
+ "\u001b[?25hCollecting torchvision==0.8.2+cu101\n",
579
+ "\u001b[?25l Downloading https://download.pytorch.org/whl/cu101/torchvision-0.8.2%2Bcu101-cp36-cp36m-linux_x86_64.whl (12.8MB)\n",
580
+ "\u001b[K |████████████████████████████████| 12.8MB 248kB/s \n",
581
+ "\u001b[?25hCollecting ftfy\n",
582
+ "\u001b[?25l Downloading https://files.pythonhosted.org/packages/ff/e2/3b51c53dffb1e52d9210ebc01f1fb9f2f6eba9b3201fa971fd3946643c71/ftfy-5.8.tar.gz (64kB)\n",
583
+ "\u001b[K |████████████████████████████████| 71kB 5.6MB/s \n",
584
+ "\u001b[?25hRequirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (2019.12.20)\n",
585
+ "Requirement already satisfied: typing-extensions in /usr/local/lib/python3.6/dist-packages (from torch==1.7.1+cu101) (3.7.4.3)\n",
586
+ "Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch==1.7.1+cu101) (1.19.5)\n",
587
+ "Requirement already satisfied: dataclasses; python_version < \"3.7\" in /usr/local/lib/python3.6/dist-packages (from torch==1.7.1+cu101) (0.8)\n",
588
+ "Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision==0.8.2+cu101) (7.0.0)\n",
589
+ "Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from ftfy) (0.2.5)\n",
590
+ "Building wheels for collected packages: ftfy\n",
591
+ " Building wheel for ftfy (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
592
+ " Created wheel for ftfy: filename=ftfy-5.8-cp36-none-any.whl size=45613 sha256=73a94b51b7fe03350783d5b9dd638801a904c618d3b0dc7237ce77f401f33404\n",
593
+ " Stored in directory: /root/.cache/pip/wheels/ba/c0/ef/f28c4da5ac84a4e06ac256ca9182fc34fa57fefffdbc68425b\n",
594
+ "Successfully built ftfy\n",
595
+ "Installing collected packages: torch, torchvision, ftfy\n",
596
+ " Found existing installation: torch 1.7.0+cu101\n",
597
+ " Uninstalling torch-1.7.0+cu101:\n",
598
+ " Successfully uninstalled torch-1.7.0+cu101\n",
599
+ " Found existing installation: torchvision 0.8.1+cu101\n",
600
+ " Uninstalling torchvision-0.8.1+cu101:\n",
601
+ " Successfully uninstalled torchvision-0.8.1+cu101\n",
602
+ "Successfully installed ftfy-5.8 torch-1.7.1+cu101 torchvision-0.8.2+cu101\n"
603
+ ],
604
+ "name": "stdout"
605
+ }
606
+ ]
607
+ },
608
+ {
609
+ "cell_type": "markdown",
610
+ "metadata": {
611
+ "id": "zGm7TwfbDLgu"
612
+ },
613
+ "source": [
614
+ "The following command installs the `clip` module from its source:"
615
+ ]
616
+ },
617
+ {
618
+ "cell_type": "code",
619
+ "metadata": {
620
+ "colab": {
621
+ "base_uri": "https://localhost:8080/"
622
+ },
623
+ "id": "QAFjXlGdEMQM",
624
+ "outputId": "859da71b-00c8-44d1-84d0-7965c20411b4"
625
+ },
626
+ "source": [
627
+ "! pip install git+https://github.com/openai/CLIP.git"
628
+ ],
629
+ "execution_count": 3,
630
+ "outputs": [
631
+ {
632
+ "output_type": "stream",
633
+ "text": [
634
+ "Collecting git+https://github.com/openai/CLIP.git\n",
635
+ " Cloning https://github.com/openai/CLIP.git to /tmp/pip-req-build-ewapt31c\n",
636
+ " Running command git clone -q https://github.com/openai/CLIP.git /tmp/pip-req-build-ewapt31c\n",
637
+ "Requirement already satisfied: ftfy in /usr/local/lib/python3.6/dist-packages (from clip==1.0) (5.8)\n",
638
+ "Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from clip==1.0) (2019.12.20)\n",
639
+ "Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from clip==1.0) (4.41.1)\n",
640
+ "Requirement already satisfied: torch~=1.7.1 in /usr/local/lib/python3.6/dist-packages (from clip==1.0) (1.7.1+cu101)\n",
641
+ "Requirement already satisfied: torchvision~=0.8.2 in /usr/local/lib/python3.6/dist-packages (from clip==1.0) (0.8.2+cu101)\n",
642
+ "Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/dist-packages (from ftfy->clip==1.0) (0.2.5)\n",
643
+ "Requirement already satisfied: dataclasses; python_version < \"3.7\" in /usr/local/lib/python3.6/dist-packages (from torch~=1.7.1->clip==1.0) (0.8)\n",
644
+ "Requirement already satisfied: typing-extensions in /usr/local/lib/python3.6/dist-packages (from torch~=1.7.1->clip==1.0) (3.7.4.3)\n",
645
+ "Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from torch~=1.7.1->clip==1.0) (1.19.5)\n",
646
+ "Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.6/dist-packages (from torchvision~=0.8.2->clip==1.0) (7.0.0)\n",
647
+ "Building wheels for collected packages: clip\n",
648
+ " Building wheel for clip (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
649
+ " Created wheel for clip: filename=clip-1.0-cp36-none-any.whl size=1367993 sha256=1839a2f0b015f75579b578ebfa15bcbe8ebab1ff535127c9357c5b26f8473de3\n",
650
+ " Stored in directory: /tmp/pip-ephem-wheel-cache-jwymwzm4/wheels/79/51/d7/69f91d37121befe21d9c52332e04f592e17d1cabc7319b3e09\n",
651
+ "Successfully built clip\n",
652
+ "Installing collected packages: clip\n",
653
+ "Successfully installed clip-1.0\n"
654
+ ],
655
+ "name": "stdout"
656
+ }
657
+ ]
658
+ },
659
+ {
660
+ "cell_type": "code",
661
+ "metadata": {
662
+ "id": "C1hkDT38hSaP",
663
+ "colab": {
664
+ "base_uri": "https://localhost:8080/"
665
+ },
666
+ "outputId": "6cd33e12-aed4-4950-e32f-6f1113eb3ade"
667
+ },
668
+ "source": [
669
+ "import numpy as np\n",
670
+ "import torch\n",
671
+ "import clip\n",
672
+ "from tqdm.notebook import tqdm\n",
673
+ "\n",
674
+ "print(\"Torch version:\", torch.__version__)"
675
+ ],
676
+ "execution_count": 4,
677
+ "outputs": [
678
+ {
679
+ "output_type": "stream",
680
+ "text": [
681
+ "Torch version: 1.7.1+cu101\n"
682
+ ],
683
+ "name": "stdout"
684
+ }
685
+ ]
686
+ },
687
+ {
688
+ "cell_type": "markdown",
689
+ "metadata": {
690
+ "id": "eFxgLV5HAEEw"
691
+ },
692
+ "source": [
693
+ "# Loading the model\n",
694
+ "\n",
695
+ "Download and instantiate a CLIP model using the `clip` module that we just installed."
696
+ ]
697
+ },
698
+ {
699
+ "cell_type": "code",
700
+ "metadata": {
701
+ "id": "uLFS29hnhlY4",
702
+ "colab": {
703
+ "base_uri": "https://localhost:8080/"
704
+ },
705
+ "outputId": "3148f942-0226-42a3-e5d8-4b9bc6c7c4f8"
706
+ },
707
+ "source": [
708
+ "clip.available_models()"
709
+ ],
710
+ "execution_count": 5,
711
+ "outputs": [
712
+ {
713
+ "output_type": "execute_result",
714
+ "data": {
715
+ "text/plain": [
716
+ "['RN50', 'ViT-B/32']"
717
+ ]
718
+ },
719
+ "metadata": {
720
+ "tags": []
721
+ },
722
+ "execution_count": 5
723
+ }
724
+ ]
725
+ },
726
+ {
727
+ "cell_type": "code",
728
+ "metadata": {
729
+ "id": "cboKZocQlSYX",
730
+ "colab": {
731
+ "base_uri": "https://localhost:8080/"
732
+ },
733
+ "outputId": "58e644d4-6e23-43b5-964e-1e9e8540d22e"
734
+ },
735
+ "source": [
736
+ "model, preprocess = clip.load(\"ViT-B/32\")"
737
+ ],
738
+ "execution_count": 6,
739
+ "outputs": [
740
+ {
741
+ "output_type": "stream",
742
+ "text": [
743
+ "100%|██████████████████████| 353976522/353976522 [00:01<00:00, 188872424.30it/s]\n"
744
+ ],
745
+ "name": "stderr"
746
+ }
747
+ ]
748
+ },
749
+ {
750
+ "cell_type": "code",
751
+ "metadata": {
752
+ "colab": {
753
+ "base_uri": "https://localhost:8080/"
754
+ },
755
+ "id": "IBRVTY9lbGm8",
756
+ "outputId": "58641dc2-919d-40ae-b71a-7b7b47830f77"
757
+ },
758
+ "source": [
759
+ "input_resolution = model.input_resolution.item()\n",
760
+ "context_length = model.context_length.item()\n",
761
+ "vocab_size = model.vocab_size.item()\n",
762
+ "\n",
763
+ "print(\"Model parameters:\", f\"{np.sum([int(np.prod(p.shape)) for p in model.parameters()]):,}\")\n",
764
+ "print(\"Input resolution:\", input_resolution)\n",
765
+ "print(\"Context length:\", context_length)\n",
766
+ "print(\"Vocab size:\", vocab_size)"
767
+ ],
768
+ "execution_count": 7,
769
+ "outputs": [
770
+ {
771
+ "output_type": "stream",
772
+ "text": [
773
+ "Model parameters: 151,277,313\n",
774
+ "Input resolution: 224\n",
775
+ "Context length: 77\n",
776
+ "Vocab size: 49408\n"
777
+ ],
778
+ "name": "stdout"
779
+ }
780
+ ]
781
+ },
782
+ {
783
+ "cell_type": "markdown",
784
+ "metadata": {
785
+ "id": "LhO3OtOmF8M4"
786
+ },
787
+ "source": [
788
+ "# Preparing ImageNet labels and prompts\n",
789
+ "\n",
790
+ "The following cell contains the 1,000 labels for the ImageNet dataset, followed by the text templates we'll use as \"prompt engineering\"."
791
+ ]
792
+ },
793
+ {
794
+ "cell_type": "code",
795
+ "metadata": {
796
+ "id": "R2HbOZrqa0jF"
797
+ },
798
+ "source": [
799
+ "imagenet_classes = [\"tench\", \"goldfish\", \"great white shark\", \"tiger shark\", \"hammerhead shark\", \"electric ray\", \"stingray\", \"rooster\", \"hen\", \"ostrich\", \"brambling\", \"goldfinch\", \"house finch\", \"junco\", \"indigo bunting\", \"American robin\", \"bulbul\", \"jay\", \"magpie\", \"chickadee\", \"American dipper\", \"kite (bird of prey)\", \"bald eagle\", \"vulture\", \"great grey owl\", \"fire salamander\", \"smooth newt\", \"newt\", \"spotted salamander\", \"axolotl\", \"American bullfrog\", \"tree frog\", \"tailed frog\", \"loggerhead sea turtle\", \"leatherback sea turtle\", \"mud turtle\", \"terrapin\", \"box turtle\", \"banded gecko\", \"green iguana\", \"Carolina anole\", \"desert grassland whiptail lizard\", \"agama\", \"frilled-necked lizard\", \"alligator lizard\", \"Gila monster\", \"European green lizard\", \"chameleon\", \"Komodo dragon\", \"Nile crocodile\", \"American alligator\", \"triceratops\", \"worm snake\", \"ring-necked snake\", \"eastern hog-nosed snake\", \"smooth green snake\", \"kingsnake\", \"garter snake\", \"water snake\", \"vine snake\", \"night snake\", \"boa constrictor\", \"African rock python\", \"Indian cobra\", \"green mamba\", \"sea snake\", \"Saharan horned viper\", \"eastern diamondback rattlesnake\", \"sidewinder rattlesnake\", \"trilobite\", \"harvestman\", \"scorpion\", \"yellow garden spider\", \"barn spider\", \"European garden spider\", \"southern black widow\", \"tarantula\", \"wolf spider\", \"tick\", \"centipede\", \"black grouse\", \"ptarmigan\", \"ruffed grouse\", \"prairie grouse\", \"peafowl\", \"quail\", \"partridge\", \"african grey parrot\", \"macaw\", \"sulphur-crested cockatoo\", \"lorikeet\", \"coucal\", \"bee eater\", \"hornbill\", \"hummingbird\", \"jacamar\", \"toucan\", \"duck\", \"red-breasted merganser\", \"goose\", \"black swan\", \"tusker\", \"echidna\", \"platypus\", \"wallaby\", \"koala\", \"wombat\", \"jellyfish\", \"sea anemone\", \"brain coral\", \"flatworm\", \"nematode\", \"conch\", \"snail\", \"slug\", \"sea slug\", \"chiton\", \"chambered nautilus\", \"Dungeness crab\", \"rock crab\", \"fiddler crab\", \"red king crab\", \"American lobster\", \"spiny lobster\", \"crayfish\", \"hermit crab\", \"isopod\", \"white stork\", \"black stork\", \"spoonbill\", \"flamingo\", \"little blue heron\", \"great egret\", \"bittern bird\", \"crane bird\", \"limpkin\", \"common gallinule\", \"American coot\", \"bustard\", \"ruddy turnstone\", \"dunlin\", \"common redshank\", \"dowitcher\", \"oystercatcher\", \"pelican\", \"king penguin\", \"albatross\", \"grey whale\", \"killer whale\", \"dugong\", \"sea lion\", \"Chihuahua\", \"Japanese Chin\", \"Maltese\", \"Pekingese\", \"Shih Tzu\", \"King Charles Spaniel\", \"Papillon\", \"toy terrier\", \"Rhodesian Ridgeback\", \"Afghan Hound\", \"Basset Hound\", \"Beagle\", \"Bloodhound\", \"Bluetick Coonhound\", \"Black and Tan Coonhound\", \"Treeing Walker Coonhound\", \"English foxhound\", \"Redbone Coonhound\", \"borzoi\", \"Irish Wolfhound\", \"Italian Greyhound\", \"Whippet\", \"Ibizan Hound\", \"Norwegian Elkhound\", \"Otterhound\", \"Saluki\", \"Scottish Deerhound\", \"Weimaraner\", \"Staffordshire Bull Terrier\", \"American Staffordshire Terrier\", \"Bedlington Terrier\", \"Border Terrier\", \"Kerry Blue Terrier\", \"Irish Terrier\", \"Norfolk Terrier\", \"Norwich Terrier\", \"Yorkshire Terrier\", \"Wire Fox Terrier\", \"Lakeland Terrier\", \"Sealyham Terrier\", \"Airedale Terrier\", \"Cairn Terrier\", \"Australian Terrier\", \"Dandie Dinmont Terrier\", 
\"Boston Terrier\", \"Miniature Schnauzer\", \"Giant Schnauzer\", \"Standard Schnauzer\", \"Scottish Terrier\", \"Tibetan Terrier\", \"Australian Silky Terrier\", \"Soft-coated Wheaten Terrier\", \"West Highland White Terrier\", \"Lhasa Apso\", \"Flat-Coated Retriever\", \"Curly-coated Retriever\", \"Golden Retriever\", \"Labrador Retriever\", \"Chesapeake Bay Retriever\", \"German Shorthaired Pointer\", \"Vizsla\", \"English Setter\", \"Irish Setter\", \"Gordon Setter\", \"Brittany dog\", \"Clumber Spaniel\", \"English Springer Spaniel\", \"Welsh Springer Spaniel\", \"Cocker Spaniel\", \"Sussex Spaniel\", \"Irish Water Spaniel\", \"Kuvasz\", \"Schipperke\", \"Groenendael dog\", \"Malinois\", \"Briard\", \"Australian Kelpie\", \"Komondor\", \"Old English Sheepdog\", \"Shetland Sheepdog\", \"collie\", \"Border Collie\", \"Bouvier des Flandres dog\", \"Rottweiler\", \"German Shepherd Dog\", \"Dobermann\", \"Miniature Pinscher\", \"Greater Swiss Mountain Dog\", \"Bernese Mountain Dog\", \"Appenzeller Sennenhund\", \"Entlebucher Sennenhund\", \"Boxer\", \"Bullmastiff\", \"Tibetan Mastiff\", \"French Bulldog\", \"Great Dane\", \"St. Bernard\", \"husky\", \"Alaskan Malamute\", \"Siberian Husky\", \"Dalmatian\", \"Affenpinscher\", \"Basenji\", \"pug\", \"Leonberger\", \"Newfoundland dog\", \"Great Pyrenees dog\", \"Samoyed\", \"Pomeranian\", \"Chow Chow\", \"Keeshond\", \"brussels griffon\", \"Pembroke Welsh Corgi\", \"Cardigan Welsh Corgi\", \"Toy Poodle\", \"Miniature Poodle\", \"Standard Poodle\", \"Mexican hairless dog (xoloitzcuintli)\", \"grey wolf\", \"Alaskan tundra wolf\", \"red wolf or maned wolf\", \"coyote\", \"dingo\", \"dhole\", \"African wild dog\", \"hyena\", \"red fox\", \"kit fox\", \"Arctic fox\", \"grey fox\", \"tabby cat\", \"tiger cat\", \"Persian cat\", \"Siamese cat\", \"Egyptian Mau\", \"cougar\", \"lynx\", \"leopard\", \"snow leopard\", \"jaguar\", \"lion\", \"tiger\", \"cheetah\", \"brown bear\", \"American black bear\", \"polar bear\", \"sloth bear\", \"mongoose\", \"meerkat\", \"tiger beetle\", \"ladybug\", \"ground beetle\", \"longhorn beetle\", \"leaf beetle\", \"dung beetle\", \"rhinoceros beetle\", \"weevil\", \"fly\", \"bee\", \"ant\", \"grasshopper\", \"cricket insect\", \"stick insect\", \"cockroach\", \"praying mantis\", \"cicada\", \"leafhopper\", \"lacewing\", \"dragonfly\", \"damselfly\", \"red admiral butterfly\", \"ringlet butterfly\", \"monarch butterfly\", \"small white butterfly\", \"sulphur butterfly\", \"gossamer-winged butterfly\", \"starfish\", \"sea urchin\", \"sea cucumber\", \"cottontail rabbit\", \"hare\", \"Angora rabbit\", \"hamster\", \"porcupine\", \"fox squirrel\", \"marmot\", \"beaver\", \"guinea pig\", \"common sorrel horse\", \"zebra\", \"pig\", \"wild boar\", \"warthog\", \"hippopotamus\", \"ox\", \"water buffalo\", \"bison\", \"ram (adult male sheep)\", \"bighorn sheep\", \"Alpine ibex\", \"hartebeest\", \"impala (antelope)\", \"gazelle\", \"arabian camel\", \"llama\", \"weasel\", \"mink\", \"European polecat\", \"black-footed ferret\", \"otter\", \"skunk\", \"badger\", \"armadillo\", \"three-toed sloth\", \"orangutan\", \"gorilla\", \"chimpanzee\", \"gibbon\", \"siamang\", \"guenon\", \"patas monkey\", \"baboon\", \"macaque\", \"langur\", \"black-and-white colobus\", \"proboscis monkey\", \"marmoset\", \"white-headed capuchin\", \"howler monkey\", \"titi monkey\", \"Geoffroy's spider monkey\", \"common squirrel monkey\", \"ring-tailed lemur\", \"indri\", \"Asian elephant\", \"African bush elephant\", \"red panda\", \"giant panda\", 
\"snoek fish\", \"eel\", \"silver salmon\", \"rock beauty fish\", \"clownfish\", \"sturgeon\", \"gar fish\", \"lionfish\", \"pufferfish\", \"abacus\", \"abaya\", \"academic gown\", \"accordion\", \"acoustic guitar\", \"aircraft carrier\", \"airliner\", \"airship\", \"altar\", \"ambulance\", \"amphibious vehicle\", \"analog clock\", \"apiary\", \"apron\", \"trash can\", \"assault rifle\", \"backpack\", \"bakery\", \"balance beam\", \"balloon\", \"ballpoint pen\", \"Band-Aid\", \"banjo\", \"baluster / handrail\", \"barbell\", \"barber chair\", \"barbershop\", \"barn\", \"barometer\", \"barrel\", \"wheelbarrow\", \"baseball\", \"basketball\", \"bassinet\", \"bassoon\", \"swimming cap\", \"bath towel\", \"bathtub\", \"station wagon\", \"lighthouse\", \"beaker\", \"military hat (bearskin or shako)\", \"beer bottle\", \"beer glass\", \"bell tower\", \"baby bib\", \"tandem bicycle\", \"bikini\", \"ring binder\", \"binoculars\", \"birdhouse\", \"boathouse\", \"bobsleigh\", \"bolo tie\", \"poke bonnet\", \"bookcase\", \"bookstore\", \"bottle cap\", \"hunting bow\", \"bow tie\", \"brass memorial plaque\", \"bra\", \"breakwater\", \"breastplate\", \"broom\", \"bucket\", \"buckle\", \"bulletproof vest\", \"high-speed train\", \"butcher shop\", \"taxicab\", \"cauldron\", \"candle\", \"cannon\", \"canoe\", \"can opener\", \"cardigan\", \"car mirror\", \"carousel\", \"tool kit\", \"cardboard box / carton\", \"car wheel\", \"automated teller machine\", \"cassette\", \"cassette player\", \"castle\", \"catamaran\", \"CD player\", \"cello\", \"mobile phone\", \"chain\", \"chain-link fence\", \"chain mail\", \"chainsaw\", \"storage chest\", \"chiffonier\", \"bell or wind chime\", \"china cabinet\", \"Christmas stocking\", \"church\", \"movie theater\", \"cleaver\", \"cliff dwelling\", \"cloak\", \"clogs\", \"cocktail shaker\", \"coffee mug\", \"coffeemaker\", \"spiral or coil\", \"combination lock\", \"computer keyboard\", \"candy store\", \"container ship\", \"convertible\", \"corkscrew\", \"cornet\", \"cowboy boot\", \"cowboy hat\", \"cradle\", \"construction crane\", \"crash helmet\", \"crate\", \"infant bed\", \"Crock Pot\", \"croquet ball\", \"crutch\", \"cuirass\", \"dam\", \"desk\", \"desktop computer\", \"rotary dial telephone\", \"diaper\", \"digital clock\", \"digital watch\", \"dining table\", \"dishcloth\", \"dishwasher\", \"disc brake\", \"dock\", \"dog sled\", \"dome\", \"doormat\", \"drilling rig\", \"drum\", \"drumstick\", \"dumbbell\", \"Dutch oven\", \"electric fan\", \"electric guitar\", \"electric locomotive\", \"entertainment center\", \"envelope\", \"espresso machine\", \"face powder\", \"feather boa\", \"filing cabinet\", \"fireboat\", \"fire truck\", \"fire screen\", \"flagpole\", \"flute\", \"folding chair\", \"football helmet\", \"forklift\", \"fountain\", \"fountain pen\", \"four-poster bed\", \"freight car\", \"French horn\", \"frying pan\", \"fur coat\", \"garbage truck\", \"gas mask or respirator\", \"gas pump\", \"goblet\", \"go-kart\", \"golf ball\", \"golf cart\", \"gondola\", \"gong\", \"gown\", \"grand piano\", \"greenhouse\", \"radiator grille\", \"grocery store\", \"guillotine\", \"hair clip\", \"hair spray\", \"half-track\", \"hammer\", \"hamper\", \"hair dryer\", \"hand-held computer\", \"handkerchief\", \"hard disk drive\", \"harmonica\", \"harp\", \"combine harvester\", \"hatchet\", \"holster\", \"home theater\", \"honeycomb\", \"hook\", \"hoop skirt\", \"gymnastic horizontal bar\", \"horse-drawn vehicle\", \"hourglass\", \"iPod\", \"clothes iron\", \"carved pumpkin\", 
\"jeans\", \"jeep\", \"T-shirt\", \"jigsaw puzzle\", \"rickshaw\", \"joystick\", \"kimono\", \"knee pad\", \"knot\", \"lab coat\", \"ladle\", \"lampshade\", \"laptop computer\", \"lawn mower\", \"lens cap\", \"letter opener\", \"library\", \"lifeboat\", \"lighter\", \"limousine\", \"ocean liner\", \"lipstick\", \"slip-on shoe\", \"lotion\", \"music speaker\", \"loupe magnifying glass\", \"sawmill\", \"magnetic compass\", \"messenger bag\", \"mailbox\", \"tights\", \"one-piece bathing suit\", \"manhole cover\", \"maraca\", \"marimba\", \"mask\", \"matchstick\", \"maypole\", \"maze\", \"measuring cup\", \"medicine cabinet\", \"megalith\", \"microphone\", \"microwave oven\", \"military uniform\", \"milk can\", \"minibus\", \"miniskirt\", \"minivan\", \"missile\", \"mitten\", \"mixing bowl\", \"mobile home\", \"ford model t\", \"modem\", \"monastery\", \"monitor\", \"moped\", \"mortar and pestle\", \"graduation cap\", \"mosque\", \"mosquito net\", \"vespa\", \"mountain bike\", \"tent\", \"computer mouse\", \"mousetrap\", \"moving van\", \"muzzle\", \"metal nail\", \"neck brace\", \"necklace\", \"baby pacifier\", \"notebook computer\", \"obelisk\", \"oboe\", \"ocarina\", \"odometer\", \"oil filter\", \"pipe organ\", \"oscilloscope\", \"overskirt\", \"bullock cart\", \"oxygen mask\", \"product packet / packaging\", \"paddle\", \"paddle wheel\", \"padlock\", \"paintbrush\", \"pajamas\", \"palace\", \"pan flute\", \"paper towel\", \"parachute\", \"parallel bars\", \"park bench\", \"parking meter\", \"railroad car\", \"patio\", \"payphone\", \"pedestal\", \"pencil case\", \"pencil sharpener\", \"perfume\", \"Petri dish\", \"photocopier\", \"plectrum\", \"Pickelhaube\", \"picket fence\", \"pickup truck\", \"pier\", \"piggy bank\", \"pill bottle\", \"pillow\", \"ping-pong ball\", \"pinwheel\", \"pirate ship\", \"drink pitcher\", \"block plane\", \"planetarium\", \"plastic bag\", \"plate rack\", \"farm plow\", \"plunger\", \"Polaroid camera\", \"pole\", \"police van\", \"poncho\", \"pool table\", \"soda bottle\", \"plant pot\", \"potter's wheel\", \"power drill\", \"prayer rug\", \"printer\", \"prison\", \"missile\", \"projector\", \"hockey puck\", \"punching bag\", \"purse\", \"quill\", \"quilt\", \"race car\", \"racket\", \"radiator\", \"radio\", \"radio telescope\", \"rain barrel\", \"recreational vehicle\", \"fishing casting reel\", \"reflex camera\", \"refrigerator\", \"remote control\", \"restaurant\", \"revolver\", \"rifle\", \"rocking chair\", \"rotisserie\", \"eraser\", \"rugby ball\", \"ruler measuring stick\", \"sneaker\", \"safe\", \"safety pin\", \"salt shaker\", \"sandal\", \"sarong\", \"saxophone\", \"scabbard\", \"weighing scale\", \"school bus\", \"schooner\", \"scoreboard\", \"CRT monitor\", \"screw\", \"screwdriver\", \"seat belt\", \"sewing machine\", \"shield\", \"shoe store\", \"shoji screen / room divider\", \"shopping basket\", \"shopping cart\", \"shovel\", \"shower cap\", \"shower curtain\", \"ski\", \"balaclava ski mask\", \"sleeping bag\", \"slide rule\", \"sliding door\", \"slot machine\", \"snorkel\", \"snowmobile\", \"snowplow\", \"soap dispenser\", \"soccer ball\", \"sock\", \"solar thermal collector\", \"sombrero\", \"soup bowl\", \"keyboard space bar\", \"space heater\", \"space shuttle\", \"spatula\", \"motorboat\", \"spider web\", \"spindle\", \"sports car\", \"spotlight\", \"stage\", \"steam locomotive\", \"through arch bridge\", \"steel drum\", \"stethoscope\", \"scarf\", \"stone wall\", \"stopwatch\", \"stove\", \"strainer\", \"tram\", \"stretcher\", \"couch\", 
\"stupa\", \"submarine\", \"suit\", \"sundial\", \"sunglasses\", \"sunglasses\", \"sunscreen\", \"suspension bridge\", \"mop\", \"sweatshirt\", \"swim trunks / shorts\", \"swing\", \"electrical switch\", \"syringe\", \"table lamp\", \"tank\", \"tape player\", \"teapot\", \"teddy bear\", \"television\", \"tennis ball\", \"thatched roof\", \"front curtain\", \"thimble\", \"threshing machine\", \"throne\", \"tile roof\", \"toaster\", \"tobacco shop\", \"toilet seat\", \"torch\", \"totem pole\", \"tow truck\", \"toy store\", \"tractor\", \"semi-trailer truck\", \"tray\", \"trench coat\", \"tricycle\", \"trimaran\", \"tripod\", \"triumphal arch\", \"trolleybus\", \"trombone\", \"hot tub\", \"turnstile\", \"typewriter keyboard\", \"umbrella\", \"unicycle\", \"upright piano\", \"vacuum cleaner\", \"vase\", \"vaulted or arched ceiling\", \"velvet fabric\", \"vending machine\", \"vestment\", \"viaduct\", \"violin\", \"volleyball\", \"waffle iron\", \"wall clock\", \"wallet\", \"wardrobe\", \"military aircraft\", \"sink\", \"washing machine\", \"water bottle\", \"water jug\", \"water tower\", \"whiskey jug\", \"whistle\", \"hair wig\", \"window screen\", \"window shade\", \"Windsor tie\", \"wine bottle\", \"airplane wing\", \"wok\", \"wooden spoon\", \"wool\", \"split-rail fence\", \"shipwreck\", \"sailboat\", \"yurt\", \"website\", \"comic book\", \"crossword\", \"traffic or street sign\", \"traffic light\", \"dust jacket\", \"menu\", \"plate\", \"guacamole\", \"consomme\", \"hot pot\", \"trifle\", \"ice cream\", \"popsicle\", \"baguette\", \"bagel\", \"pretzel\", \"cheeseburger\", \"hot dog\", \"mashed potatoes\", \"cabbage\", \"broccoli\", \"cauliflower\", \"zucchini\", \"spaghetti squash\", \"acorn squash\", \"butternut squash\", \"cucumber\", \"artichoke\", \"bell pepper\", \"cardoon\", \"mushroom\", \"Granny Smith apple\", \"strawberry\", \"orange\", \"lemon\", \"fig\", \"pineapple\", \"banana\", \"jackfruit\", \"cherimoya (custard apple)\", \"pomegranate\", \"hay\", \"carbonara\", \"chocolate syrup\", \"dough\", \"meatloaf\", \"pizza\", \"pot pie\", \"burrito\", \"red wine\", \"espresso\", \"tea cup\", \"eggnog\", \"mountain\", \"bubble\", \"cliff\", \"coral reef\", \"geyser\", \"lakeshore\", \"promontory\", \"sandbar\", \"beach\", \"valley\", \"volcano\", \"baseball player\", \"bridegroom\", \"scuba diver\", \"rapeseed\", \"daisy\", \"yellow lady's slipper\", \"corn\", \"acorn\", \"rose hip\", \"horse chestnut seed\", \"coral fungus\", \"agaric\", \"gyromitra\", \"stinkhorn mushroom\", \"earth star fungus\", \"hen of the woods mushroom\", \"bolete\", \"corn cob\", \"toilet paper\"]"
800
+ ],
801
+ "execution_count": 8,
802
+ "outputs": []
803
+ },
804
+ {
805
+ "cell_type": "markdown",
806
+ "metadata": {
807
+ "id": "eMQSCuBta2G6"
808
+ },
809
+ "source": [
810
+ "A subset of these class names are modified from the default ImageNet class names sourced from Anish Athalye's imagenet-simple-labels.\n",
811
+ "\n",
812
+ "These edits were made via trial and error and concentrated on the lowest performing classes according to top_1 and top_5 accuracy on the ImageNet training set for the RN50, RN101, and RN50x4 models. These tweaks improve top_1 by 1.5% on ViT-B/32 over using the default class names. Alec got bored somewhere along the way as gains started to diminish and never finished updating / tweaking the list. He also didn't revisit this with the better performing RN50x16, RN50x64, or any of the ViT models. He thinks it's likely another 0.5% to 1% top_1 could be gained from further work here. It'd be interesting to more rigorously study / understand this.\n",
813
+ "\n",
814
+ "Some examples beyond the crane/crane -> construction crane / bird crane issue mentioned in Section 3.1.4 of the paper include:\n",
815
+ "\n",
816
+ "- CLIP interprets \"nail\" as \"fingernail\" so we changed the label to \"metal nail\".\n",
817
+ "- ImageNet kite class refers to the bird of prey, not the flying toy, so we changed \"kite\" to \"kite (bird of prey)\"\n",
818
+ "- The ImageNet class for red wolf seems to include a lot of mislabeled maned wolfs so we changed \"red wolf\" to \"red wolf or maned wolf\""
819
+ ]
820
+ },
821
+ {
822
+ "cell_type": "code",
823
+ "metadata": {
824
+ "id": "toGtcd-Ji_MD",
825
+ "colab": {
826
+ "base_uri": "https://localhost:8080/"
827
+ },
828
+ "outputId": "46bcc85f-3968-4836-f3c6-e48848e944c4"
829
+ },
830
+ "source": [
831
+ "imagenet_templates = [\n",
832
+ " 'a bad photo of a {}.',\n",
833
+ " 'a photo of many {}.',\n",
834
+ " 'a sculpture of a {}.',\n",
835
+ " 'a photo of the hard to see {}.',\n",
836
+ " 'a low resolution photo of the {}.',\n",
837
+ " 'a rendering of a {}.',\n",
838
+ " 'graffiti of a {}.',\n",
839
+ " 'a bad photo of the {}.',\n",
840
+ " 'a cropped photo of the {}.',\n",
841
+ " 'a tattoo of a {}.',\n",
842
+ " 'the embroidered {}.',\n",
843
+ " 'a photo of a hard to see {}.',\n",
844
+ " 'a bright photo of a {}.',\n",
845
+ " 'a photo of a clean {}.',\n",
846
+ " 'a photo of a dirty {}.',\n",
847
+ " 'a dark photo of the {}.',\n",
848
+ " 'a drawing of a {}.',\n",
849
+ " 'a photo of my {}.',\n",
850
+ " 'the plastic {}.',\n",
851
+ " 'a photo of the cool {}.',\n",
852
+ " 'a close-up photo of a {}.',\n",
853
+ " 'a black and white photo of the {}.',\n",
854
+ " 'a painting of the {}.',\n",
855
+ " 'a painting of a {}.',\n",
856
+ " 'a pixelated photo of the {}.',\n",
857
+ " 'a sculpture of the {}.',\n",
858
+ " 'a bright photo of the {}.',\n",
859
+ " 'a cropped photo of a {}.',\n",
860
+ " 'a plastic {}.',\n",
861
+ " 'a photo of the dirty {}.',\n",
862
+ " 'a jpeg corrupted photo of a {}.',\n",
863
+ " 'a blurry photo of the {}.',\n",
864
+ " 'a photo of the {}.',\n",
865
+ " 'a good photo of the {}.',\n",
866
+ " 'a rendering of the {}.',\n",
867
+ " 'a {} in a video game.',\n",
868
+ " 'a photo of one {}.',\n",
869
+ " 'a doodle of a {}.',\n",
870
+ " 'a close-up photo of the {}.',\n",
871
+ " 'a photo of a {}.',\n",
872
+ " 'the origami {}.',\n",
873
+ " 'the {} in a video game.',\n",
874
+ " 'a sketch of a {}.',\n",
875
+ " 'a doodle of the {}.',\n",
876
+ " 'a origami {}.',\n",
877
+ " 'a low resolution photo of a {}.',\n",
878
+ " 'the toy {}.',\n",
879
+ " 'a rendition of the {}.',\n",
880
+ " 'a photo of the clean {}.',\n",
881
+ " 'a photo of a large {}.',\n",
882
+ " 'a rendition of a {}.',\n",
883
+ " 'a photo of a nice {}.',\n",
884
+ " 'a photo of a weird {}.',\n",
885
+ " 'a blurry photo of a {}.',\n",
886
+ " 'a cartoon {}.',\n",
887
+ " 'art of a {}.',\n",
888
+ " 'a sketch of the {}.',\n",
889
+ " 'a embroidered {}.',\n",
890
+ " 'a pixelated photo of a {}.',\n",
891
+ " 'itap of the {}.',\n",
892
+ " 'a jpeg corrupted photo of the {}.',\n",
893
+ " 'a good photo of a {}.',\n",
894
+ " 'a plushie {}.',\n",
895
+ " 'a photo of the nice {}.',\n",
896
+ " 'a photo of the small {}.',\n",
897
+ " 'a photo of the weird {}.',\n",
898
+ " 'the cartoon {}.',\n",
899
+ " 'art of the {}.',\n",
900
+ " 'a drawing of the {}.',\n",
901
+ " 'a photo of the large {}.',\n",
902
+ " 'a black and white photo of a {}.',\n",
903
+ " 'the plushie {}.',\n",
904
+ " 'a dark photo of a {}.',\n",
905
+ " 'itap of a {}.',\n",
906
+ " 'graffiti of the {}.',\n",
907
+ " 'a toy {}.',\n",
908
+ " 'itap of my {}.',\n",
909
+ " 'a photo of a cool {}.',\n",
910
+ " 'a photo of a small {}.',\n",
911
+ " 'a tattoo of the {}.',\n",
912
+ "]\n",
913
+ "\n",
914
+ "print(f\"{len(imagenet_classes)} classes, {len(imagenet_templates)} templates\")"
915
+ ],
916
+ "execution_count": 9,
917
+ "outputs": [
918
+ {
919
+ "output_type": "stream",
920
+ "text": [
921
+ "1000 classes, 80 templates\n"
922
+ ],
923
+ "name": "stdout"
924
+ }
925
+ ]
926
+ },
927
+ {
928
+ "cell_type": "markdown",
929
+ "metadata": {
930
+ "id": "aRB5OzgpHwqQ"
931
+ },
932
+ "source": [
933
+ "A similar, intuition-guided trial and error based on the ImageNet training set was used for templates. This list is pretty haphazard and was gradually made / expanded over the course of about a year of the project and was revisited / tweaked every few months. A surprising / weird thing was adding templates intended to help ImageNet-R performance (specifying different possible renditions of an object) improved standard ImageNet accuracy too.\n",
934
+ "\n",
935
+ "After the 80 templates were \"locked\" for the paper, we ran sequential forward selection over the list of 80 templates. The search terminated after ensembling 7 templates and selected them in the order below.\n",
936
+ "\n",
937
+ "1. itap of a {}.\n",
938
+ "2. a bad photo of the {}.\n",
939
+ "3. a origami {}.\n",
940
+ "4. a photo of the large {}.\n",
941
+ "5. a {} in a video game.\n",
942
+ "6. art of the {}.\n",
943
+ "7. a photo of the small {}.\n",
944
+ "\n",
945
+ "Speculating, we think it's interesting to see different scales (large and small), a difficult view (a bad photo), and \"abstract\" versions (origami, video game, art), were all selected for, but we haven't studied this in any detail. This subset performs a bit better than the full 80 ensemble reported in the paper, especially for the smaller models."
946
+ ]
947
+ },
948
+ {
949
+ "cell_type": "markdown",
950
+ "metadata": {
951
+ "id": "4W8ARJVqBJXs"
952
+ },
953
+ "source": [
954
+ "# Loading the Images\n",
955
+ "\n",
956
+ "The ILSVRC2012 datasets are no longer available for download publicly. We instead download the ImageNet-V2 dataset by [Recht et al.](https://arxiv.org/abs/1902.10811).\n",
957
+ "\n",
958
+ "If you have the ImageNet dataset downloaded, you can replace the dataset with the official torchvision loader, e.g.:\n",
959
+ "\n",
960
+ "```python\n",
961
+ "images = torchvision.datasets.ImageNet(\"path/to/imagenet\", split='val', transform=preprocess)\n",
962
+ "```"
963
+ ]
964
+ },
965
+ {
966
+ "cell_type": "code",
967
+ "metadata": {
968
+ "colab": {
969
+ "base_uri": "https://localhost:8080/"
970
+ },
971
+ "id": "moHR4UlHKsDc",
972
+ "outputId": "178f6d0d-9a34-4cbc-c9c1-e7ce09927980"
973
+ },
974
+ "source": [
975
+ "! pip install git+https://github.com/modestyachts/ImageNetV2_pytorch\n",
976
+ "\n",
977
+ "from imagenetv2_pytorch import ImageNetV2Dataset\n",
978
+ "\n",
979
+ "images = ImageNetV2Dataset(transform=preprocess)\n",
980
+ "loader = torch.utils.data.DataLoader(images, batch_size=32, num_workers=16)"
981
+ ],
982
+ "execution_count": 10,
983
+ "outputs": [
984
+ {
985
+ "output_type": "stream",
986
+ "text": [
987
+ "Collecting git+https://github.com/modestyachts/ImageNetV2_pytorch\n",
988
+ " Cloning https://github.com/modestyachts/ImageNetV2_pytorch to /tmp/pip-req-build-2fnslbyv\n",
989
+ " Running command git clone -q https://github.com/modestyachts/ImageNetV2_pytorch /tmp/pip-req-build-2fnslbyv\n",
990
+ "Building wheels for collected packages: imagenetv2-pytorch\n",
991
+ " Building wheel for imagenetv2-pytorch (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
992
+ " Created wheel for imagenetv2-pytorch: filename=imagenetv2_pytorch-0.1-cp36-none-any.whl size=2665 sha256=0978fc64026ab86ace52a9f3ebcef53331c43288433173c450a4b5ddcc197f31\n",
993
+ " Stored in directory: /tmp/pip-ephem-wheel-cache-4eewuaap/wheels/f7/09/0d/03ded955ce95b04c9590b999ae9be076bb5d8f389650aa2147\n",
994
+ "Successfully built imagenetv2-pytorch\n",
995
+ "Installing collected packages: imagenetv2-pytorch\n",
996
+ "Successfully installed imagenetv2-pytorch-0.1\n",
997
+ "Dataset matched-frequency not found on disk, downloading....\n"
998
+ ],
999
+ "name": "stdout"
1000
+ },
1001
+ {
1002
+ "output_type": "stream",
1003
+ "text": [
1004
+ "100%|██████████| 1.26G/1.26G [00:35<00:00, 35.7MiB/s]\n"
1005
+ ],
1006
+ "name": "stderr"
1007
+ },
1008
+ {
1009
+ "output_type": "stream",
1010
+ "text": [
1011
+ "Extracting....\n"
1012
+ ],
1013
+ "name": "stdout"
1014
+ }
1015
+ ]
1016
+ },
1017
+ {
1018
+ "cell_type": "markdown",
1019
+ "metadata": {
1020
+ "id": "fz6D-F-Wbrtp"
1021
+ },
1022
+ "source": [
1023
+ "# Creating zero-shot classifier weights"
1024
+ ]
1025
+ },
1026
+ {
1027
+ "cell_type": "code",
1028
+ "metadata": {
1029
+ "colab": {
1030
+ "base_uri": "https://localhost:8080/",
1031
+ "height": 66,
1032
+ "referenced_widgets": [
1033
+ "4e3a3f83649f45f8bef3434980634664",
1034
+ "f066bdb766664c788ba1e9de8d311e22",
1035
+ "4e7a7427d28a4ae684e0be4548eb9944",
1036
+ "cc9dc019c1334a46b2558ffa6c0dd6e6",
1037
+ "285c877d4f644f3a8a58c4eb5948101c",
1038
+ "075d6545e02e419ca565589eb5ffc318",
1039
+ "53f9106c80e84d5b8c3ec96162d1db98",
1040
+ "19c57d99e7c44cbda508ce558fde435d"
1041
+ ]
1042
+ },
1043
+ "id": "sRqDoz1Gbsii",
1044
+ "outputId": "5ab6c001-8a5e-42c9-ab46-4477a693229c"
1045
+ },
1046
+ "source": [
1047
+ "def zeroshot_classifier(classnames, templates):\n",
1048
+ " with torch.no_grad():\n",
1049
+ " zeroshot_weights = []\n",
1050
+ " for classname in tqdm(classnames):\n",
1051
+ " texts = [template.format(classname) for template in templates] #format with class\n",
1052
+ " texts = clip.tokenize(texts).cuda() #tokenize\n",
1053
+ " class_embeddings = model.encode_text(texts) #embed with text encoder\n",
1054
+ " class_embeddings /= class_embeddings.norm(dim=-1, keepdim=True)\n",
1055
+ " class_embedding = class_embeddings.mean(dim=0)\n",
1056
+ " class_embedding /= class_embedding.norm()\n",
1057
+ " zeroshot_weights.append(class_embedding)\n",
1058
+ " zeroshot_weights = torch.stack(zeroshot_weights, dim=1).cuda()\n",
1059
+ " return zeroshot_weights\n",
1060
+ "\n",
1061
+ "\n",
1062
+ "zeroshot_weights = zeroshot_classifier(imagenet_classes, imagenet_templates)"
1063
+ ],
1064
+ "execution_count": 11,
1065
+ "outputs": [
1066
+ {
1067
+ "output_type": "display_data",
1068
+ "data": {
1069
+ "application/vnd.jupyter.widget-view+json": {
1070
+ "model_id": "4e3a3f83649f45f8bef3434980634664",
1071
+ "version_minor": 0,
1072
+ "version_major": 2
1073
+ },
1074
+ "text/plain": [
1075
+ "HBox(children=(FloatProgress(value=0.0, max=1000.0), HTML(value='')))"
1076
+ ]
1077
+ },
1078
+ "metadata": {
1079
+ "tags": []
1080
+ }
1081
+ },
1082
+ {
1083
+ "output_type": "stream",
1084
+ "text": [
1085
+ "\n"
1086
+ ],
1087
+ "name": "stdout"
1088
+ }
1089
+ ]
1090
+ },
1091
+ {
1092
+ "cell_type": "markdown",
1093
+ "metadata": {
1094
+ "id": "1fZo7hG8iJP5"
1095
+ },
1096
+ "source": [
1097
+ "# Zero-shot prediction"
1098
+ ]
1099
+ },
1100
+ {
1101
+ "cell_type": "code",
1102
+ "metadata": {
1103
+ "id": "j4kPSZoShQxN"
1104
+ },
1105
+ "source": [
1106
+ "def accuracy(output, target, topk=(1,)):\n",
1107
+ " pred = output.topk(max(topk), 1, True, True)[1].t()\n",
1108
+ " correct = pred.eq(target.view(1, -1).expand_as(pred))\n",
1109
+ " return [float(correct[:k].reshape(-1).float().sum(0, keepdim=True).cpu().numpy()) for k in topk]"
1110
+ ],
1111
+ "execution_count": 12,
1112
+ "outputs": []
1113
+ },
1114
+ {
1115
+ "cell_type": "code",
1116
+ "metadata": {
1117
+ "colab": {
1118
+ "base_uri": "https://localhost:8080/",
1119
+ "height": 100,
1120
+ "referenced_widgets": [
1121
+ "fbb2b937b22049f5987f39f48c652a86",
1122
+ "0a1b6b76984349ccb36ca2fc4a4a0208",
1123
+ "c136afb47aa14ac2832093ee415c6f3e",
1124
+ "467a151e73744eccb199fe72aa352e5b",
1125
+ "f6d637c3fc3c46928d023441227130e5",
1126
+ "029e6eadacb8480193aab52ff073be8f",
1127
+ "30178355f76742898d37966b3875ef0a",
1128
+ "2e62544c03d64d6d92b94fcfaca2fc90"
1129
+ ]
1130
+ },
1131
+ "id": "wKJ7YsdlkDXo",
1132
+ "outputId": "90e084fd-86bc-4a52-a06e-61bff7aa86e0"
1133
+ },
1134
+ "source": [
1135
+ "with torch.no_grad():\n",
1136
+ " top1, top5, n = 0., 0., 0.\n",
1137
+ " for i, (images, target) in enumerate(tqdm(loader)):\n",
1138
+ " images = images.cuda()\n",
1139
+ " target = target.cuda()\n",
1140
+ " \n",
1141
+ " # predict\n",
1142
+ " image_features = model.encode_image(images)\n",
1143
+ " image_features /= image_features.norm(dim=-1, keepdim=True)\n",
1144
+ " logits = 100. * image_features @ zeroshot_weights\n",
1145
+ "\n",
1146
+ " # measure accuracy\n",
1147
+ " acc1, acc5 = accuracy(logits, target, topk=(1, 5))\n",
1148
+ " top1 += acc1\n",
1149
+ " top5 += acc5\n",
1150
+ " n += images.size(0)\n",
1151
+ "\n",
1152
+ "top1 = (top1 / n) * 100\n",
1153
+ "top5 = (top5 / n) * 100 \n",
1154
+ "\n",
1155
+ "print(f\"Top-1 accuracy: {top1:.2f}\")\n",
1156
+ "print(f\"Top-5 accuracy: {top5:.2f}\")"
1157
+ ],
1158
+ "execution_count": 13,
1159
+ "outputs": [
1160
+ {
1161
+ "output_type": "display_data",
1162
+ "data": {
1163
+ "application/vnd.jupyter.widget-view+json": {
1164
+ "model_id": "fbb2b937b22049f5987f39f48c652a86",
1165
+ "version_minor": 0,
1166
+ "version_major": 2
1167
+ },
1168
+ "text/plain": [
1169
+ "HBox(children=(FloatProgress(value=0.0, max=313.0), HTML(value='')))"
1170
+ ]
1171
+ },
1172
+ "metadata": {
1173
+ "tags": []
1174
+ }
1175
+ },
1176
+ {
1177
+ "output_type": "stream",
1178
+ "text": [
1179
+ "\n",
1180
+ "Top-1 accuracy: 55.73\n",
1181
+ "Top-5 accuracy: 83.45\n"
1182
+ ],
1183
+ "name": "stdout"
1184
+ }
1185
+ ]
1186
+ }
1187
+ ]
1188
+ }
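The notebook's template-selection cell above describes a greedy sequential forward selection over the 80 prompt templates but does not include the search code. Below is a minimal sketch of that kind of search under stated assumptions: `evaluate_accuracy` is a hypothetical scoring callable (not part of this repository) that could, for instance, rebuild the zero-shot weights with `zeroshot_classifier(imagenet_classes, candidate_templates)` and measure top-1 accuracy on a held-out split.

```python
# Hedged sketch of sequential forward selection over prompt templates.
# `evaluate_accuracy` is a hypothetical scoring function supplied by the
# caller; the exact search used for the paper is not published here.

def forward_select_templates(templates, evaluate_accuracy, max_templates=None):
    """Greedily add the template that most improves the score; stop when nothing helps."""
    selected, remaining = [], list(templates)
    best_score = float("-inf")

    while remaining and (max_templates is None or len(selected) < max_templates):
        # Score every candidate ensemble obtained by adding one more template.
        score, best_template = max(
            (evaluate_accuracy(selected + [t]), t) for t in remaining
        )
        if score <= best_score:
            break  # no single remaining template improves the ensemble
        selected.append(best_template)
        remaining.remove(best_template)
        best_score = score

    return selected
```

Since the evaluation split and metric used for the paper's search are not specified in the notebook, treat this purely as an illustration of the procedure that produced the 7-template subset, not as a reproduction of it.
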
CLIP_/requirements.txt ADDED
@@ -0,0 +1,5 @@
 
 
 
 
 
 
1
+ ftfy
2
+ regex
3
+ tqdm
4
+ torch~=1.7.1
5
+ torchvision~=0.8.2
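The `~=` specifiers above pin compatible releases (for example, `torch~=1.7.1` allows any 1.7.x at or above 1.7.1, but not 1.8). If you only want the runtime dependencies rather than the package itself, they can be installed straight from this file; the path below assumes you are in the repository root.

```bash
$ pip install -r CLIP_/requirements.txt
```
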
CLIP_/setup.py ADDED
@@ -0,0 +1,21 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import os
2
+
3
+ import pkg_resources
4
+ from setuptools import setup, find_packages
5
+
6
+ setup(
7
+ name="clip",
8
+ py_modules=["clip"],
9
+ version="1.0",
10
+ description="",
11
+ author="OpenAI",
12
+ packages=find_packages(exclude=["tests*"]),
13
+ install_requires=[
14
+ str(r)
15
+ for r in pkg_resources.parse_requirements(
16
+ open(os.path.join(os.path.dirname(__file__), "requirements.txt"))
17
+ )
18
+ ],
19
+ include_package_data=True,
20
+ extras_require={'dev': ['pytest']},
21
+ )
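`setup.py` fills `install_requires` by parsing `requirements.txt` at install time and declares a `dev` extra whose only addition is `pytest`. A local editable install for development might therefore look like the following (run from the repository root; `CLIP_` is simply the directory name used in this commit):

```bash
$ cd CLIP_
$ pip install -e ".[dev]"   # editable install plus the pytest dev extra
```
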
CLIP_/tests/test_consistency.py ADDED
@@ -0,0 +1,25 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import numpy as np
2
+ import pytest
3
+ import torch
4
+ from PIL import Image
5
+
6
+ import clip
7
+
8
+
9
+ @pytest.mark.parametrize('model_name', clip.available_models())
10
+ def test_consistency(model_name):
11
+ device = "cpu"
12
+ jit_model, transform = clip.load(model_name, device=device)
13
+ py_model, _ = clip.load(model_name, device=device, jit=False)
14
+
15
+ image = transform(Image.open("CLIP.png")).unsqueeze(0).to(device)
16
+ text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)
17
+
18
+ with torch.no_grad():
19
+ logits_per_image, _ = jit_model(image, text)
20
+ jit_probs = logits_per_image.softmax(dim=-1).cpu().numpy()
21
+
22
+ logits_per_image, _ = py_model(image, text)
23
+ py_probs = logits_per_image.softmax(dim=-1).cpu().numpy()
24
+
25
+ assert np.allclose(jit_probs, py_probs, atol=0.01, rtol=0.1)
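The consistency test is parametrized over every model name returned by `clip.available_models()`, and `clip.load` downloads each checkpoint on first use, so a full run can take a while and a fair amount of disk space. It also opens `CLIP.png` via a relative path, so pytest should be invoked from the directory that contains that image. A typical invocation might be:

```bash
$ cd CLIP_
$ python -m pytest tests/test_consistency.py
```
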