BigData-AI @ KSU committed on
Commit
e3d76d4
1 Parent(s): 653609a

adding clip file since it was modified

CLIP/CLIP.png ADDED
CLIP/Interacting_with_CLIP.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
CLIP/LICENSE ADDED
@@ -0,0 +1,22 @@
1
+ MIT License
2
+
3
+ Copyright (c) 2021 OpenAI
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.
22
+
CLIP/README.md ADDED
@@ -0,0 +1,192 @@
1
+ # CLIP
2
+
3
+ [[Blog]](https://openai.com/blog/clip/) [[Paper]](https://cdn.openai.com/papers/Learning_Transferable_Visual_Models_From_Natural_Language_Supervision.pdf) [[Model Card]](model-card.md) [[Colab]](https://colab.research.google.com/github/openai/clip/blob/master/Interacting_with_CLIP.ipynb)
4
+
5
+ CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3. We found CLIP matches the performance of the original ResNet50 on ImageNet “zero-shot” without using any of the original 1.28M labeled examples, overcoming several major challenges in computer vision.
6
+
7
+
8
+
9
+ ## Approach
10
+
11
+ ![CLIP](CLIP.png)
12
+
13
+
14
+
15
+ ## Usage
16
+
17
+ First, [install PyTorch 1.7.1](https://pytorch.org/get-started/locally/) and torchvision, as well as small additional dependencies. On a CUDA GPU machine, the following will do the trick:
18
+
19
+ ```bash
20
+ $ conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=11.0
21
+ $ pip install ftfy regex tqdm
22
+ ```
23
+
24
+ Replace `cudatoolkit=11.0` above with the appropriate CUDA version on your machine or `cpuonly` when installing on a machine without a GPU.
25
+
26
+ ```python
27
+ import torch
28
+ import clip
29
+ from PIL import Image
30
+
31
+ device = "cuda" if torch.cuda.is_available() else "cpu"
32
+ model, preprocess = clip.load("ViT-B/32", device=device)
33
+
34
+ image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)
35
+ text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)
36
+
37
+ with torch.no_grad():
38
+ image_features = model.encode_image(image)
39
+ text_features = model.encode_text(text)
40
+
41
+ logits_per_image, logits_per_text = model(image, text)
42
+ probs = logits_per_image.softmax(dim=-1).cpu().numpy()
43
+
44
+ print("Label probs:", probs) # prints: [[0.9927937 0.00421068 0.00299572]]
45
+ ```
46
+
47
+
48
+ ## API
49
+
50
+ The CLIP module `clip` provides the following methods:
51
+
52
+ #### `clip.available_models()`
53
+
54
+ Returns the name(s) of the available CLIP models.
55
+
56
+ #### `clip.load(name, device=..., jit=False)`
57
+
58
+ Returns the model and the TorchVision transform needed by the model, specified by the model name returned by `clip.available_models()`. It will download the model as necessary. The device to run the model can be optionally specified, and the default is to use the first CUDA device if there is any, otherwise the CPU.
59
+
60
+ The non-JIT version of the model is loaded by default; pass `jit=True` to load the JIT-optimized model.
61
+
62
+ #### `clip.tokenize(text: Union[str, List[str]], context_length=77)`
63
+
64
+ Returns a LongTensor containing tokenized sequences of the given text input(s). This can be used as the input to the model.
65
+
66
+ ---
67
+
68
+ The model returned by `clip.load()` supports the following methods:
69
+
70
+ #### `model.encode_image(image: Tensor)`
71
+
72
+ Given a batch of images, returns the image features encoded by the vision portion of the CLIP model.
73
+
74
+ #### `model.encode_text(text: Tensor)`
75
+
76
+ Given a batch of text tokens, returns the text features encoded by the language portion of the CLIP model.
77
+
78
+ #### `model(image: Tensor, text: Tensor)`
79
+
80
+ Given a batch of images and a batch of text tokens, returns two Tensors, containing the logit scores corresponding to each image and text input. The values are cosine similarities between the corresponding image and text features, times 100.
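A quick illustration of the statement above (a minimal sketch: it assumes the `logits_per_image, logits_per_text` return signature shown in the Usage section, the `image`/`text` tensors prepared there, and that the model exposes `logit_scale` as in the reference implementation):

```python
import torch

with torch.no_grad():
    logits_per_image, logits_per_text = model(image, text)
    # The logits are scaled cosine similarities, so dividing by the
    # exponentiated logit scale recovers the raw similarities.
    cosine_sim = logits_per_image / model.logit_scale.exp()
    probs = logits_per_image.softmax(dim=-1)  # probabilities over the text candidates
```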
81
+
82
+
83
+
84
+ ## More Examples
85
+
86
+ ### Zero-Shot Prediction
87
+
88
+ The code below performs zero-shot prediction using CLIP, as shown in Appendix B in the paper. This example takes an image from the [CIFAR-100 dataset](https://www.cs.toronto.edu/~kriz/cifar.html), and predicts the most likely labels among the 100 textual labels from the dataset.
89
+
90
+ ```python
91
+ import os
92
+ import clip
93
+ import torch
94
+ from torchvision.datasets import CIFAR100
95
+
96
+ # Load the model
97
+ device = "cuda" if torch.cuda.is_available() else "cpu"
98
+ model, preprocess = clip.load('ViT-B/32', device)
99
+
100
+ # Download the dataset
101
+ cifar100 = CIFAR100(root=os.path.expanduser("~/.cache"), download=True, train=False)
102
+
103
+ # Prepare the inputs
104
+ image, class_id = cifar100[3637]
105
+ image_input = preprocess(image).unsqueeze(0).to(device)
106
+ text_inputs = torch.cat([clip.tokenize(f"a photo of a {c}") for c in cifar100.classes]).to(device)
107
+
108
+ # Calculate features
109
+ with torch.no_grad():
110
+ image_features = model.encode_image(image_input)
111
+ text_features = model.encode_text(text_inputs)
112
+
113
+ # Pick the top 5 most similar labels for the image
114
+ image_features /= image_features.norm(dim=-1, keepdim=True)
115
+ text_features /= text_features.norm(dim=-1, keepdim=True)
116
+ similarity = (100.0 * image_features @ text_features.T).softmax(dim=-1)
117
+ values, indices = similarity[0].topk(5)
118
+
119
+ # Print the result
120
+ print("\nTop predictions:\n")
121
+ for value, index in zip(values, indices):
122
+ print(f"{cifar100.classes[index]:>16s}: {100 * value.item():.2f}%")
123
+ ```
124
+
125
+ The output will look like the following (the exact numbers may be slightly different depending on the compute device):
126
+
127
+ ```
128
+ Top predictions:
129
+
130
+ snake: 65.31%
131
+ turtle: 12.29%
132
+ sweet_pepper: 3.83%
133
+ lizard: 1.88%
134
+ crocodile: 1.75%
135
+ ```
136
+
137
+ Note that this example uses the `encode_image()` and `encode_text()` methods that return the encoded features of given inputs.
138
+
139
+
140
+ ### Linear-probe evaluation
141
+
142
+ The example below uses [scikit-learn](https://scikit-learn.org/) to perform logistic regression on image features.
143
+
144
+ ```python
145
+ import os
146
+ import clip
147
+ import torch
148
+
149
+ import numpy as np
150
+ from sklearn.linear_model import LogisticRegression
151
+ from torch.utils.data import DataLoader
152
+ from torchvision.datasets import CIFAR100
153
+ from tqdm import tqdm
154
+
155
+ # Load the model
156
+ device = "cuda" if torch.cuda.is_available() else "cpu"
157
+ model, preprocess = clip.load('ViT-B/32', device)
158
+
159
+ # Load the dataset
160
+ root = os.path.expanduser("~/.cache")
161
+ train = CIFAR100(root, download=True, train=True, transform=preprocess)
162
+ test = CIFAR100(root, download=True, train=False, transform=preprocess)
163
+
164
+
165
+ def get_features(dataset):
166
+ all_features = []
167
+ all_labels = []
168
+
169
+ with torch.no_grad():
170
+ for images, labels in tqdm(DataLoader(dataset, batch_size=100)):
171
+ features = model.encode_image(images.to(device))
172
+
173
+ all_features.append(features)
174
+ all_labels.append(labels)
175
+
176
+ return torch.cat(all_features).cpu().numpy(), torch.cat(all_labels).cpu().numpy()
177
+
178
+ # Calculate the image features
179
+ train_features, train_labels = get_features(train)
180
+ test_features, test_labels = get_features(test)
181
+
182
+ # Perform logistic regression
183
+ classifier = LogisticRegression(random_state=0, C=0.316, max_iter=1000, verbose=1)
184
+ classifier.fit(train_features, train_labels)
185
+
186
+ # Evaluate using the logistic regression classifier
187
+ predictions = classifier.predict(test_features)
188
+ accuracy = np.mean((test_labels == predictions).astype(float)) * 100.
189
+ print(f"Accuracy = {accuracy:.3f}")
190
+ ```
191
+
192
+ Note that the `C` value should be determined via a hyperparameter sweep using a validation split.
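A minimal sketch of such a sweep, reusing the `train_features` and `train_labels` arrays computed above; the search grid and validation fraction are illustrative choices, not prescribed by the original example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hold out part of the training set as a validation split (fraction is illustrative).
tr_x, val_x, tr_y, val_y = train_test_split(
    train_features, train_labels, test_size=0.2, random_state=0)

best_c, best_acc = None, -1.0
for c in np.logspace(-3, 3, 7):  # illustrative grid of C values
    clf = LogisticRegression(random_state=0, C=c, max_iter=1000)
    clf.fit(tr_x, tr_y)
    acc = np.mean(clf.predict(val_x) == val_y)
    if acc > best_acc:
        best_c, best_acc = c, acc

print(f"Best C = {best_c:.3f} (validation accuracy = {100 * best_acc:.2f}%)")
```

The best `C` found on the validation split can then be used to refit the classifier on the full training set before evaluating on the test set.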
CLIP/__pycache__/clip.cpython-36.pyc ADDED
Binary file (5.26 kB). View file
 
CLIP/__pycache__/clip.cpython-37.pyc ADDED
Binary file (7.75 kB). View file
 
CLIP/__pycache__/clip.cpython-38.pyc ADDED
Binary file (7.76 kB). View file
 
CLIP/__pycache__/clip.cpython-39.pyc ADDED
Binary file (7.83 kB). View file
 
CLIP/__pycache__/model.cpython-36.pyc ADDED
Binary file (14 kB). View file
 
CLIP/__pycache__/model.cpython-37.pyc ADDED
Binary file (15.9 kB). View file
 
CLIP/__pycache__/model.cpython-38.pyc ADDED
Binary file (15.6 kB). View file
 
CLIP/__pycache__/model.cpython-39.pyc ADDED
Binary file (15.5 kB). View file
 
CLIP/__pycache__/simple_tokenizer.cpython-36.pyc ADDED
Binary file (5.79 kB). View file
 
CLIP/__pycache__/simple_tokenizer.cpython-37.pyc ADDED
Binary file (5.74 kB). View file
 
CLIP/__pycache__/simple_tokenizer.cpython-38.pyc ADDED
Binary file (5.77 kB). View file
 
CLIP/__pycache__/simple_tokenizer.cpython-39.pyc ADDED
Binary file (5.73 kB). View file
 
CLIP/bpe_simple_vocab_16e6.txt.gz ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:924691ac288e54409236115652ad4aa250f48203de50a9e4722a6ecd48d6804a
3
+ size 1356917
CLIP/clip.py ADDED
@@ -0,0 +1,213 @@
1
+ import hashlib
2
+ import os
3
+ import urllib
4
+ import warnings
5
+ from typing import Union, List
6
+
7
+ import torch
8
+ from PIL import Image
9
+ from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize
10
+ from tqdm import tqdm
11
+
12
+ from .model import build_model
13
+ from .simple_tokenizer import SimpleTokenizer as _Tokenizer
14
+
15
+ try:
16
+ from torchvision.transforms import InterpolationMode
17
+ BICUBIC = InterpolationMode.BICUBIC
18
+ except ImportError:
19
+ BICUBIC = Image.BICUBIC
20
+
21
+
22
+ if tuple(int(p) for p in torch.__version__.split("+")[0].split(".")[:3] if p.isdigit()) < (1, 7, 1):
23
+ warnings.warn("PyTorch version 1.7.1 or higher is recommended")
24
+
25
+
26
+ __all__ = ["available_models", "load", "tokenize"]
27
+ _tokenizer = _Tokenizer()
28
+
29
+ _MODELS = {
30
+ "RN50": "https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt",
31
+ "RN101": "https://openaipublic.azureedge.net/clip/models/8fa8567bab74a42d41c5915025a8e4538c3bdbe8804a470a72f30b0d94fab599/RN101.pt",
32
+ "RN50x4": "https://openaipublic.azureedge.net/clip/models/7e526bd135e493cef0776de27d5f42653e6b4c8bf9e0f653bb11773263205fdd/RN50x4.pt",
33
+ "RN50x16": "https://openaipublic.azureedge.net/clip/models/52378b407f34354e150460fe41077663dd5b39c54cd0bfd2b27167a4a06ec9aa/RN50x16.pt",
34
+ "ViT-B/32": "https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt",
35
+ "ViT-B/16": "https://openaipublic.azureedge.net/clip/models/5806e77cd80f8b59890b7e101eabd078d9fb84e6937f9e85e4ecb61988df416f/ViT-B-16.pt",
36
+ }
37
+
38
+
39
+ def _download(url: str, root: str = os.path.expanduser("~/.cache/clip")):
40
+ os.makedirs(root, exist_ok=True)
41
+ filename = os.path.basename(url)
42
+
43
+ expected_sha256 = url.split("/")[-2]
44
+ download_target = os.path.join(root, filename)
45
+
46
+ if os.path.exists(download_target) and not os.path.isfile(download_target):
47
+ raise RuntimeError(f"{download_target} exists and is not a regular file")
48
+
49
+ if os.path.isfile(download_target):
50
+ if hashlib.sha256(open(download_target, "rb").read()).hexdigest() == expected_sha256:
51
+ return download_target
52
+ else:
53
+ warnings.warn(f"{download_target} exists, but the SHA256 checksum does not match; re-downloading the file")
54
+
55
+ with urllib.request.urlopen(url) as source, open(download_target, "wb") as output:
56
+ with tqdm(total=int(source.info().get("Content-Length")), ncols=80, unit='iB', unit_scale=True) as loop:
57
+ while True:
58
+ buffer = source.read(8192)
59
+ if not buffer:
60
+ break
61
+
62
+ output.write(buffer)
63
+ loop.update(len(buffer))
64
+
65
+ if hashlib.sha256(open(download_target, "rb").read()).hexdigest() != expected_sha256:
66
+ raise RuntimeError(f"Model has been downloaded but the SHA256 checksum does not not match")
67
+
68
+ return download_target
69
+
70
+
71
+ def _transform(n_px):
72
+ return Compose([
73
+ Resize(n_px, interpolation=BICUBIC),
74
+ CenterCrop(n_px),
75
+ lambda image: image.convert("RGB"),
76
+ ToTensor(),
77
+ Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)),
78
+ ])
79
+
80
+
81
+ def available_models() -> List[str]:
82
+ """Returns the names of available CLIP models"""
83
+ return list(_MODELS.keys())
84
+
85
+
86
+ def load(name: str, device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", jit=False):
87
+ """Load a CLIP model
88
+ Parameters
89
+ ----------
90
+ name : str
91
+ A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict
92
+ device : Union[str, torch.device]
93
+ The device to put the loaded model
94
+ jit : bool
95
+ Whether to load the optimized JIT model or more hackable non-JIT model (default).
96
+ Returns
97
+ -------
98
+ model : torch.nn.Module
99
+ The CLIP model
100
+ preprocess : Callable[[PIL.Image], torch.Tensor]
101
+ A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input
102
+ """
103
+ if name in _MODELS:
104
+ model_path = _download(_MODELS[name])
105
+ elif os.path.isfile(name):
106
+ model_path = name
107
+ else:
108
+ raise RuntimeError(f"Model {name} not found; available models = {available_models()}")
109
+
110
+ try:
111
+ # loading JIT archive
112
+ model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval()
113
+ state_dict = None
114
+ except RuntimeError:
115
+ # loading saved state dict
116
+ if jit:
117
+ warnings.warn(f"File {model_path} is not a JIT archive. Loading as a state dict instead")
118
+ jit = False
119
+ state_dict = torch.load(model_path, map_location="cpu")
120
+
121
+ if not jit:
122
+ print("Heree.....")
123
+ model = build_model(state_dict or model.state_dict()).to(device)
124
+ if str(device) == "cpu":
125
+ model.float()
126
+ return model, _transform(model.visual.input_resolution)
127
+
128
+ # patch the device names
129
+ device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[])
130
+ device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1]
131
+
132
+ def patch_device(module):
133
+ try:
134
+ graphs = [module.graph] if hasattr(module, "graph") else []
135
+ except RuntimeError:
136
+ graphs = []
137
+
138
+ if hasattr(module, "forward1"):
139
+ graphs.append(module.forward1.graph)
140
+
141
+ for graph in graphs:
142
+ for node in graph.findAllNodes("prim::Constant"):
143
+ if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"):
144
+ node.copyAttributes(device_node)
145
+
146
+ model.apply(patch_device)
147
+ patch_device(model.encode_image)
148
+ patch_device(model.encode_text)
149
+
150
+ # patch dtype to float32 on CPU
151
+ if str(device) == "cpu":
152
+ float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[])
153
+ float_input = list(float_holder.graph.findNode("aten::to").inputs())[1]
154
+ float_node = float_input.node()
155
+
156
+ def patch_float(module):
157
+ try:
158
+ graphs = [module.graph] if hasattr(module, "graph") else []
159
+ except RuntimeError:
160
+ graphs = []
161
+
162
+ if hasattr(module, "forward1"):
163
+ graphs.append(module.forward1.graph)
164
+
165
+ for graph in graphs:
166
+ for node in graph.findAllNodes("aten::to"):
167
+ inputs = list(node.inputs())
168
+ for i in [1, 2]: # dtype can be the second or third argument to aten::to()
169
+ if inputs[i].node()["value"] == 5:
170
+ inputs[i].node().copyAttributes(float_node)
171
+
172
+ model.apply(patch_float)
173
+ patch_float(model.encode_image)
174
+ patch_float(model.encode_text)
175
+
176
+ model.float()
177
+
178
+ return model, _transform(model.input_resolution.item())
179
+
180
+
181
+ def tokenize(texts: Union[str, List[str]], context_length: int = 77, truncate: bool = False) -> torch.LongTensor:
182
+ """
183
+ Returns the tokenized representation of given input string(s)
184
+ Parameters
185
+ ----------
186
+ texts : Union[str, List[str]]
187
+ An input string or a list of input strings to tokenize
188
+ context_length : int
189
+ The context length to use; all CLIP models use 77 as the context length
190
+ truncate: bool
191
+ Whether to truncate the text in case its encoding is longer than the context length
192
+ Returns
193
+ -------
194
+ A two-dimensional tensor containing the resulting tokens, shape = [number of input strings, context_length]
195
+ """
196
+ if isinstance(texts, str):
197
+ texts = [texts]
198
+
199
+ sot_token = _tokenizer.encoder["<|startoftext|>"]
200
+ eot_token = _tokenizer.encoder["<|endoftext|>"]
201
+ all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts]
202
+ result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
203
+
204
+ for i, tokens in enumerate(all_tokens):
205
+ if len(tokens) > context_length:
206
+ if truncate:
207
+ tokens = tokens[:context_length]
208
+ tokens[-1] = eot_token
209
+ else:
210
+ raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
211
+ result[i, :len(tokens)] = torch.tensor(tokens)
212
+
213
+ return result
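A small usage sketch of the `truncate` flag documented in the docstring above (illustrative only; it assumes this module is importable as `clip`):

```python
import torch
import clip  # assumes the package is importable under this name

tokens = clip.tokenize(["a diagram", "a dog"])    # shape: [2, 77]
too_long = "a photo of " + "a very " * 100 + "long caption"
tokens = clip.tokenize(too_long, truncate=True)   # truncated to 77 tokens, EOT kept as the last token
assert tokens.shape == (1, 77) and tokens.dtype == torch.long
```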
CLIP/clip_old.py ADDED
@@ -0,0 +1,140 @@
1
+ import hashlib
2
+ import os
3
+ import urllib
4
+ import warnings
5
+ from typing import Union, List
6
+
7
+ import torch
8
+ from PIL import Image
9
+ from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize
10
+ from tqdm import tqdm
11
+
12
+ from CLIP.model import build_model
13
+ from CLIP.simple_tokenizer import SimpleTokenizer as _Tokenizer
14
+
15
+ __all__ = ["available_models", "load", "tokenize"]
16
+ _tokenizer = _Tokenizer()
17
+
18
+ _MODELS = {
19
+ "RN50": "https://openaipublic.azureedge.net/clip/models/afeb0e10f9e5a86da6080e35cf09123aca3b358a0c3e3b6c78a7b63bc04b6762/RN50.pt",
20
+ "ViT-B/32": "https://openaipublic.azureedge.net/clip/models/40d365715913c9da98579312b702a82c18be219cc2a73407c4526f58eba950af/ViT-B-32.pt",
21
+ }
22
+
23
+
24
+ def _download(url: str, root: str = os.path.expanduser("~/.cache/clip")):
25
+ os.makedirs(root, exist_ok=True)
26
+ filename = os.path.basename(url)
27
+
28
+ expected_sha256 = url.split("/")[-2]
29
+ download_target = os.path.join(root, filename)
30
+
31
+ if os.path.exists(download_target) and not os.path.isfile(download_target):
32
+ raise RuntimeError(f"{download_target} exists and is not a regular file")
33
+
34
+ if os.path.isfile(download_target):
35
+ if hashlib.sha256(open(download_target, "rb").read()).hexdigest() == expected_sha256:
36
+ return download_target
37
+ else:
38
+ warnings.warn(f"{download_target} exists, but the SHA256 checksum does not match; re-downloading the file")
39
+
40
+ with urllib.request.urlopen(url) as source, open(download_target, "wb") as output:
41
+ with tqdm(total=int(source.info().get("Content-Length")), ncols=80) as loop:
42
+ while True:
43
+ buffer = source.read(8192)
44
+ if not buffer:
45
+ break
46
+
47
+ output.write(buffer)
48
+ loop.update(len(buffer))
49
+
50
+ if hashlib.sha256(open(download_target, "rb").read()).hexdigest() != expected_sha256:
51
+ raise RuntimeError(f"Model has been downloaded but the SHA256 checksum does not not match")
52
+
53
+ return download_target
54
+
55
+
56
+ def available_models():
57
+ return list(_MODELS.keys())
58
+
59
+
60
+ def load(name: str, device: Union[str, torch.device] = "cuda" if torch.cuda.is_available() else "cpu", jit=True):
61
+ if name not in _MODELS:
62
+ raise RuntimeError(f"Model {name} not found; available models = {available_models()}")
63
+
64
+ model_path = _download(_MODELS[name])
65
+ model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval()
66
+ n_px = model.input_resolution.item()
67
+
68
+ transform = Compose([
69
+ Resize(n_px, interpolation=Image.BICUBIC),
70
+ CenterCrop(n_px),
71
+ lambda image: image.convert("RGB"),
72
+ ToTensor(),
73
+ Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)),
74
+ ])
75
+
76
+ if not jit:
77
+ print("get Model.....")
78
+ model = build_model(model.state_dict()).to(device)
79
+ return model, transform
80
+
81
+ # patch the device names
82
+ device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[])
83
+ device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1]
84
+
85
+ def patch_device(module):
86
+ graphs = [module.graph] if hasattr(module, "graph") else []
87
+ if hasattr(module, "forward1"):
88
+ graphs.append(module.forward1.graph)
89
+
90
+ for graph in graphs:
91
+ for node in graph.findAllNodes("prim::Constant"):
92
+ if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"):
93
+ node.copyAttributes(device_node)
94
+
95
+ model.apply(patch_device)
96
+ patch_device(model.encode_image)
97
+ patch_device(model.encode_text)
98
+
99
+ # patch dtype to float32 on CPU
100
+ if device == "cpu":
101
+ float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[])
102
+ float_input = list(float_holder.graph.findNode("aten::to").inputs())[1]
103
+ float_node = float_input.node()
104
+
105
+ def patch_float(module):
106
+ graphs = [module.graph] if hasattr(module, "graph") else []
107
+ if hasattr(module, "forward1"):
108
+ graphs.append(module.forward1.graph)
109
+
110
+ for graph in graphs:
111
+ for node in graph.findAllNodes("aten::to"):
112
+ inputs = list(node.inputs())
113
+ for i in [1, 2]: # dtype can be the second or third argument to aten::to()
114
+ if inputs[i].node()["value"] == 5:
115
+ inputs[i].node().copyAttributes(float_node)
116
+
117
+ model.apply(patch_float)
118
+ patch_float(model.encode_image)
119
+ patch_float(model.encode_text)
120
+
121
+ model.float()
122
+
123
+ return model, transform
124
+
125
+
126
+ def tokenize(texts: Union[str, List[str]], context_length: int = 77):
127
+ if isinstance(texts, str):
128
+ texts = [texts]
129
+
130
+ sot_token = _tokenizer.encoder["<|startoftext|>"]
131
+ eot_token = _tokenizer.encoder["<|endoftext|>"]
132
+ all_tokens = [[sot_token] + _tokenizer.encode(text) + [eot_token] for text in texts]
133
+ result = torch.zeros(len(all_tokens), context_length, dtype=torch.long)
134
+
135
+ for i, tokens in enumerate(all_tokens):
136
+ if len(tokens) > context_length:
137
+ raise RuntimeError(f"Input {texts[i]} is too long for context length {context_length}")
138
+ result[i, :len(tokens)] = torch.tensor(tokens)
139
+
140
+ return result
CLIP/model-card.md ADDED
@@ -0,0 +1,118 @@
1
+ # Model Card: CLIP
2
+
3
+ Inspired by [Model Cards for Model Reporting (Mitchell et al.)](https://arxiv.org/abs/1810.03993) and [Lessons from Archives (Jo & Gebru)](https://arxiv.org/pdf/1912.10389.pdf), we’re providing some accompanying information about the multimodal model.
4
+
5
+ ## Model Details
6
+
7
+ The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within.
8
+
9
+ ### Model Date
10
+
11
+ January 2021
12
+
13
+ ### Model Type
14
+
15
+ The base model uses a ResNet50 with several modifications as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. There is also a variant of the model where the ResNet image encoder is replaced with a Vision Transformer.
16
+
17
+ ### Model Version
18
+
19
+ Initially we’ve released one CLIP model based on the Vision Transformer architecture equivalent to ViT-B/32
20
+
21
+ Please see the paper linked below for further details about their specification.
22
+
23
+ ### Documents
24
+
25
+ - [Blog Post](https://openai.com/blog/clip/)
26
+ - [CLIP Paper](https://cdn.openai.com/papers/Learning_Transferable_Visual_Models_From_Natural_Language_Supervision.pdf)
27
+
28
+
29
+
30
+ ## Model Use
31
+
32
+ ### Intended Use
33
+
34
+ The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis.
35
+
36
+ #### Primary intended uses
37
+
38
+ The primary intended users of these models are AI researchers.
39
+
40
+ We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models.
41
+
42
+ ### Out-of-Scope Use Cases
43
+
44
+ **Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful.
45
+
46
+ Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use.
47
+
48
+ Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases.
49
+
50
+
51
+
52
+ ## Data
53
+
54
+ The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users.
55
+
56
+ ### Data Mission Statement
57
+
58
+ Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset.
59
+
60
+
61
+
62
+ ## Performance and Limitations
63
+
64
+ ### Performance
65
+
66
+ We have evaluated the performance of CLIP on a wide range of benchmarks spanning a variety of computer vision tasks, from OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets:
67
+
68
+ - Food101
69
+ - CIFAR10
70
+ - CIFAR100
71
+ - Birdsnap
72
+ - SUN397
73
+ - Stanford Cars
74
+ - FGVC Aircraft
75
+ - VOC2007
76
+ - DTD
77
+ - Oxford-IIIT Pet dataset
78
+ - Caltech101
79
+ - Flowers102
80
+ - MNIST
81
+ - SVHN
82
+ - IIIT5K
83
+ - Hateful Memes
84
+ - SST-2
85
+ - UCF101
86
+ - Kinetics700
87
+ - Country211
88
+ - CLEVR Counting
89
+ - KITTI Distance
90
+ - STL-10
91
+ - RareAct
92
+ - Flickr30
93
+ - MSCOCO
94
+ - ImageNet
95
+ - ImageNet-A
96
+ - ImageNet-R
97
+ - ImageNet Sketch
98
+ - ObjectNet (ImageNet Overlap)
99
+ - Youtube-BB
100
+ - ImageNet-Vid
101
+
102
+ ## Limitations
103
+
104
+ CLIP and our analysis of it have a number of limitations. CLIP currently struggles with certain tasks such as fine-grained classification and counting objects. CLIP also poses issues with regard to fairness and bias, which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP has an important limitation: in many cases we have used linear probes to evaluate the performance of CLIP, and there is evidence suggesting that linear probes can underestimate model performance.
105
+
106
+ ### Bias and Fairness
107
+
108
+ We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. (Details captured in the Broader Impacts Section in the paper).
109
+
110
+ We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks.
111
+
112
+
113
+
114
+ ## Feedback
115
+
116
+ ### Where to send questions or comments about the model
117
+
118
+ Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9)
CLIP/model.py ADDED
@@ -0,0 +1,461 @@
1
+ from collections import OrderedDict
2
+ from typing import Tuple, Union
3
+
4
+ import torch
5
+ import torch.nn.functional as F
6
+ from torch import nn
7
+
8
+
9
+ class Bottleneck(nn.Module):
10
+ expansion = 4
11
+
12
+ def __init__(self, inplanes, planes, stride=1):
13
+ super().__init__()
14
+
15
+ # all conv layers have stride 1. an avgpool is performed after the second convolution when stride > 1
16
+ self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False)
17
+ self.bn1 = nn.BatchNorm2d(planes)
18
+
19
+ self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False)
20
+ self.bn2 = nn.BatchNorm2d(planes)
21
+
22
+ self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity()
23
+
24
+ self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False)
25
+ self.bn3 = nn.BatchNorm2d(planes * self.expansion)
26
+
27
+ self.relu = nn.ReLU(inplace=True)
28
+ self.downsample = None
29
+ self.stride = stride
30
+
31
+ if stride > 1 or inplanes != planes * Bottleneck.expansion:
32
+ # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1
33
+ self.downsample = nn.Sequential(OrderedDict([
34
+ ("-1", nn.AvgPool2d(stride)),
35
+ ("0", nn.Conv2d(inplanes, planes * self.expansion, 1, stride=1, bias=False)),
36
+ ("1", nn.BatchNorm2d(planes * self.expansion))
37
+ ]))
38
+
39
+ def forward(self, x: torch.Tensor):
40
+ identity = x
41
+
42
+ out = self.relu(self.bn1(self.conv1(x)))
43
+ out = self.relu(self.bn2(self.conv2(out)))
44
+ out = self.avgpool(out)
45
+ out = self.bn3(self.conv3(out))
46
+
47
+ if self.downsample is not None:
48
+ identity = self.downsample(x)
49
+
50
+ out += identity
51
+ out = self.relu(out)
52
+ return out
53
+
54
+
55
+ class AttentionPool2d(nn.Module):
56
+ def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None):
57
+ super().__init__()
58
+ self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5)
59
+ self.k_proj = nn.Linear(embed_dim, embed_dim)
60
+ self.q_proj = nn.Linear(embed_dim, embed_dim)
61
+ self.v_proj = nn.Linear(embed_dim, embed_dim)
62
+ self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim)
63
+ self.num_heads = num_heads
64
+
65
+ def forward(self, x):
66
+ x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3]).permute(2, 0, 1) # NCHW -> (HW)NC
67
+ x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC
68
+ x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC
69
+ x, _ = F.multi_head_attention_forward(
70
+ query=x, key=x, value=x,
71
+ embed_dim_to_check=x.shape[-1],
72
+ num_heads=self.num_heads,
73
+ q_proj_weight=self.q_proj.weight,
74
+ k_proj_weight=self.k_proj.weight,
75
+ v_proj_weight=self.v_proj.weight,
76
+ in_proj_weight=None,
77
+ in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]),
78
+ bias_k=None,
79
+ bias_v=None,
80
+ add_zero_attn=False,
81
+ dropout_p=0,
82
+ out_proj_weight=self.c_proj.weight,
83
+ out_proj_bias=self.c_proj.bias,
84
+ use_separate_proj_weight=True,
85
+ training=self.training,
86
+ need_weights=False
87
+ )
88
+
89
+ return x[0]
90
+
91
+
92
+ class ModifiedResNet(nn.Module):
93
+ """
94
+ A ResNet class that is similar to torchvision's but contains the following changes:
95
+ - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool.
96
+ - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1
97
+ - The final pooling layer is a QKV attention instead of an average pool
98
+ """
99
+
100
+ def __init__(self, layers, output_dim, heads, input_resolution=224, width=64):
101
+ super().__init__()
102
+ self.output_dim = output_dim
103
+ self.input_resolution = input_resolution
104
+
105
+ # the 3-layer stem
106
+ self.conv1 = nn.Conv2d(3, width // 2, kernel_size=3, stride=2, padding=1, bias=False)
107
+ self.bn1 = nn.BatchNorm2d(width // 2)
108
+ self.conv2 = nn.Conv2d(width // 2, width // 2, kernel_size=3, padding=1, bias=False)
109
+ self.bn2 = nn.BatchNorm2d(width // 2)
110
+ self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False)
111
+ self.bn3 = nn.BatchNorm2d(width)
112
+ self.avgpool = nn.AvgPool2d(2)
113
+ self.relu = nn.ReLU(inplace=True)
114
+
115
+ # residual layers
116
+ self._inplanes = width # this is a *mutable* variable used during construction
117
+ self.layer1 = self._make_layer(width, layers[0])
118
+ self.layer2 = self._make_layer(width * 2, layers[1], stride=2)
119
+ self.layer3 = self._make_layer(width * 4, layers[2], stride=2)
120
+ self.layer4 = self._make_layer(width * 8, layers[3], stride=2)
121
+
122
+ embed_dim = width * 32 # the ResNet feature dimension
123
+ self.attnpool = AttentionPool2d(input_resolution // 32, embed_dim, heads, output_dim)
124
+
125
+ def _make_layer(self, planes, blocks, stride=1):
126
+ layers = [Bottleneck(self._inplanes, planes, stride)]
127
+
128
+ self._inplanes = planes * Bottleneck.expansion
129
+ for _ in range(1, blocks):
130
+ layers.append(Bottleneck(self._inplanes, planes))
131
+
132
+ return nn.Sequential(*layers)
133
+
134
+ def forward(self, x):
135
+ def stem(x):
136
+ for conv, bn in [(self.conv1, self.bn1), (self.conv2, self.bn2), (self.conv3, self.bn3)]:
137
+ x = self.relu(bn(conv(x)))
138
+ x = self.avgpool(x)
139
+ return x
140
+
141
+ x = x.type(self.conv1.weight.dtype)
142
+ x = stem(x)
143
+ x = self.layer1(x)
144
+ x = self.layer2(x)
145
+ x = self.layer3(x)
146
+
147
+
148
+ #x = self.layer4(x)
149
+ #print(x.shape)
150
+ #x = self.attnpool(x)
151
+
152
+ return x
153
+
154
+
155
+ class LayerNorm(nn.LayerNorm):
156
+ """Subclass torch's LayerNorm to handle fp16."""
157
+
158
+ def forward(self, x: torch.Tensor):
159
+ orig_type = x.dtype
160
+ ret = super().forward(x.type(torch.float32))
161
+ return ret.type(orig_type)
162
+
163
+
164
+ class QuickGELU(nn.Module):
165
+ def forward(self, x: torch.Tensor):
166
+ return x * torch.sigmoid(1.702 * x)
167
+
168
+
169
+ class ResidualAttentionBlock(nn.Module):
170
+ def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None):
171
+ super().__init__()
172
+
173
+ self.attn = nn.MultiheadAttention(d_model, n_head)
174
+ self.ln_1 = LayerNorm(d_model)
175
+ self.mlp = nn.Sequential(OrderedDict([
176
+ ("c_fc", nn.Linear(d_model, d_model * 4)),
177
+ ("gelu", QuickGELU()),
178
+ ("c_proj", nn.Linear(d_model * 4, d_model))
179
+ ]))
180
+ self.ln_2 = LayerNorm(d_model)
181
+ self.attn_mask = attn_mask
182
+
183
+ def attention(self, x: torch.Tensor):
184
+ self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None
185
+ return self.attn(x, x, x, need_weights=True, attn_mask=self.attn_mask)
186
+
187
+ def forward(self, x: torch.Tensor):
188
+ attention_res = self.attention(self.ln_1(x))
189
+ x, weight = x+attention_res[0], attention_res[1]
190
+ x = x + self.mlp(self.ln_2(x))
191
+ return x, weight
192
+
193
+ class ResidualAttentionBlock_old(nn.Module):
194
+ def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None):
195
+ super().__init__()
196
+
197
+ self.attn = nn.MultiheadAttention(d_model, n_head)
198
+ self.ln_1 = LayerNorm(d_model)
199
+ self.mlp = nn.Sequential(OrderedDict([
200
+ ("c_fc", nn.Linear(d_model, d_model * 4)),
201
+ ("gelu", QuickGELU()),
202
+ ("c_proj", nn.Linear(d_model * 4, d_model))
203
+ ]))
204
+ self.ln_2 = LayerNorm(d_model)
205
+ self.attn_mask = attn_mask
206
+
207
+ def attention(self, x: torch.Tensor):
208
+ self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None
209
+ return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
210
+
211
+ def forward(self, x: torch.Tensor):
212
+ x = x + self.attention(self.ln_1(x))
213
+ x = x + self.mlp(self.ln_2(x))
214
+ return x
215
+
216
+
217
+ class Transformer(nn.Module):
218
+ def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None):
219
+ super().__init__()
220
+ self.width = width
221
+ self.layers = layers
222
+ self.resblocks = nn.Sequential(*[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)])
223
+
224
+ def forward(self, x: torch.Tensor):
225
+ weights = []
226
+ r=0
227
+
228
+ for block in self.resblocks:
229
+ #if r<=10:
230
+ # for param in block.parameters():
231
+ # param.requires_grad = False
232
+ #if r%2==0:
233
+
234
+ x, weight = block(x)
235
+ weights.append(weight)
236
+ #print("r=",r)
237
+ #if r==5:
238
+ # break
239
+ #r = r + 1
240
+
241
+ return x, weights
242
+
243
+ ### Old Transformer encoder without attention weights
244
+ class Transformer_Ecnoder_clip(nn.Module):
245
+ def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None):
246
+ super().__init__()
247
+ self.width = width
248
+ self.layers = layers
249
+ self.resblocks = nn.Sequential(*[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)])
250
+
251
+ def forward(self, x: torch.Tensor):
252
+ return self.resblocks(x)
253
+
254
+
255
+ class VisualTransformer(nn.Module):
256
+ def __init__(self, input_resolution: int, patch_size: int, width: int, layers: int, heads: int, output_dim: int):
257
+ super().__init__()
258
+ self.input_resolution = input_resolution
259
+ self.output_dim = output_dim
260
+ self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False)
261
+
262
+ scale = width ** -0.5
263
+ self.class_embedding = nn.Parameter(scale * torch.randn(width))
264
+ self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width))
265
+ self.ln_pre = LayerNorm(width)
266
+
267
+ self.transformer = Transformer(width, layers, heads)
268
+
269
+ self.ln_post = LayerNorm(width)
270
+ self.proj = nn.Parameter(scale * torch.randn(width, 512))
271
+
272
+ def forward(self, x: torch.Tensor):
273
+ x = self.conv1(x) # shape = [*, width, grid, grid]
274
+ x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2]
275
+ x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width]
276
+ x = torch.cat([self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width]
277
+
278
+
279
+ x = x + self.positional_embedding.to(x.dtype)
280
+ x = self.ln_pre(x)
281
+
282
+ x = x.permute(1, 0, 2) # NLD -> LND
283
+ x,weight = self.transformer(x)
284
+ x = x.permute(1, 0, 2) # LND -> NLD
285
+ #hide_feat=x
286
+ #x = self.ln_post(x[:, 0, :])
287
+ #x=self.ln_post(x)
288
+ if self.proj is not None:
289
+ hide_feat=self.ln_post(x) @ self.proj
290
+ x = self.ln_post(x[:, 0, :]) @ self.proj
291
+ #print(hide_feat.shape)
292
+
293
+ return x,weight,hide_feat
294
+
295
+
296
+ class CLIP(nn.Module):
297
+ def __init__(self,
298
+ embed_dim: int,
299
+ # vision
300
+ image_resolution: int,
301
+ vision_layers: Union[Tuple[int, int, int, int], int],
302
+ vision_width: int,
303
+ vision_patch_size: int,
304
+ # text
305
+ context_length: int,
306
+ vocab_size: int,
307
+ transformer_width: int,
308
+ transformer_heads: int,
309
+ transformer_layers: int
310
+ ):
311
+ super().__init__()
312
+
313
+ self.context_length = context_length
314
+
315
+ if isinstance(vision_layers, (tuple, list)):
316
+ vision_heads = vision_width * 32 // 64
317
+ self.visual = ModifiedResNet(
318
+ layers=vision_layers,
319
+ output_dim=embed_dim,
320
+ heads=vision_heads,
321
+ input_resolution=image_resolution,
322
+ width=vision_width
323
+ )
324
+ else:
325
+ vision_heads = vision_width // 64
326
+ self.visual = VisualTransformer(
327
+ input_resolution=image_resolution,
328
+ patch_size=vision_patch_size,
329
+ width=vision_width,
330
+ layers=vision_layers,
331
+ heads=vision_heads,
332
+ output_dim=embed_dim
333
+ )
334
+
335
+ self.transformer = Transformer(
336
+ width=transformer_width,
337
+ layers=transformer_layers,
338
+ heads=transformer_heads,
339
+ attn_mask=self.build_attention_mask()
340
+ )
341
+
342
+ self.vocab_size = vocab_size
343
+ self.token_embedding = nn.Embedding(vocab_size, transformer_width)
344
+ self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width))
345
+ self.ln_final = LayerNorm(transformer_width)
346
+
347
+ self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim))
348
+ self.logit_scale = nn.Parameter(torch.ones([]))
349
+
350
+ def build_attention_mask(self):
351
+ # lazily create causal attention mask, with full attention between the vision tokens
352
+ # pytorch uses additive attention mask; fill with -inf
353
+ mask = torch.empty(self.context_length, self.context_length)
354
+ mask.fill_(float("-inf"))
355
+ mask.triu_(1) # zero out the lower diagonal
356
+ return mask
357
+
358
+ @property
359
+ def dtype(self):
360
+ return self.visual.conv1.weight.dtype
361
+
362
+ def encode_image(self, image):
363
+ return self.visual(image.type(self.dtype))
364
+
365
+ def encode_text(self, text):
366
+ x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model]
367
+
368
+ x = x + self.positional_embedding.type(self.dtype)
369
+ x = x.permute(1, 0, 2) # NLD -> LND
370
+ x,weight = self.transformer(x)
371
+ x = x.permute(1, 0, 2) # LND -> NLD
372
+ x = self.ln_final(x).type(self.dtype)
373
+
374
+ # x.shape = [batch_size, n_ctx, transformer.width]
375
+ # take features from the eot embedding (eot_token is the highest number in each sequence)
376
+ hide_feat=x
377
+ x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection
378
+
379
+ return x,weight,hide_feat
380
+
381
+ def forward(self, image, text):
382
+ image_features,weight_image,hide_image = self.encode_image(image)
383
+ text_features,weight_text,hide_text = self.encode_text(text)
384
+
385
+ # normalized features
386
+ image_features = image_features / image_features.norm(dim=-1, keepdim=True)
387
+ text_features = text_features / text_features.norm(dim=-1, keepdim=True)
388
+
389
+ # cosine similarity as logits
390
+ logit_scale = self.logit_scale.exp()
391
+ logits_per_image = logit_scale * image_features @ text_features.t()
392
+ logits_per_text = logit_scale * text_features @ image_features.t()
393
+
394
+
395
+
396
+
397
+ # shape = [global_batch_size, global_batch_size]
398
+ #return image_features, text_features, logits_per_image, logits_per_text, hide_image, hide_text
399
+ return image_features, text_features,hide_image,hide_text
400
+
401
+ def convert_weights(model: nn.Module):
402
+ """Convert applicable model parameters to fp16"""
403
+
404
+ def _convert_weights_to_fp16(l):
405
+ if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)):
406
+ l.weight.data = l.weight.data.half()
407
+ if l.bias is not None:
408
+ l.bias.data = l.bias.data.half()
409
+
410
+ if isinstance(l, nn.MultiheadAttention):
411
+ for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]:
412
+ tensor = getattr(l, attr)
413
+ if tensor is not None:
414
+ tensor.data = tensor.data.half()
415
+
416
+ for name in ["text_projection", "proj"]:
417
+ if hasattr(l, name):
418
+ attr = getattr(l, name)
419
+ if attr is not None:
420
+ attr.data = attr.data.half()
421
+
422
+ model.apply(_convert_weights_to_fp16)
423
+
424
+
425
+ def build_model(state_dict: dict):
426
+ vit = "visual.proj" in state_dict
427
+
428
+ if vit:
429
+ vision_width = state_dict["visual.conv1.weight"].shape[0]
430
+ vision_layers = len([k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")])
431
+ vision_patch_size = state_dict["visual.conv1.weight"].shape[-1]
432
+ grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5)
433
+ image_resolution = vision_patch_size * grid_size
434
+ else:
435
+ counts: list = [len(set(k.split(".")[2] for k in state_dict if k.startswith(f"visual.layer{b}"))) for b in [1, 2, 3, 4]]
436
+ vision_layers = tuple(counts)
437
+ vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0]
438
+ output_width = round((state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5)
439
+ vision_patch_size = None
440
+ assert output_width ** 2 + 1 == state_dict["visual.attnpool.positional_embedding"].shape[0]
441
+ image_resolution = output_width * 32
442
+
443
+ embed_dim = state_dict["text_projection"].shape[1]
444
+ context_length = state_dict["positional_embedding"].shape[0]
445
+ vocab_size = state_dict["token_embedding.weight"].shape[0]
446
+ transformer_width = state_dict["ln_final.weight"].shape[0]
447
+ transformer_heads = transformer_width // 64
448
+ transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith(f"transformer.resblocks")))
449
+
450
+ model = CLIP(
451
+ embed_dim,
452
+ image_resolution, vision_layers, vision_width, vision_patch_size,
453
+ context_length, vocab_size, transformer_width, transformer_heads, transformer_layers
454
+ )
455
+
456
+ for key in ["input_resolution", "context_length", "vocab_size"]:
457
+ del state_dict[key]
458
+
459
+ convert_weights(model)
460
+ model.load_state_dict(state_dict)
461
+ return model.eval()
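For reference, a standalone sketch of the additive causal mask produced by `build_attention_mask()` above, using an illustrative context length of 4 instead of 77:

```python
import torch

context_length = 4  # illustrative; CLIP uses 77
mask = torch.empty(context_length, context_length)
mask.fill_(float("-inf"))
mask.triu_(1)  # keep -inf strictly above the diagonal, zero on and below it
print(mask)
# tensor([[0., -inf, -inf, -inf],
#         [0., 0., -inf, -inf],
#         [0., 0., 0., -inf],
#         [0., 0., 0., 0.]])
```

Because PyTorch's attention uses additive masks, the `-inf` entries prevent each text token from attending to positions after it.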
CLIP/model_moe.py ADDED
@@ -0,0 +1,498 @@
1
+ from collections import OrderedDict
2
+ from typing import Tuple, Union
3
+
4
+ import torch
5
+ import torch.nn.functional as F
6
+ from torch import nn
7
+ from mixture_of_experts import MoE
8
+
9
+
10
+ class Bottleneck(nn.Module):
11
+ expansion = 4
12
+
13
+ def __init__(self, inplanes, planes, stride=1):
14
+ super().__init__()
15
+
16
+ # all conv layers have stride 1. an avgpool is performed after the second convolution when stride > 1
17
+ self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False)
18
+ self.bn1 = nn.BatchNorm2d(planes)
19
+
20
+ self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False)
21
+ self.bn2 = nn.BatchNorm2d(planes)
22
+
23
+ self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity()
24
+
25
+ self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False)
26
+ self.bn3 = nn.BatchNorm2d(planes * self.expansion)
27
+
28
+ self.relu = nn.ReLU(inplace=True)
29
+ self.downsample = None
30
+ self.stride = stride
31
+
32
+ if stride > 1 or inplanes != planes * Bottleneck.expansion:
33
+ # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1
34
+ self.downsample = nn.Sequential(OrderedDict([
35
+ ("-1", nn.AvgPool2d(stride)),
36
+ ("0", nn.Conv2d(inplanes, planes * self.expansion, 1, stride=1, bias=False)),
37
+ ("1", nn.BatchNorm2d(planes * self.expansion))
38
+ ]))
39
+
40
+ def forward(self, x: torch.Tensor):
41
+ identity = x
42
+
43
+ out = self.relu(self.bn1(self.conv1(x)))
44
+ out = self.relu(self.bn2(self.conv2(out)))
45
+ out = self.avgpool(out)
46
+ out = self.bn3(self.conv3(out))
47
+
48
+ if self.downsample is not None:
49
+ identity = self.downsample(x)
50
+
51
+ out += identity
52
+ out = self.relu(out)
53
+ return out
54
+
55
+
56
+ class AttentionPool2d(nn.Module):
57
+ def __init__(self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None):
58
+ super().__init__()
59
+ self.positional_embedding = nn.Parameter(torch.randn(spacial_dim ** 2 + 1, embed_dim) / embed_dim ** 0.5)
60
+ self.k_proj = nn.Linear(embed_dim, embed_dim)
61
+ self.q_proj = nn.Linear(embed_dim, embed_dim)
62
+ self.v_proj = nn.Linear(embed_dim, embed_dim)
63
+ self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim)
64
+ self.num_heads = num_heads
65
+
66
+ def forward(self, x):
67
+ x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3]).permute(2, 0, 1) # NCHW -> (HW)NC
68
+ x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC
69
+ x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC
70
+ x, _ = F.multi_head_attention_forward(
71
+ query=x, key=x, value=x,
72
+ embed_dim_to_check=x.shape[-1],
73
+ num_heads=self.num_heads,
74
+ q_proj_weight=self.q_proj.weight,
75
+ k_proj_weight=self.k_proj.weight,
76
+ v_proj_weight=self.v_proj.weight,
77
+ in_proj_weight=None,
78
+ in_proj_bias=torch.cat([self.q_proj.bias, self.k_proj.bias, self.v_proj.bias]),
79
+ bias_k=None,
80
+ bias_v=None,
81
+ add_zero_attn=False,
82
+ dropout_p=0,
83
+ out_proj_weight=self.c_proj.weight,
84
+ out_proj_bias=self.c_proj.bias,
85
+ use_separate_proj_weight=True,
86
+ training=self.training,
87
+ need_weights=False
88
+ )
89
+
90
+ return x[0]
91
+
92
+
93
+ class ModifiedResNet(nn.Module):
94
+ """
95
+ A ResNet class that is similar to torchvision's but contains the following changes:
96
+ - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool.
97
+ - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1
98
+ - The final pooling layer is a QKV attention instead of an average pool
99
+ """
100
+
101
+ def __init__(self, layers, output_dim, heads, input_resolution=224, width=64):
102
+ super().__init__()
103
+ self.output_dim = output_dim
104
+ self.input_resolution = input_resolution
105
+
106
+ # the 3-layer stem
107
+ self.conv1 = nn.Conv2d(3, width // 2, kernel_size=3, stride=2, padding=1, bias=False)
108
+ self.bn1 = nn.BatchNorm2d(width // 2)
109
+ self.conv2 = nn.Conv2d(width // 2, width // 2, kernel_size=3, padding=1, bias=False)
110
+ self.bn2 = nn.BatchNorm2d(width // 2)
111
+ self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False)
112
+ self.bn3 = nn.BatchNorm2d(width)
113
+ self.avgpool = nn.AvgPool2d(2)
114
+ self.relu = nn.ReLU(inplace=True)
115
+
116
+ # residual layers
117
+ self._inplanes = width # this is a *mutable* variable used during construction
118
+ self.layer1 = self._make_layer(width, layers[0])
119
+ self.layer2 = self._make_layer(width * 2, layers[1], stride=2)
120
+ self.layer3 = self._make_layer(width * 4, layers[2], stride=2)
121
+ self.layer4 = self._make_layer(width * 8, layers[3], stride=2)
122
+
123
+ embed_dim = width * 32 # the ResNet feature dimension
124
+ self.attnpool = AttentionPool2d(input_resolution // 32, embed_dim, heads, output_dim)
125
+
126
+ def _make_layer(self, planes, blocks, stride=1):
127
+ layers = [Bottleneck(self._inplanes, planes, stride)]
128
+
129
+ self._inplanes = planes * Bottleneck.expansion
130
+ for _ in range(1, blocks):
131
+ layers.append(Bottleneck(self._inplanes, planes))
132
+
133
+ return nn.Sequential(*layers)
134
+
135
+ def forward(self, x):
136
+ def stem(x):
137
+ for conv, bn in [(self.conv1, self.bn1), (self.conv2, self.bn2), (self.conv3, self.bn3)]:
138
+ x = self.relu(bn(conv(x)))
139
+ x = self.avgpool(x)
140
+ return x
141
+
142
+ x = x.type(self.conv1.weight.dtype)
143
+ x = stem(x)
144
+ x = self.layer1(x)
145
+ x = self.layer2(x)
146
+ x = self.layer3(x)
147
+
148
+
149
+ #x = self.layer4(x)
150
+ #print(x.shape)
151
+ #x = self.attnpool(x)
152
+
153
+ return x
154
+
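Note that in this modified `ModifiedResNet.forward`, `layer4` and the attention pooling are commented out, so the network returns the `layer3` feature map rather than a pooled embedding. A quick shape check, assuming RN50-style defaults (`layers=(3, 4, 6, 3)`, `width=64`, 224-px input) and that the classes above are in scope; the numbers are an illustration of those defaults, not a measured result:

```python
import torch

# Assumes ModifiedResNet (and Bottleneck/AttentionPool2d) from this file are importable.
model = ModifiedResNet(layers=(3, 4, 6, 3), output_dim=1024, heads=32,
                       input_resolution=224, width=64)
with torch.no_grad():
    feats = model(torch.randn(1, 3, 224, 224))

# The stem downsamples 4x, layer2 and layer3 downsample 2x each -> 224 / 16 = 14.
# Channels after layer3: width * 4 (planes) * 4 (Bottleneck expansion) = 1024.
print(feats.shape)  # expected: torch.Size([1, 1024, 14, 14])
```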
155
+
156
+ class LayerNorm(nn.LayerNorm):
157
+ """Subclass torch's LayerNorm to handle fp16."""
158
+
159
+ def forward(self, x: torch.Tensor):
160
+ orig_type = x.dtype
161
+ ret = super().forward(x.type(torch.float32))
162
+ return ret.type(orig_type)
163
+
164
+
165
+ class QuickGELU(nn.Module):
166
+ def forward(self, x: torch.Tensor):
167
+ return x * torch.sigmoid(1.702 * x)
168
+
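`QuickGELU` is the sigmoid approximation `x * sigmoid(1.702 * x)` used by the original CLIP weights in place of the exact (erf-based) GELU. A quick numerical comparison (illustrative only):

```python
import torch
import torch.nn as nn

x = torch.linspace(-3, 3, 7)
quick = x * torch.sigmoid(1.702 * x)   # QuickGELU approximation
exact = nn.GELU()(x)                   # exact, erf-based GELU
print(torch.max(torch.abs(quick - exact)))  # small -- on the order of 1e-2
```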
169
+
170
+ class ResidualAttentionBlock(nn.Module):
171
+ def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None):
172
+ super().__init__()
173
+
174
+ self.attn = nn.MultiheadAttention(d_model, n_head)
175
+ self.ln_1 = LayerNorm(d_model)
176
+ self.mlp = nn.Sequential(OrderedDict([
177
+ ("c_fc", nn.Linear(d_model, d_model * 4)),
178
+ ("gelu", QuickGELU()),
179
+ ("c_proj", nn.Linear(d_model * 4, d_model))
180
+ ]))
181
+ self.ln_2 = LayerNorm(d_model)
182
+ self.attn_mask = attn_mask
183
+
184
+ def attention(self, x: torch.Tensor):
185
+ self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None
186
+ return self.attn(x, x, x, need_weights=True, attn_mask=self.attn_mask)
187
+
188
+ def forward(self, x: torch.Tensor):
189
+ attention_res = self.attention(self.ln_1(x))
190
+ x, weight = x + attention_res[0], attention_res[1]
191
+ x = x + self.mlp(self.ln_2(x))
192
+ return x, weight
193
+
194
+
195
+ class ResidualAttentionBlock_MOE(nn.Module):
196
+ def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None):
197
+ super().__init__()
198
+
199
+ self.attn = nn.MultiheadAttention(d_model, n_head)
200
+ self.ln_1 = LayerNorm(d_model)
201
+ self.mlp = MoE(
202
+ dim = d_model, # must match the block width for the residual connection (was hard-coded to 512)
203
+ num_experts = 16, # increase the experts (# parameters) of your model without increasing computation
204
+ hidden_dim = 512 * 4, # size of hidden dimension in each expert, defaults to 4 * dimension
205
+ activation = nn.LeakyReLU, # use your preferred activation, will default to GELU
206
+ second_policy_train = 'random', # in top_2 gating, policy for whether to use a second-place expert
207
+ second_policy_eval = 'random', # all (always) | none (never) | threshold (if gate value > the given threshold) | random (if gate value > threshold * random_uniform(0, 1))
208
+ second_threshold_train = 0.2,
209
+ second_threshold_eval = 0.2,
210
+ capacity_factor_train = 1.25, # experts have fixed capacity per batch. we need some extra capacity in case gating is not perfectly balanced.
211
+ capacity_factor_eval = 2., # capacity_factor_* should be set to a value >=1
212
+ loss_coef = 1e-2 # multiplier on the expert-balancing auxiliary loss
213
+ )
214
+
215
+ self.ln_2 = LayerNorm(d_model)
216
+ self.attn_mask = attn_mask
217
+
218
+ def attention(self, x: torch.Tensor):
219
+ self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None
220
+ return self.attn(x, x, x, need_weights=True, attn_mask=self.attn_mask)
221
+
222
+ def forward(self, x: torch.Tensor):
223
+ attention_res = self.attention(self.ln_1(x))
224
+ x, weight = x + attention_res[0], attention_res[1]
225
+ moe_out, aux_loss = self.mlp(self.ln_2(x)) # the MoE layer returns (output, balancing loss); aux_loss is discarded here
+ x = x + moe_out
226
+ return x, weight
227
+
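The keyword arguments in `ResidualAttentionBlock_MOE` (`num_experts`, `second_policy_train`, `capacity_factor_train`, `loss_coef`, ...) match the `MoE` class from the `mixture-of-experts` package, which is presumably what this file imports; that class returns an `(output, auxiliary loss)` tuple rather than a single tensor, which is why the forward pass above unpacks it. A hedged usage sketch under that assumption:

```python
import torch
from mixture_of_experts import MoE  # assumed dependency: pip install mixture-of-experts

moe = MoE(dim=512, num_experts=16, hidden_dim=512 * 4)
tokens = torch.randn(1, 50, 512)   # (batch, sequence, dim)
out, aux_loss = moe(tokens)        # expert-mixed output plus a load-balancing loss
print(out.shape, aux_loss.item())  # torch.Size([1, 50, 512]) and a scalar
```

In training code the auxiliary loss is normally added to the main objective so the gate spreads tokens across experts; the block above simply drops it. Note also that the transformer here keeps activations in (sequence, batch, dim) order, so a permute around the MoE may be needed if per-example gating statistics matter.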
228
+
229
+
230
+ class ResidualAttentionBlock_old(nn.Module):
231
+ def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None):
232
+ super().__init__()
233
+
234
+ self.attn = nn.MultiheadAttention(d_model, n_head)
235
+ self.ln_1 = LayerNorm(d_model)
236
+ self.mlp = nn.Sequential(OrderedDict([
237
+ ("c_fc", nn.Linear(d_model, d_model * 4)),
238
+ ("gelu", QuickGELU()),
239
+ ("c_proj", nn.Linear(d_model * 4, d_model))
240
+ ]))
241
+ self.ln_2 = LayerNorm(d_model)
242
+ self.attn_mask = attn_mask
243
+
244
+ def attention(self, x: torch.Tensor):
245
+ self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None
246
+ return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]
247
+
248
+ def forward(self, x: torch.Tensor):
249
+ x = x + self.attention(self.ln_1(x))
250
+ x = x + self.mlp(self.ln_2(x))
251
+ return x
252
+
253
+
254
+ class Transformer(nn.Module):
255
+ def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None):
256
+ super().__init__()
257
+ self.width = width
258
+ self.layers = layers
259
+ self.resblocks = nn.Sequential(*[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)])
260
+
261
+ def forward(self, x: torch.Tensor):
262
+ weights = []
263
+ r=0
264
+
265
+ for block in self.resblocks:
266
+ #if r<=10:
267
+ # for param in block.parameters():
268
+ # param.requires_grad = False
269
+ #if r%2==0:
270
+
271
+ x, weight = block(x)
272
+ weights.append(weight)
273
+ #print("r=",r)
274
+ #if r==5:
275
+ # break
276
+ #r = r + 1
277
+
278
+ return x, weights
279
+
280
+ ### Old transformer encoder (no attention-weight outputs), kept for reference
281
+ class Transformer_Ecnoder_clip(nn.Module):
282
+ def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None):
283
+ super().__init__()
284
+ self.width = width
285
+ self.layers = layers
286
+ self.resblocks = nn.Sequential(*[ResidualAttentionBlock_old(width, heads, attn_mask) for _ in range(layers)]) # the _old blocks return a single tensor, so they compose under nn.Sequential
287
+
288
+ def forward(self, x: torch.Tensor):
289
+ return self.resblocks(x)
290
+
291
+
292
+ class VisualTransformer(nn.Module):
293
+ def __init__(self, input_resolution: int, patch_size: int, width: int, layers: int, heads: int, output_dim: int):
294
+ super().__init__()
295
+ self.input_resolution = input_resolution
296
+ self.output_dim = output_dim
297
+ self.conv1 = nn.Conv2d(in_channels=3, out_channels=width, kernel_size=patch_size, stride=patch_size, bias=False)
298
+
299
+ scale = width ** -0.5
300
+ self.class_embedding = nn.Parameter(scale * torch.randn(width))
301
+ self.positional_embedding = nn.Parameter(scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width))
302
+ self.ln_pre = LayerNorm(width)
303
+
304
+ self.transformer = Transformer(width, layers, heads)
305
+
306
+ self.ln_post = LayerNorm(width)
307
+ self.proj = nn.Parameter(scale * torch.randn(width, 512))
308
+
309
+ def forward(self, x: torch.Tensor):
310
+ x = self.conv1(x) # shape = [*, width, grid, grid]
311
+ x = x.reshape(x.shape[0], x.shape[1], -1) # shape = [*, width, grid ** 2]
312
+ x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width]
313
+ x = torch.cat([self.class_embedding.to(x.dtype) + torch.zeros(x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device), x], dim=1) # shape = [*, grid ** 2 + 1, width]
314
+
315
+
316
+ x = x + self.positional_embedding.to(x.dtype)
317
+ x = self.ln_pre(x)
318
+
319
+ x = x.permute(1, 0, 2) # NLD -> LND
320
+ x, weight = self.transformer(x)
321
+ x = x.permute(1, 0, 2) # LND -> NLD
322
+ #hide_feat=x
323
+ #x = self.ln_post(x[:, 0, :])
324
+ #x=self.ln_post(x)
325
+ if self.proj is not None:
326
+ hide_feat = self.ln_post(x) @ self.proj
327
+ x = self.ln_post(x[:, 0, :]) @ self.proj
328
+ #print(hide_feat.shape)
329
+
330
+ return x, weight, hide_feat
331
+
332
+
333
+ class CLIP(nn.Module):
334
+ def __init__(self,
335
+ embed_dim: int,
336
+ # vision
337
+ image_resolution: int,
338
+ vision_layers: Union[Tuple[int, int, int, int], int],
339
+ vision_width: int,
340
+ vision_patch_size: int,
341
+ # text
342
+ context_length: int,
343
+ vocab_size: int,
344
+ transformer_width: int,
345
+ transformer_heads: int,
346
+ transformer_layers: int
347
+ ):
348
+ super().__init__()
349
+
350
+ self.context_length = context_length
351
+
352
+ if isinstance(vision_layers, (tuple, list)):
353
+ vision_heads = vision_width * 32 // 64
354
+ self.visual = ModifiedResNet(
355
+ layers=vision_layers,
356
+ output_dim=embed_dim,
357
+ heads=vision_heads,
358
+ input_resolution=image_resolution,
359
+ width=vision_width
360
+ )
361
+ else:
362
+ vision_heads = vision_width // 64
363
+ self.visual = VisualTransformer(
364
+ input_resolution=image_resolution,
365
+ patch_size=vision_patch_size,
366
+ width=vision_width,
367
+ layers=vision_layers,
368
+ heads=vision_heads,
369
+ output_dim=embed_dim
370
+ )
371
+
372
+ self.transformer = Transformer(
373
+ width=transformer_width,
374
+ layers=transformer_layers,
375
+ heads=transformer_heads,
376
+ attn_mask=self.build_attention_mask()
377
+ )
378
+
379
+ self.vocab_size = vocab_size
380
+ self.token_embedding = nn.Embedding(vocab_size, transformer_width)
381
+ self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width))
382
+ self.ln_final = LayerNorm(transformer_width)
383
+
384
+ self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim))
385
+ self.logit_scale = nn.Parameter(torch.ones([]))
386
+
387
+ def build_attention_mask(self):
388
+ # lazily create causal attention mask, with full attention between the vision tokens
389
+ # pytorch uses additive attention mask; fill with -inf
390
+ mask = torch.empty(self.context_length, self.context_length)
391
+ mask.fill_(float("-inf"))
392
+ mask.triu_(1) # keep -inf strictly above the diagonal; the diagonal and below become zero (causal)
393
+ return mask
394
+
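`build_attention_mask` builds an additive mask: 0 where attention is allowed and `-inf` where a token would attend to a future position. For a context length of 4 it looks like this (illustrative):

```python
import torch

mask = torch.empty(4, 4)
mask.fill_(float("-inf"))
mask.triu_(1)  # future positions stay -inf, the rest becomes 0
print(mask)
# tensor([[0., -inf, -inf, -inf],
#         [0.,   0., -inf, -inf],
#         [0.,   0.,   0., -inf],
#         [0.,   0.,   0.,   0.]])
```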
395
+ @property
396
+ def dtype(self):
397
+ return self.visual.conv1.weight.dtype
398
+
399
+ def encode_image(self, image):
400
+ return self.visual(image.type(self.dtype))
401
+
402
+ def encode_text(self, text):
403
+ x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model]
404
+
405
+ x = x + self.positional_embedding.type(self.dtype)
406
+ x = x.permute(1, 0, 2) # NLD -> LND
407
+ x, weight = self.transformer(x)
408
+ x = x.permute(1, 0, 2) # LND -> NLD
409
+ x = self.ln_final(x).type(self.dtype)
410
+
411
+ # x.shape = [batch_size, n_ctx, transformer.width]
412
+ # take features from the eot embedding (eot_token is the highest number in each sequence)
413
+ hide_feat = x
414
+ x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection
415
+
416
+ return x, weight, hide_feat
417
+
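The pooling line in `encode_text` relies on `<|endoftext|>` having the largest id in the vocabulary (49407 in the released CLIP vocab, with zero padding after it), so `text.argmax(dim=-1)` returns each sequence's EOT position. A toy illustration with made-up sequences:

```python
import torch

# Toy token ids: 49407 plays the role of <|endoftext|>, 0 is padding.
text = torch.tensor([[49406, 320, 1125, 49407, 0, 0],
                     [49406, 320, 1125, 539, 2368, 49407]])
eot_positions = text.argmax(dim=-1)
print(eot_positions)  # tensor([3, 5]) -- index of the EOT token in each row

features = torch.randn(2, 6, 512)                  # [batch, n_ctx, width]
pooled = features[torch.arange(2), eot_positions]  # one vector per sequence
print(pooled.shape)                                # torch.Size([2, 512])
```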
418
+ def forward(self, image, text):
419
+ image_features, weight_image, hide_image = self.encode_image(image)
420
+ text_features, weight_text, hide_text = self.encode_text(text)
421
+
422
+ # normalized features
423
+ image_features = image_features / image_features.norm(dim=-1, keepdim=True)
424
+ text_features = text_features / text_features.norm(dim=-1, keepdim=True)
425
+
426
+ # cosine similarity as logits
427
+ logit_scale = self.logit_scale.exp()
428
+ logits_per_image = logit_scale * image_features @ text_features.t()
429
+ logits_per_text = logit_scale * text_features @ image_features.t()
430
+
431
+
432
+
433
+
434
+ # shape = [global_batch_size, global_batch_size]
435
+ #return image_features, text_features, logits_per_image, logits_per_text, hide_image, hide_text
436
+ return image_features, text_features, hide_image, hide_text
437
+
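Unlike the original CLIP, this modified `forward` returns the normalized features (plus the per-token hidden states) and leaves the logit computation, shown in the commented-out return above, to the caller. A sketch of that downstream step given features of the shape this model produces (batch size and temperature are illustrative; a trained CLIP's `logit_scale.exp()` is around 100):

```python
import torch
import torch.nn.functional as F

image_features = F.normalize(torch.randn(4, 512), dim=-1)
text_features = F.normalize(torch.randn(4, 512), dim=-1)
logit_scale = torch.tensor(100.0)  # stands in for model.logit_scale.exp()

logits_per_image = logit_scale * image_features @ text_features.t()  # [4, 4]
probs = logits_per_image.softmax(dim=-1)  # each row: one image's distribution over the 4 texts
print(probs.shape)  # torch.Size([4, 4])
```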
438
+ def convert_weights(model: nn.Module):
439
+ """Convert applicable model parameters to fp16"""
440
+
441
+ def _convert_weights_to_fp16(l):
442
+ if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)):
443
+ l.weight.data = l.weight.data.half()
444
+ if l.bias is not None:
445
+ l.bias.data = l.bias.data.half()
446
+
447
+ if isinstance(l, nn.MultiheadAttention):
448
+ for attr in [*[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], "in_proj_bias", "bias_k", "bias_v"]:
449
+ tensor = getattr(l, attr)
450
+ if tensor is not None:
451
+ tensor.data = tensor.data.half()
452
+
453
+ for name in ["text_projection", "proj"]:
454
+ if hasattr(l, name):
455
+ attr = getattr(l, name)
456
+ if attr is not None:
457
+ attr.data = attr.data.half()
458
+
459
+ model.apply(_convert_weights_to_fp16)
460
+
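`convert_weights` casts only convolution, linear and attention-projection parameters (plus the two output projections) to fp16, leaving e.g. LayerNorm and embedding weights in fp32. A quick check, assuming the `CLIP` class and `convert_weights` above are in scope and using ViT-B/32-style hyperparameters:

```python
import torch

model = CLIP(embed_dim=512, image_resolution=224, vision_layers=12, vision_width=768,
             vision_patch_size=32, context_length=77, vocab_size=49408,
             transformer_width=512, transformer_heads=8, transformer_layers=12)
convert_weights(model)
print(model.visual.conv1.weight.dtype)  # torch.float16
print(model.ln_final.weight.dtype)      # torch.float32 -- LayerNorm stays in full precision
```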
461
+
462
+ def build_model(state_dict: dict):
463
+ vit = "visual.proj" in state_dict
464
+
465
+ if vit:
466
+ vision_width = state_dict["visual.conv1.weight"].shape[0]
467
+ vision_layers = len([k for k in state_dict.keys() if k.startswith("visual.") and k.endswith(".attn.in_proj_weight")])
468
+ vision_patch_size = state_dict["visual.conv1.weight"].shape[-1]
469
+ grid_size = round((state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5)
470
+ image_resolution = vision_patch_size * grid_size
471
+ else:
472
+ counts: list = [len(set(k.split(".")[2] for k in state_dict if k.startswith(f"visual.layer{b}"))) for b in [1, 2, 3, 4]]
473
+ vision_layers = tuple(counts)
474
+ vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0]
475
+ output_width = round((state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5)
476
+ vision_patch_size = None
477
+ assert output_width ** 2 + 1 == state_dict["visual.attnpool.positional_embedding"].shape[0]
478
+ image_resolution = output_width * 32
479
+
480
+ embed_dim = state_dict["text_projection"].shape[1]
481
+ context_length = state_dict["positional_embedding"].shape[0]
482
+ vocab_size = state_dict["token_embedding.weight"].shape[0]
483
+ transformer_width = state_dict["ln_final.weight"].shape[0]
484
+ transformer_heads = transformer_width // 64
485
+ transformer_layers = len(set(k.split(".")[2] for k in state_dict if k.startswith("transformer.resblocks")))
486
+
487
+ model = CLIP(
488
+ embed_dim,
489
+ image_resolution, vision_layers, vision_width, vision_patch_size,
490
+ context_length, vocab_size, transformer_width, transformer_heads, transformer_layers
491
+ )
492
+
493
+ for key in ["input_resolution", "context_length", "vocab_size"]:
494
+ del state_dict[key]
495
+
496
+ convert_weights(model)
497
+ model.load_state_dict(state_dict)
498
+ return model.eval()
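`build_model` infers every hyperparameter from the checkpoint itself (patch size and width from `visual.conv1.weight`, depth from the number of `resblocks` keys, vocabulary size from the token embedding, and so on), so rebuilding a model only needs a state dict. A hedged sketch; the checkpoint path is hypothetical and stands for a downloaded OpenAI CLIP JIT archive such as the one `clip.load` fetches:

```python
import torch

# Hypothetical path -- point it at a downloaded CLIP checkpoint (e.g. ViT-B-32.pt).
checkpoint = torch.jit.load("ViT-B-32.pt", map_location="cpu")
model = build_model(checkpoint.state_dict())   # build_model defined above
print(model.context_length, model.vocab_size)  # 77 49408 for the released ViT-B/32
```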
CLIP/simple_tokenizer.py ADDED
@@ -0,0 +1,132 @@
1
+ import gzip
2
+ import html
3
+ import os
4
+ from functools import lru_cache
5
+
6
+ import ftfy
7
+ import regex as re
8
+
9
+
10
+ @lru_cache()
11
+ def default_bpe():
12
+ return os.path.join(os.path.dirname(os.path.abspath(__file__)), "bpe_simple_vocab_16e6.txt.gz")
13
+
14
+
15
+ @lru_cache()
16
+ def bytes_to_unicode():
17
+ """
18
+ Returns a dict mapping utf-8 bytes to unicode strings.
19
+ The reversible bpe codes work on unicode strings.
20
+ This means you need a large # of unicode characters in your vocab if you want to avoid UNKs.
21
+ When you're at something like a 10B token dataset you end up needing around 5K for decent coverage.
22
+ This is a significant percentage of your normal, say, 32K bpe vocab.
23
+ To avoid that, we want lookup tables between utf-8 bytes and unicode strings.
24
+ This also avoids mapping to whitespace/control characters that the bpe code barfs on.
25
+ """
26
+ bs = list(range(ord("!"), ord("~")+1))+list(range(ord("¡"), ord("¬")+1))+list(range(ord("®"), ord("ÿ")+1))
27
+ cs = bs[:]
28
+ n = 0
29
+ for b in range(2**8):
30
+ if b not in bs:
31
+ bs.append(b)
32
+ cs.append(2**8+n)
33
+ n += 1
34
+ cs = [chr(n) for n in cs]
35
+ return dict(zip(bs, cs))
36
+
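`bytes_to_unicode` keeps printable bytes as themselves and remaps the rest (control bytes, whitespace, a few Latin-1 gaps) to unused code points starting at 256, so every byte becomes a visible, reversible character. For example:

```python
byte_encoder = bytes_to_unicode()
print(byte_encoder[ord("A")])  # 'A'  -- printable bytes map to themselves
print(byte_encoder[ord(" ")])  # 'Ġ'  -- the space byte is remapped to chr(256 + 32)
print(byte_encoder[0])         # 'Ā'  -- chr(256), the first remapped byte
```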
37
+
38
+ def get_pairs(word):
39
+ """Return set of symbol pairs in a word.
40
+ Word is represented as tuple of symbols (symbols being variable-length strings).
41
+ """
42
+ pairs = set()
43
+ prev_char = word[0]
44
+ for char in word[1:]:
45
+ pairs.add((prev_char, char))
46
+ prev_char = char
47
+ return pairs
48
+
49
+
50
+ def basic_clean(text):
51
+ text = ftfy.fix_text(text)
52
+ text = html.unescape(html.unescape(text))
53
+ return text.strip()
54
+
55
+
56
+ def whitespace_clean(text):
57
+ text = re.sub(r'\s+', ' ', text)
58
+ text = text.strip()
59
+ return text
60
+
61
+
62
+ class SimpleTokenizer(object):
63
+ def __init__(self, bpe_path: str = default_bpe()):
64
+ self.byte_encoder = bytes_to_unicode()
65
+ self.byte_decoder = {v: k for k, v in self.byte_encoder.items()}
66
+ merges = gzip.open(bpe_path).read().decode("utf-8").split('\n')
67
+ merges = merges[1:49152-256-2+1]
68
+ merges = [tuple(merge.split()) for merge in merges]
69
+ vocab = list(bytes_to_unicode().values())
70
+ vocab = vocab + [v+'</w>' for v in vocab]
71
+ for merge in merges:
72
+ vocab.append(''.join(merge))
73
+ vocab.extend(['<|startoftext|>', '<|endoftext|>'])
74
+ self.encoder = dict(zip(vocab, range(len(vocab))))
75
+ self.decoder = {v: k for k, v in self.encoder.items()}
76
+ self.bpe_ranks = dict(zip(merges, range(len(merges))))
77
+ self.cache = {'<|startoftext|>': '<|startoftext|>', '<|endoftext|>': '<|endoftext|>'}
78
+ self.pat = re.compile(r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", re.IGNORECASE)
79
+
80
+ def bpe(self, token):
81
+ if token in self.cache:
82
+ return self.cache[token]
83
+ word = tuple(token[:-1]) + ( token[-1] + '</w>',)
84
+ pairs = get_pairs(word)
85
+
86
+ if not pairs:
87
+ return token+'</w>'
88
+
89
+ while True:
90
+ bigram = min(pairs, key = lambda pair: self.bpe_ranks.get(pair, float('inf')))
91
+ if bigram not in self.bpe_ranks:
92
+ break
93
+ first, second = bigram
94
+ new_word = []
95
+ i = 0
96
+ while i < len(word):
97
+ try:
98
+ j = word.index(first, i)
99
+ new_word.extend(word[i:j])
100
+ i = j
101
+ except:
102
+ new_word.extend(word[i:])
103
+ break
104
+
105
+ if word[i] == first and i < len(word)-1 and word[i+1] == second:
106
+ new_word.append(first+second)
107
+ i += 2
108
+ else:
109
+ new_word.append(word[i])
110
+ i += 1
111
+ new_word = tuple(new_word)
112
+ word = new_word
113
+ if len(word) == 1:
114
+ break
115
+ else:
116
+ pairs = get_pairs(word)
117
+ word = ' '.join(word)
118
+ self.cache[token] = word
119
+ return word
120
+
121
+ def encode(self, text):
122
+ bpe_tokens = []
123
+ text = whitespace_clean(basic_clean(text)).lower()
124
+ for token in re.findall(self.pat, text):
125
+ token = ''.join(self.byte_encoder[b] for b in token.encode('utf-8'))
126
+ bpe_tokens.extend(self.encoder[bpe_token] for bpe_token in self.bpe(token).split(' '))
127
+ return bpe_tokens
128
+
129
+ def decode(self, tokens):
130
+ text = ''.join([self.decoder[token] for token in tokens])
131
+ text = bytearray([self.byte_decoder[c] for c in text]).decode('utf-8', errors="replace").replace('</w>', ' ')
132
+ return text
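Putting the pieces together: `encode` cleans and lower-cases the text, applies byte-level BPE and maps the merges to ids, and `decode` reverses the process. A usage sketch; it requires `bpe_simple_vocab_16e6.txt.gz` next to this file, as `default_bpe()` expects, and note that start/end tokens are not added here (the calling code handles that):

```python
tokenizer = SimpleTokenizer()

ids = tokenizer.encode("A photo of a cat!")
print(ids)                    # BPE token ids (no <|startoftext|>/<|endoftext|> added)
print(tokenizer.decode(ids))  # roughly "a photo of a cat ! " -- lower-cased, BPE word ends become spaces
```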