---
language:
- en
license: mit
tags:
- clip
- vision
---

# CLIP Variants

_The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within._

See the original [CLIP Model Card][clip-model-card] for more details on limitations and biases.

This repository holds [OpenAI's CLIP][clip] models converted into a number of other variants; see the sections below for details.

## Disclaimer & License

I haven't tested these conversions extensively. I have briefly tried the float16 versions, which produce results very similar to the original float32 models; as expected, the similarity drops more with the qint8/quint8 versions. I couldn't run qint8 myself, as some operations appeared to be unsupported, but I'm including it for completeness. From a brief test, the quint8 versions seemed to work fine.
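
For reference, the quantized variants can be produced along the lines of ONNX Runtime's dynamic quantization. The sketch below is illustrative only; it is not necessarily the exact conversion script used for these files, and the paths are placeholders:

```python
# Illustrative sketch: dynamic weight quantization of an exported CLIP ONNX model
# with ONNX Runtime. Not necessarily the exact script used for this repository;
# the file paths are placeholders.
from onnxruntime.quantization import QuantType, quantize_dynamic

# quint8 weights (worked fine in a brief test)
quantize_dynamic(
    model_input="models/clip-vit-base-patch32-visual-float32.onnx",
    model_output="models/clip-vit-base-patch32-visual-quint8.onnx",
    weight_type=QuantType.QUInt8,
)

# qint8 weights (included for completeness; some operations may be unsupported at runtime)
quantize_dynamic(
    model_input="models/clip-vit-base-patch32-visual-float32.onnx",
    model_output="models/clip-vit-base-patch32-visual-qint8.onnx",
    weight_type=QuantType.QInt8,
)
```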

The license for the conversion code is MIT; the license for the models is the same as the original license for the OpenAI models (🤷‍♂️). I have no affiliation with OpenAI.

## Acknowledgements
* [OpenAI CLIP][clip]
* [OpenAI CLIP JavaScript by josephrocca](https://github.com/josephrocca/openai-clip-js)
* [CLIP-ONNX by Lednik7](https://github.com/Lednik7/CLIP-ONNX)
* [Exporting a Model from PyTorch to ONNX and Running it using ONNX Runtime](https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html)
* [imgbeddings by minimaxir](https://github.com/minimaxir/imgbeddings)
* ... probably more

## Example

See [example.py](./example.py) for the full script; running it produces output like this:

```
❯ python .\example.py
Loading visual model: models/clip-vit-base-patch32-visual-float16.onnx
Visual inference ready, input size 224, type tensor(float16)
Images shape: (2, 3, 224, 224)
Embeddings shape: (2, 512)

Loading textual model: models/clip-vit-base-patch32-textual-float16.onnx
Textual inference ready, input size 77, type tensor(int32)
Texts shape: (14, 77)
Embeddings shape: (14, 512)

flowers.jpg
  similarity  bar chart    text
------------  -----------  ---------------------------------------------------------------
    0.294922  >>>>>>>>     a close up photo of a cherry blossom
    0.267578  >>>>>>>>     cherry blossom
    0.249878  >>>>>>>      flowers
    0.242554  >>>>>>>      a photo taken on a bright and sunny day
    0.228882  >>>>>>       bees
    0.222778  >>>>>>       plant
    0.216187  >>>>>>       a photo taken on a dark and cloudy day
    0.201538  >>>>>>       ruhrgebiet
    0.196655  >>>>>        processing plant
    0.192139  >>>>>        a photo taken at midnight
    0.18689   >>>>>        industry
    0.177856  >>>>>        cars
    0.176636  >>>>>        dogs and cats
    0.111267  >>>          a large industrial plant with many pipes, walkways and railings

heavy-industry.jpg
  similarity  bar chart    text
------------  -----------  ---------------------------------------------------------------
    0.336182  >>>>>>>>>>   a large industrial plant with many pipes, walkways and railings
    0.316895  >>>>>>>>>    processing plant
    0.302002  >>>>>>>>>    industry
    0.27417   >>>>>>>>     ruhrgebiet
    0.254883  >>>>>>>      plant
    0.22876   >>>>>>       a photo taken on a dark and cloudy day
    0.219482  >>>>>>       a photo taken on a bright and sunny day
    0.211304  >>>>>>       a photo taken at midnight
    0.198608  >>>>>        cars
    0.190552  >>>>>        flowers
    0.181885  >>>>>        bees
    0.180542  >>>>>        cherry blossom
    0.174438  >>>>>        dogs and cats
    0.14917   >>>>         a close up photo of a cherry blossom
```
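
For a quick idea of what example.py does, here is a minimal sketch using ONNX Runtime. It assumes the image preprocessing and CLIP tokenization are handled elsewhere (as in example.py) and feeds dummy arrays with the shapes and types shown above:

```python
# Minimal sketch of running the converted models with ONNX Runtime.
# Image preprocessing and CLIP tokenization are assumed to happen elsewhere
# (see example.py); dummy arrays are used here just to show shapes and dtypes.
import numpy as np
import onnxruntime as ort

visual = ort.InferenceSession("models/clip-vit-base-patch32-visual-float16.onnx")
textual = ort.InferenceSession("models/clip-vit-base-patch32-textual-float16.onnx")

# Shapes/types as reported above: images (N, 3, 224, 224) float16,
# texts (M, 77) int32 token ids.
images = np.random.rand(2, 3, 224, 224).astype(np.float16)
texts = np.zeros((14, 77), dtype=np.int32)

image_emb = visual.run(None, {visual.get_inputs()[0].name: images})[0]
text_emb = textual.run(None, {textual.get_inputs()[0].name: texts})[0]

# Cosine similarity between L2-normalized embeddings, as in the tables above.
image_emb = image_emb / np.linalg.norm(image_emb, axis=-1, keepdims=True)
text_emb = text_emb / np.linalg.norm(text_emb, axis=-1, keepdims=True)
similarity = image_emb @ text_emb.T
print(similarity.shape)  # (2, 14)
```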

## Parameters

The only format supported right now is [Open Neural Network Exchange (ONNX)][onnx].

All the currently available OpenAI models have been converted. Some of the IDs were taken from the [OpenAI models on Hugging Face](https://huggingface.co/openai), and others were made up following the same format.

| Model name | Model ID |
| --- | --- |
| RN50 | resnet-50 |
| RN101 | resnet-101 |
| RN50x4 | resnet-50x4 |
| RN50x16 | resnet-50x16 |
| RN50x64 | resnet-50x64 |
| ViT-B/16 | vit-base-patch16 |
| ViT-B/32 | vit-base-patch32 |
| ViT-L/14 | vit-large-patch14 |
| ViT-L/14@336px | vit-large-patch14-336 |

As CLIP is a multimodal model, each original model is split into two separate "modes": one for processing images and the other for processing text.

| Mode    |
|---------|
| visual  |
| textual |

The models are provided in multiple data types as well; float32 is the original precision, and the other data types were converted from it.

| Data Type   |
|-------------|
| float32     |
| float16     |
| qint8       |
| quint8      |
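
Putting these parameters together, each file listed in the Variants table below follows the pattern `models/clip-{model id}-{mode}-{data type}.onnx`. A tiny sketch (the helper name is my own, for illustration only):

```python
# Illustrative helper (not part of this repository): build the path of a variant
# from its model ID, mode and data type, matching the Variants table below.
def clip_variant_path(model_id: str, mode: str, data_type: str) -> str:
    return f"models/clip-{model_id}-{mode}-{data_type}.onnx"

print(clip_variant_path("vit-base-patch32", "visual", "float16"))
# models/clip-vit-base-patch32-visual-float16.onnx
```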

## Variants

| Path                                                   | Model ID              | Mode    | Data Type   | Available   |   Size (MB) |
|--------------------------------------------------------|-----------------------|---------|-------------|-------------|-------------|
| models/clip-resnet-50-visual-float32.onnx              | resnet-50             | visual  | float32     | ✅          |         153 |
| models/clip-resnet-50-visual-float16.onnx              | resnet-50             | visual  | float16     | ✅          |          77 |
| models/clip-resnet-50-visual-qint8.onnx                | resnet-50             | visual  | qint8       | ✅          |          39 |
| models/clip-resnet-50-visual-quint8.onnx               | resnet-50             | visual  | quint8      | ✅          |          39 |
| models/clip-resnet-50-textual-float32.onnx             | resnet-50             | textual | float32     | ✅          |         255 |
| models/clip-resnet-50-textual-float16.onnx             | resnet-50             | textual | float16     | ✅          |         128 |
| models/clip-resnet-50-textual-qint8.onnx               | resnet-50             | textual | qint8       | ✅          |          64 |
| models/clip-resnet-50-textual-quint8.onnx              | resnet-50             | textual | quint8      | ✅          |          64 |
| models/clip-resnet-101-visual-float32.onnx             | resnet-101            | visual  | float32     | ✅          |         225 |
| models/clip-resnet-101-visual-float16.onnx             | resnet-101            | visual  | float16     | ✅          |         112 |
| models/clip-resnet-101-visual-qint8.onnx               | resnet-101            | visual  | qint8       | ✅          |          57 |
| models/clip-resnet-101-visual-quint8.onnx              | resnet-101            | visual  | quint8      | ✅          |          57 |
| models/clip-resnet-101-textual-float32.onnx            | resnet-101            | textual | float32     | ✅          |         254 |
| models/clip-resnet-101-textual-float16.onnx            | resnet-101            | textual | float16     | ✅          |         127 |
| models/clip-resnet-101-textual-qint8.onnx              | resnet-101            | textual | qint8       | ✅          |          64 |
| models/clip-resnet-101-textual-quint8.onnx             | resnet-101            | textual | quint8      | ✅          |          64 |
| models/clip-resnet-50x4-visual-float32.onnx            | resnet-50x4           | visual  | float32     | ✅          |         348 |
| models/clip-resnet-50x4-visual-float16.onnx            | resnet-50x4           | visual  | float16     | ✅          |         174 |
| models/clip-resnet-50x4-visual-qint8.onnx              | resnet-50x4           | visual  | qint8       | ✅          |          88 |
| models/clip-resnet-50x4-visual-quint8.onnx             | resnet-50x4           | visual  | quint8      | ✅          |          88 |
| models/clip-resnet-50x4-textual-float32.onnx           | resnet-50x4           | textual | float32     | ✅          |         365 |
| models/clip-resnet-50x4-textual-float16.onnx           | resnet-50x4           | textual | float16     | ✅          |         183 |
| models/clip-resnet-50x4-textual-qint8.onnx             | resnet-50x4           | textual | qint8       | ✅          |          92 |
| models/clip-resnet-50x4-textual-quint8.onnx            | resnet-50x4           | textual | quint8      | ✅          |          92 |
| models/clip-resnet-50x16-visual-float32.onnx           | resnet-50x16          | visual  | float32     | ✅          |         669 |
| models/clip-resnet-50x16-visual-float16.onnx           | resnet-50x16          | visual  | float16     | ✅          |         335 |
| models/clip-resnet-50x16-visual-qint8.onnx             | resnet-50x16          | visual  | qint8       | ✅          |         169 |
| models/clip-resnet-50x16-visual-quint8.onnx            | resnet-50x16          | visual  | quint8      | ✅          |         169 |
| models/clip-resnet-50x16-textual-float32.onnx          | resnet-50x16          | textual | float32     | ✅          |         495 |
| models/clip-resnet-50x16-textual-float16.onnx          | resnet-50x16          | textual | float16     | ✅          |         248 |
| models/clip-resnet-50x16-textual-qint8.onnx            | resnet-50x16          | textual | qint8       | ✅          |         124 |
| models/clip-resnet-50x16-textual-quint8.onnx           | resnet-50x16          | textual | quint8      | ✅          |         124 |
| models/clip-resnet-50x64-visual-float32.onnx           | resnet-50x64          | visual  | float32     | ✅          |        1681 |
| models/clip-resnet-50x64-visual-float16.onnx           | resnet-50x64          | visual  | float16     | ✅          |         840 |
| models/clip-resnet-50x64-visual-qint8.onnx             | resnet-50x64          | visual  | qint8       | ✅          |         424 |
| models/clip-resnet-50x64-visual-quint8.onnx            | resnet-50x64          | visual  | quint8      | ✅          |         424 |
| models/clip-resnet-50x64-textual-float32.onnx          | resnet-50x64          | textual | float32     | ✅          |         812 |
| models/clip-resnet-50x64-textual-float16.onnx          | resnet-50x64          | textual | float16     | ✅          |         406 |
| models/clip-resnet-50x64-textual-qint8.onnx            | resnet-50x64          | textual | qint8       | ✅          |         204 |
| models/clip-resnet-50x64-textual-quint8.onnx           | resnet-50x64          | textual | quint8      | ✅          |         204 |
| models/clip-vit-base-patch16-visual-float32.onnx       | vit-base-patch16      | visual  | float32     | ✅          |         345 |
| models/clip-vit-base-patch16-visual-float16.onnx       | vit-base-patch16      | visual  | float16     | ✅          |         173 |
| models/clip-vit-base-patch16-visual-qint8.onnx         | vit-base-patch16      | visual  | qint8       | ✅          |          87 |
| models/clip-vit-base-patch16-visual-quint8.onnx        | vit-base-patch16      | visual  | quint8      | ✅          |          87 |
| models/clip-vit-base-patch16-textual-float32.onnx      | vit-base-patch16      | textual | float32     | ✅          |         254 |
| models/clip-vit-base-patch16-textual-float16.onnx      | vit-base-patch16      | textual | float16     | ✅          |         127 |
| models/clip-vit-base-patch16-textual-qint8.onnx        | vit-base-patch16      | textual | qint8       | ✅          |          64 |
| models/clip-vit-base-patch16-textual-quint8.onnx       | vit-base-patch16      | textual | quint8      | ✅          |          64 |
| models/clip-vit-base-patch32-visual-float32.onnx       | vit-base-patch32      | visual  | float32     | ✅          |         352 |
| models/clip-vit-base-patch32-visual-float16.onnx       | vit-base-patch32      | visual  | float16     | ✅          |         176 |
| models/clip-vit-base-patch32-visual-qint8.onnx         | vit-base-patch32      | visual  | qint8       | ✅          |          89 |
| models/clip-vit-base-patch32-visual-quint8.onnx        | vit-base-patch32      | visual  | quint8      | ✅          |          89 |
| models/clip-vit-base-patch32-textual-float32.onnx      | vit-base-patch32      | textual | float32     | ✅          |         254 |
| models/clip-vit-base-patch32-textual-float16.onnx      | vit-base-patch32      | textual | float16     | ✅          |         127 |
| models/clip-vit-base-patch32-textual-qint8.onnx        | vit-base-patch32      | textual | qint8       | ✅          |          64 |
| models/clip-vit-base-patch32-textual-quint8.onnx       | vit-base-patch32      | textual | quint8      | ✅          |          64 |
| models/clip-vit-large-patch14-visual-float32.onnx      | vit-large-patch14     | visual  | float32     | ✅          |        1216 |
| models/clip-vit-large-patch14-visual-float16.onnx      | vit-large-patch14     | visual  | float16     | ✅          |         608 |
| models/clip-vit-large-patch14-visual-qint8.onnx        | vit-large-patch14     | visual  | qint8       | ✅          |         306 |
| models/clip-vit-large-patch14-visual-quint8.onnx       | vit-large-patch14     | visual  | quint8      | ✅          |         306 |
| models/clip-vit-large-patch14-textual-float32.onnx     | vit-large-patch14     | textual | float32     | ✅          |         495 |
| models/clip-vit-large-patch14-textual-float16.onnx     | vit-large-patch14     | textual | float16     | ✅          |         248 |
| models/clip-vit-large-patch14-textual-qint8.onnx       | vit-large-patch14     | textual | qint8       | ✅          |         124 |
| models/clip-vit-large-patch14-textual-quint8.onnx      | vit-large-patch14     | textual | quint8      | ✅          |         124 |
| models/clip-vit-large-patch14-336-visual-float32.onnx  | vit-large-patch14-336 | visual  | float32     | ✅          |        1217 |
| models/clip-vit-large-patch14-336-visual-float16.onnx  | vit-large-patch14-336 | visual  | float16     | ✅          |         609 |
| models/clip-vit-large-patch14-336-visual-qint8.onnx    | vit-large-patch14-336 | visual  | qint8       | ✅          |         307 |
| models/clip-vit-large-patch14-336-visual-quint8.onnx   | vit-large-patch14-336 | visual  | quint8      | ✅          |         307 |
| models/clip-vit-large-patch14-336-textual-float32.onnx | vit-large-patch14-336 | textual | float32     | ✅          |         495 |
| models/clip-vit-large-patch14-336-textual-float16.onnx | vit-large-patch14-336 | textual | float16     | ✅          |         248 |
| models/clip-vit-large-patch14-336-textual-qint8.onnx   | vit-large-patch14-336 | textual | qint8       | ✅          |         124 |
| models/clip-vit-large-patch14-336-textual-quint8.onnx  | vit-large-patch14-336 | textual | quint8      | ✅          |         124 |

[onnx]: https://onnx.ai/
[clip]: https://github.com/openai/CLIP
[clip-model-card]: https://github.com/openai/CLIP/blob/b4ae44927b78d0093b556e3ce43cbdcff422017a/model-card.md