---
license: mit
library_name: transformers
pipeline_tag: zero-shot-image-classification
---
You probably do not need this unless you are training your own IP Adapters.

Modified version of the vision encoder of [CLIP-ViT-H-14-laion2B-s32B-b79K](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K) to handle 448 x 448 inputs
vs the original 224 x 224 inputs. It will probably not work for classification (as is), but will work for IP+ adapters that use CLIP-ViT-H, though they will need to be
fine-tuned a little more.

Hidden layer outputs go from `(257, 1280)` to `(1025, 1280)`, which can be digested by the Resampler without modification or weight resizing.
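A minimal sketch of checking the resized hidden-state shape with `transformers`. The repository id is a placeholder (replace it with this model's id or a local path), and the use of the penultimate hidden states mirrors how IP-Adapter+ implementations typically feed the Resampler:

```python
import torch
from transformers import CLIPVisionModelWithProjection

# Placeholder: substitute this repository's id or a local checkout.
model = CLIPVisionModelWithProjection.from_pretrained("path/to/this-model")

# Dummy 448 x 448 input; a real pipeline would run images through the
# repository's image processor instead.
pixel_values = torch.randn(1, 3, 448, 448)

with torch.no_grad():
    outputs = model(pixel_values=pixel_values, output_hidden_states=True)

# Penultimate hidden states, as typically consumed by the IP-Adapter+ Resampler.
hidden = outputs.hidden_states[-2]
print(hidden.shape)  # torch.Size([1, 1025, 1280]) vs. [1, 257, 1280] for the 224 x 224 original
```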