ostris committed
Commit 932972f
1 Parent(s): 3a433ac

Update README.md

Files changed (1): README.md +9 -0
README.md CHANGED
@@ -1,3 +1,12 @@
  ---
  license: mit
+ library_name: transformers
+ pipeline_tag: zero-shot-classification
  ---
+ You probably do not need this unless you are training your own IP Adapters.
+
+ Modified version of the vision encoder of [CLIP-ViT-H-14-laion2B-s32B-b79K](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K) to handle 448 x 448 inputs
+ vs the original 224 x 224 inputs. It will probably not work for classification (as is), but will work for IP+ adapters that use CLIP-ViT-H, though they will need to be
+ fine-tuned a little more.
+
+ Hidden layer outputs go from `(257, 1280)` to `(1025, 1280)`, which can be digested by the Resampler without modification or weight resizing.
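
A minimal sketch of how the larger input size shows up when loading the encoder with `transformers`. The repo id `ostris/CLIP-ViT-H-448` is a placeholder (substitute the actual model id), and feeding the penultimate hidden state to a Resampler is the convention commonly used by IP-Adapter "plus" variants, so treat this as an illustration rather than part of the commit.

```python
import torch
from transformers import CLIPVisionModelWithProjection

# Placeholder repo id -- substitute the actual model id for this checkpoint.
model_id = "ostris/CLIP-ViT-H-448"

image_encoder = CLIPVisionModelWithProjection.from_pretrained(model_id)

# Dummy batch: one 448 x 448 RGB image (use CLIPImageProcessor on real images).
pixel_values = torch.randn(1, 3, 448, 448)

with torch.no_grad():
    outputs = image_encoder(pixel_values, output_hidden_states=True)

# IP-Adapter "plus" variants typically feed the penultimate hidden state to the Resampler.
# With a 14 px patch size, 448 x 448 gives (448 / 14) ** 2 + 1 = 1025 tokens.
penultimate = outputs.hidden_states[-2]
print(penultimate.shape)  # expected: torch.Size([1, 1025, 1280])
```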