---
license: apache-2.0
datasets:
- UCSC-VLAA/Recap-DataComp-1B
---
# Model Card for ViT-H-14-CLIPS-224-Recap-DataComp-1B

## Model Details

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/UCSC-VLAA/CLIPS
- **Paper:** https://arxiv.org/abs/2411.16828
- **Project Page:** https://ucsc-vlaa.github.io/CLIPS/

## Model Usage
### With OpenCLIP

**Note:** Due to differences in the default epsilon values for LayerNorm initialization between JAX and PyTorch, we made some modifications in `open_clip/transformer.py` to align the model's behavior. Refer to https://github.com/UCSC-VLAA/CLIPS for more details.
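For reference, the two frameworks' defaults differ by an order of magnitude: PyTorch's `nn.LayerNorm` uses `eps=1e-5`, while Flax's `LayerNorm` defaults to `epsilon=1e-6`. The sketch below only illustrates the kind of alignment involved; the actual patch is the one described in the CLIPS repository linked above.

```python
import torch.nn as nn

# Illustrative sketch, not the actual CLIPS patch: build LayerNorm with the
# JAX/Flax default epsilon (1e-6) instead of PyTorch's default (1e-5), so the
# normalization matches a JAX-trained checkpoint.
width = 1280  # example width for a ViT-H/14 transformer block
jax_aligned_ln = nn.LayerNorm(width, eps=1e-6)
print(jax_aligned_ln.eps)  # 1e-06
```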
```python
import torch
import torch.nn.functional as F
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Load the model, its preprocessing transforms, and the tokenizer from the Hugging Face Hub.
model, preprocess = create_model_from_pretrained('hf-hub:UCSC-VLAA/ViT-H-14-CLIPS-224-Recap-DataComp-1B')
tokenizer = get_tokenizer('hf-hub:UCSC-VLAA/ViT-H-14-CLIPS-224-Recap-DataComp-1B')

image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)

text = tokenizer(["a diagram", "a dog", "a cat", "a beignet"], context_length=model.context_length)

with torch.no_grad(), torch.cuda.amp.autocast():
    # Encode and L2-normalize the image and text features.
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)  # prints: [[0., 0., 0., 1.0]]
```
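Continuing from the example above, the predicted label can be read off with an argmax over `text_probs`; the `labels` list below simply repeats the prompts passed to the tokenizer.

```python
# Follow-up sketch: map the highest probability in text_probs back to its prompt.
labels = ["a diagram", "a dog", "a cat", "a beignet"]
top_prob, top_idx = text_probs.squeeze(0).max(dim=-1)
print(f"Predicted label: {labels[top_idx.item()]} (p={top_prob.item():.3f})")
```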