OpenCLIP
PyTorch
clip
apf1 committed
Commit 62a4fc7
1 Parent(s): bd1bf88

Update README.md

Files changed (1)
  1. README.md +0 -32
README.md CHANGED
@@ -65,38 +65,6 @@ These weights are directly usable in OpenCLIP (image + text).
  | GeoDE | 0.9253 |
  | **Average** | **0.68039** |
 
- ## Model Usage
- ### With OpenCLIP
- ```
- import torch
- import torch.nn.functional as F
- from urllib.request import urlopen
- from PIL import Image
- from open_clip import create_model_from_pretrained, get_tokenizer
-
- model, preprocess = create_model_from_pretrained('hf-hub:apple/DFN2B-CLIP-ViT-L-14')
- tokenizer = get_tokenizer('ViT-L-14')
-
- image = Image.open(urlopen(
-     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
- ))
- image = preprocess(image).unsqueeze(0)
-
- labels_list = ["a dog", "a cat", "a donut", "a beignet"]
- text = tokenizer(labels_list, context_length=model.context_length)
-
- with torch.no_grad(), torch.cuda.amp.autocast():
-     image_features = model.encode_image(image)
-     text_features = model.encode_text(text)
-     image_features = F.normalize(image_features, dim=-1)
-     text_features = F.normalize(text_features, dim=-1)
-
- text_probs = torch.sigmoid(image_features @ text_features.T * model.logit_scale.exp() + model.logit_bias)
-
- zipped_list = list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]]))
- print("Label probabilities: ", zipped_list)
- ```
-
  ## Citation
  ```bibtex
  @article{fang2023data,
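
The snippet removed above scores labels with a SigLIP-style sigmoid readout that reads `model.logit_bias`; a plain CLIP checkpoint is more commonly queried with a softmax over the temperature-scaled cosine similarities. A minimal sketch of that softmax readout with OpenCLIP, reusing the `hf-hub:apple/DFN2B-CLIP-ViT-L-14` id, image URL, and label list from the removed snippet (and the tokenizer's default context length), might look like:

```python
# Minimal zero-shot classification sketch with OpenCLIP, using the standard CLIP
# softmax readout instead of the sigmoid/logit_bias readout in the removed snippet.
import torch
import torch.nn.functional as F
from urllib.request import urlopen

from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

model, preprocess = create_model_from_pretrained('hf-hub:apple/DFN2B-CLIP-ViT-L-14')
tokenizer = get_tokenizer('ViT-L-14')
model.eval()

# Fetch and preprocess the example image.
image = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
image = preprocess(image).unsqueeze(0)

# Tokenize the candidate labels (default context length for this tokenizer).
labels_list = ["a dog", "a cat", "a donut", "a beignet"]
text = tokenizer(labels_list)

with torch.no_grad():
    image_features = F.normalize(model.encode_image(image), dim=-1)
    text_features = F.normalize(model.encode_text(text), dim=-1)
    # Standard CLIP readout: softmax over temperature-scaled cosine similarities.
    text_probs = (model.logit_scale.exp() * image_features @ text_features.T).softmax(dim=-1)

print("Label probabilities:", list(zip(labels_list, [round(p.item(), 3) for p in text_probs[0]])))
```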