altndrr committed
Commit 8a6d537 (1 parent: f285d0a)

Create README.md (#3)

Files changed (1)
  1. README.md +49 -0

README.md ADDED
# Category Search from External Databases (CaSED)

Disclaimer: This model card is adapted from the official repository, which can be found [here](https://github.com/altndrr/vic). The paper can be found [here](https://arxiv.org/abs/2306.00917).

## Intended uses & limitations

You can use the model for vocabulary-free image classification, i.e., classification with CLIP-like models without a pre-defined list of class names. The model retrieves candidate class names from the captions of an external database and ranks them against the input image.

## How to use

Here is how to use this model:

```python
import requests
from PIL import Image
from transformers import AutoModel, CLIPProcessor

# download an image from the internet
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# load the model and the processor
model = AutoModel.from_pretrained("altndrr/cased", trust_remote_code=True)
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

# get the model outputs
images = processor(images=[image], return_tensors="pt", padding=True)
outputs = model(images, alpha=0.5)
labels, scores = outputs["vocabularies"][0], outputs["scores"][0]

# print the top 5 most likely labels for the image
values, indices = scores.topk(5)
print("\nTop predictions:\n")
for value, index in zip(values, indices):
    print(f"{labels[index]:>16s}: {100 * value.item():.2f}%")
```
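
The `alpha` argument weights the similarity terms the paper combines when scoring candidate names (roughly, how much the visual match counts relative to the textual one). As a minimal sketch, assuming the `model` and `images` objects from the example above, you can sweep it and watch the top prediction move:

```python
# hedged sketch: sweep the `alpha` weighting from the example above
# (reuses the `model` and `images` objects defined there)
for alpha in (0.25, 0.5, 0.75):
    outputs = model(images, alpha=alpha)
    labels, scores = outputs["vocabularies"][0], outputs["scores"][0]
    best = scores.argmax().item()
    print(f"alpha={alpha:.2f}: {labels[best]} ({100 * scores[best].item():.2f}%)")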

## Citation

```bibtex
@misc{conti2023vocabularyfree,
    title={Vocabulary-free Image Classification},
    author={Alessandro Conti and Enrico Fini and Massimiliano Mancini and Paolo Rota and Yiming Wang and Elisa Ricci},
    year={2023},
    eprint={2306.00917},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```