Anush008 committed on
Commit
c0e5444
1 Parent(s): a7fc0cd

Create README.md

Files changed (1)
  1. README.md +29 -0
README.md ADDED
@@ -0,0 +1,29 @@
---
license: apache-2.0
pipeline_tag: image-classification
---

ONNX port of [sentence-transformers/clip-ViT-B-32](https://huggingface.co/sentence-transformers/clip-ViT-B-32).

This model is intended to be used for image classification and similarity search.

### Usage

Here's an example of performing inference with the model using [FastEmbed](https://github.com/qdrant/fastembed).

```py
from fastembed import ImageEmbedding

images = [
    "./path/to/image1.jpg",
    "./path/to/image2.jpg",
]

# Load the ONNX vision model via FastEmbed.
model = ImageEmbedding(model_name="Qdrant/clip-ViT-B-32-vision")

# embed() yields one NumPy vector per image.
embeddings = list(model.embed(images))

# [
#   array([-0.1115, 0.0097, 0.0052, 0.0195, ...], dtype=float32),
#   array([-0.1019, 0.0635, -0.0332, 0.0522, ...], dtype=float32)
# ]
```
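
For similarity search, the resulting vectors can be compared directly. Here's a minimal sketch, assuming the `embeddings` list from the snippet above and plain NumPy; it scores the two example images with cosine similarity.

```py
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: dot product of the L2-normalized vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Values closer to 1.0 indicate more similar images.
score = cosine_similarity(embeddings[0], embeddings[1])
print(f"cosine similarity: {score:.4f}")
```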