generall93 Anush008 committed on
Commit
1aad242
1 Parent(s): a7fc0cd

Create README.md (#1)


- Create README.md (c0e5444235f6862180c7001a9aa8332426bc1781)
- Update README.md (a636590e595dbbd798647c9dd4550d5652fba969)


Co-authored-by: Anush Shetty <Anush008@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +29 -0
README.md ADDED
@@ -0,0 +1,29 @@
---
license: apache-2.0
pipeline_tag: image-classification
---

ONNX port of [sentence-transformers/clip-ViT-B-32](https://huggingface.co/sentence-transformers/clip-ViT-B-32).

This model is intended for image classification and similarity search.

### Usage

Here's an example of performing inference with the model using [FastEmbed](https://github.com/qdrant/fastembed):

```py
from fastembed import ImageEmbedding

images = [
    "./path/to/image1.jpg",
    "./path/to/image2.jpg",
]

model = ImageEmbedding(model_name="Qdrant/clip-ViT-B-32-vision")
embeddings = list(model.embed(images))

# [
#   array([-0.1115,  0.0097,  0.0052,  0.0195, ...], dtype=float32),
#   array([-0.1019,  0.0635, -0.0332,  0.0522, ...], dtype=float32)
# ]
```
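Since similarity search is one of the stated use cases, here is a minimal sketch of comparing two embeddings with cosine similarity. This is an illustration, not part of the FastEmbed API: it assumes NumPy is installed, and the vectors below are toy stand-ins for the 512-dimensional embeddings the model actually returns.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a = np.asarray(a, dtype=np.float32)
    b = np.asarray(b, dtype=np.float32)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy vectors standing in for the embeddings returned by model.embed(...).
emb1 = [0.1, 0.3, -0.2]
emb2 = [0.1, 0.25, -0.15]

print(cosine_similarity(emb1, emb2))  # close to 1.0 for similar vectors
```

In a real similarity search you would embed a corpus of images once, then rank them by cosine similarity against a query embedding, or store the vectors in a vector database such as Qdrant.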