---
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- clip
- vision
datasets:
- Ziyang/yfcc15m
- conceptual_captions
---
<h1 align="center">UForm</h1>
<h3 align="center">
Multi-Modal Inference Library<br/>
For Semantic Search Applications<br/>
</h3>

---

UForm is a Multi-Modal Inference package, designed to encode Multi-Lingual Texts, Images, and, soon, Audio, Video, and Documents, into a shared vector space!

This is the model card of the __English-only model__ with:

* a 12-layer BERT (6 layers for unimodal encoding and the rest for multimodal encoding)
* ViT-L/14 (image resolution is 224x224)
* multiple embedding sizes: 64, 256, 512, 768

If you need a multilingual model, check [this one](https://huggingface.co/unum-cloud/uform-vl-multilingual).

## Evaluation

The following metrics were obtained with multimodal re-ranking (text-to-image retrieval):

| Dataset           | Recall@1 | Recall@5 | Recall@10 |
| :---------------- | -------: | -------: | --------: |
| Zero-Shot Flickr  |    0.693 |    0.875 |     0.923 |
| Zero-Shot MS-COCO |    0.382 |    0.617 |     0.728 |

ImageNet-Top1: 0.518 \
ImageNet-Top5: 0.756

## Installation

```bash
pip install uform[onnx-gpu]
```
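
Note that `zsh` treats the square brackets as glob characters, so in that shell the extra needs quoting:

```bash
pip install "uform[onnx-gpu]"
```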

## Usage

To load the model:

```python
import uform

model = uform.get_model_onnx('unum-cloud/uform-vl-english-large', device='gpu', dtype='fp16')
```
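
To run on the CPU instead, the same call should accept CPU-oriented arguments; the exact `device`/`dtype` strings below are assumptions mirroring the GPU example, so check the values your uform version documents:

```python
import uform

# Assumed CPU configuration (hypothetical argument values)
model = uform.get_model_onnx('unum-cloud/uform-vl-english-large', device='cpu', dtype='fp32')
```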

To encode data:

```python
from PIL import Image

text = 'a small red panda in a zoo'
image = Image.open('red_panda.jpg')

image_data = model.preprocess_image(image)
text_data = model.preprocess_text(text)

# `return_features=True` also returns the intermediate unimodal features,
# which the multimodal encoder consumes below
image_features, image_embedding = model.encode_image(image_data, return_features=True)
text_features, text_embedding = model.encode_text(text_data, return_features=True)

score, joint_embedding = model.encode_multimodal(
    image_features=image_features,
    text_features=text_features,
    attention_mask=text_data['attention_mask'],
    return_scores=True
)
```

To get features:

```python
image_features, image_embedding = model.encode_image(image_data, return_features=True)
text_features, text_embedding = model.encode_text(text_data, return_features=True)
```

These features can later be used to produce joint multimodal encodings faster, as the first layers of the transformer can be skipped:

```python
joint_embedding = model.encode_multimodal(
    image_features=image_features,
    text_features=text_features,
    attention_mask=text_data['attention_mask']
)
```

There are two ways to calculate the semantic compatibility between an image and a text: [Cosine Similarity](#cosine-similarity) and [Matching Score](#matching-score).

### Cosine Similarity

```python
import torch.nn.functional as F

similarity = F.cosine_similarity(image_embedding, text_embedding)
```

The `similarity` will belong to the `[-1, 1]` range, with `1` meaning a perfect match.

__Pros__:

- Computationally cheap.
- Only unimodal embeddings are required, and unimodal encoding is faster than joint encoding.
- Suitable for retrieval in large collections (see the sketch below).

__Cons__:

- Takes into account only coarse-grained features.
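
Because only unimodal embeddings are compared, a whole collection can be scored in one vectorized pass. A minimal sketch, assuming the embeddings are PyTorch tensors and that `image_corpus` is a hypothetical matrix of pre-computed image embeddings (one row per image):

```python
import torch
import torch.nn.functional as F

# Hypothetical pre-computed collection: one row per image, each row
# produced earlier by model.encode_image(...); 256 is one of the
# supported embedding sizes, used here as a placeholder
image_corpus = torch.randn(10_000, 256)

text_data = model.preprocess_text('a small red panda in a zoo')
text_embedding = model.encode_text(text_data)  # shape: (1, 256)

# Broadcasted cosine similarity against every image at once
scores = F.cosine_similarity(text_embedding, image_corpus)  # shape: (10_000,)
top_scores, top_indices = scores.topk(10)  # best candidates for re-ranking
```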

### Matching Score

Unlike cosine similarity, unimodal embeddings are not enough.
A joint embedding is needed, and the resulting `score` will belong to the `[0, 1]` range, with `1` meaning a perfect match.

```python
score = model.get_matching_scores(joint_embedding)
```

__Pros__:

- Joint embedding captures fine-grained features.
- Suitable for re-ranking, i.e. sorting retrieval results (see the sketch below).

__Cons__:

- Resource-intensive.
- Not suitable for retrieval in large collections.
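
Putting the two together, the usual pattern is to retrieve candidates with cosine similarity and re-rank only those with the matching score. A sketch under the same assumptions as above, where `candidate_features` is a hypothetical list of cached unimodal image features for the retrieved candidates, and `text_features`/`text_data` come from the Usage section:

```python
# Re-rank the retrieved candidates with the joint encoder
scores = []
for image_features in candidate_features:
    joint_embedding = model.encode_multimodal(
        image_features=image_features,
        text_features=text_features,
        attention_mask=text_data['attention_mask']
    )
    # .item() assumes a single-element score tensor per candidate
    scores.append(model.get_matching_scores(joint_embedding).item())

# Candidate indices, highest matching score first
reranked = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
```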