Update README.md (#4)
Update README.md (dec87ae9f9ff3e7d50b3a0d2ffddc984997964de)
Co-authored-by: Manuel Faysse <manu@users.noreply.huggingface.co>
README.md
CHANGED
---
license: apache-2.0
language:
- en
- it
- fr
- de
- es
base_model:
- MrLight/dse-qwen2-2b-mrl-v1
tags:
- vidore
---

# mcdse-2b-v1

![](cover.png)

mcdse-2b-v1 is an experimental model designed for multilingual visual document retrieval.

This model allows you to embed page/slide screenshots and query them using natural language. Whether it's tables, graphs, charts, schemas, images, or text, mcdse-2b-v1 encodes everything into a single embedding vector, eliminating the need for traditional OCR, document layout analysis, reading order detection, chunking, table/formula extraction...

- **Understands 🇮🇹 Italian, 🇪🇸 Spanish, 🇬🇧 English, 🇫🇷 French and 🇩🇪 German**

- **Matryoshka Representation Learning:** shrink embeddings from 1536 to 256 dimensions while maintaining 95% of the quality. A 6x reduction with negligible impact on performance!

- **Top-tier Binarization**: 768-dimensional binary vectors retain 99% of the retrieval quality of the original 1536-dimensional float vectors. With binary vectors, you can encode **100 million multilingual pages in just 10GB**.

- **Fast vLLM inference:** run inference with vLLM to serve embeddings efficiently at scale, ready for production.

For more information about this model or how it was trained, visit the [announcement blogpost](https://huggingface.co/blog/marco/announcing-mcdse-2b-v1).

## Usage

**Initialize model and processor**
```python
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration
from PIL import Image
import torch
import math

model = Qwen2VLForConditionalGeneration.from_pretrained(
    'marco/mcdse-2b-v1',
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
    device_map="cuda:0"
).eval()

min_pixels = 1 * 28 * 28
max_pixels = 960 * 28 * 28

processor = AutoProcessor.from_pretrained(
    'marco/mcdse-2b-v1',
    min_pixels=min_pixels,
    max_pixels=max_pixels
)

model.padding_side = "left"
processor.tokenizer.padding_side = "left"

document_prompt = "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>What is shown in this image?<|im_end|>\n<|endoftext|>"

query_prompt = "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>Query: %s<|im_end|>\n<|endoftext|>"
```
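
The encoding functions below expect PIL images of rendered pages. As a minimal sketch, assuming the optional `pdf2image` package (which requires Poppler) and a local `report.pdf` that is not part of this repository, you could rasterize a PDF like this:
```python
from pdf2image import convert_from_path

# Render each PDF page to a PIL image; the pages can then be passed to encode_documents below
pages = convert_from_path("report.pdf", dpi=150)
```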

**Encode queries**
```python
def encode_queries(queries: list[str], dimension: int):
    # Queries are text-only, so a tiny placeholder image satisfies the vision input
    dummy_image = Image.new('RGB', (56, 56))
    inputs = processor(
        text=[query_prompt % x for x in queries],
        images=[dummy_image for _ in queries],
        videos=None,
        padding='longest',
        return_tensors='pt'
    ).to('cuda:0')

    cache_position = torch.arange(0, len(queries))
    inputs = model.prepare_inputs_for_generation(
        **inputs, cache_position=cache_position, use_cache=False)

    with torch.no_grad():
        output = model(
            **inputs,
            return_dict=True,
            output_hidden_states=True
        )

    # Use the last hidden state of the final token, truncated to the requested dimension
    embeddings = output.hidden_states[-1][:, -1]
    return torch.nn.functional.normalize(embeddings[:, :dimension], p=2, dim=-1)
```
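
As a quick sanity check (the queries below are made up), you can embed a couple of queries and truncate them to 1024 Matryoshka dimensions via the `dimension` argument:
```python
# Hypothetical queries in two of the supported languages
queries = [
    "What was the total revenue in 2023?",
    "Quali sono stati i ricavi totali nel 2023?",
]
query_embeddings = encode_queries(queries, dimension=1024)
print(query_embeddings.shape)  # torch.Size([2, 1024])
```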

**Encode documents**
```python
def round_by_factor(number: float, factor: int) -> int:
    return round(number / factor) * factor

def ceil_by_factor(number: float, factor: int) -> int:
    return math.ceil(number / factor) * factor

def floor_by_factor(number: float, factor: int) -> int:
    return math.floor(number / factor) * factor

def smart_resize(height: int, width: int) -> tuple[int, int]:
    # Snap both sides to multiples of 28 and keep the total pixel count
    # within the processor's [min_pixels, max_pixels] budget.
    h_bar = max(28, round_by_factor(height, 28))
    w_bar = max(28, round_by_factor(width, 28))
    if h_bar * w_bar > max_pixels:
        beta = math.sqrt((height * width) / max_pixels)
        h_bar = floor_by_factor(height / beta, 28)
        w_bar = floor_by_factor(width / beta, 28)
    elif h_bar * w_bar < min_pixels:
        beta = math.sqrt(min_pixels / (height * width))
        h_bar = ceil_by_factor(height * beta, 28)
        w_bar = ceil_by_factor(width * beta, 28)
    return h_bar, w_bar

def resize(image: Image.Image):
    new_height, new_width = smart_resize(image.height, image.width)
    # PIL expects (width, height)
    return image.resize((new_width, new_height))

def encode_documents(documents: list[Image.Image], dimension: int):
    inputs = processor(
        text=[document_prompt] * len(documents),
        images=[resize(x) for x in documents],
        videos=None,
        padding='longest',
        return_tensors='pt'
    ).to('cuda:0')

    cache_position = torch.arange(0, len(documents))
    inputs = model.prepare_inputs_for_generation(
        **inputs, cache_position=cache_position, use_cache=False)

    with torch.no_grad():
        output = model(
            **inputs,
            return_dict=True,
            output_hidden_states=True
        )

    embeddings = output.hidden_states[-1][:, -1]
    return torch.nn.functional.normalize(embeddings[:, :dimension], p=2, dim=-1)
```
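
Putting the two functions together, a minimal retrieval sketch (file names are placeholders) looks like the following. Since both outputs are L2-normalized, cosine similarity is just a dot product:
```python
# Placeholder page screenshots; swap in your own images (or the pdf2image pages above)
documents = [Image.open(f"page_{i}.png") for i in range(1, 4)]

doc_embeddings = encode_documents(documents, dimension=1024)    # shape (3, 1024)
query_embeddings = encode_queries(
    ["What is the projected revenue for 2025?"], dimension=1024
)                                                               # shape (1, 1024)

# Embeddings are L2-normalized, so the dot product equals cosine similarity
scores = query_embeddings @ doc_embeddings.T                    # shape (1, 3)
print(scores.argmax(dim=-1))  # index of the best-matching page
```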

### vLLM
This model supports vLLM; visit the [announcement blogpost](https://huggingface.co/blog/marco/announcing-mcdse-2b-v1#deployment) to learn more.

## Results
Given the scarcity of publicly available datasets for multilingual document image retrieval, the model has been evaluated using a custom-built dataset. This evaluation dataset was specifically designed to benchmark the model's performance across the supported languages.

### NDCG@5 (float)
| | Average | English | Italian | Spanish | French | German |
|---------------------|------------|------------|------------|------------|------------|------------|
| **1536 dimensions** | | | | | | |
| dse-qwen2-2b-mrl-v1 | 79.5 | 79.2 | 80.2 | 77.9 | 80.6 | 79.6 |
| mcdse-2b-v1 | **82.2** | **80.8** | **81.2** | **80.7** | **84.5** | **83.8** |
| | **+3.28%** | **+1.98%** | **+1.23%** | **+3.47%** | **+4.62%** | **+5.01%** |
| **1024 dimensions** | | | | | | |
| dse-qwen2-2b-mrl-v1 | 78.3 | 78.8 | 78.5 | 76.5 | 80 | 77.5 |
| mcdse-2b-v1 | **81.7** | **80** | **80.2** | **80.1** | **84** | **84.3** |
| | **+4.23%** | **+1.75%** | **+2.12%** | **+4.49%** | **+4.76%** | **+8.07%** |
| **768 dimensions** | | | | | | |
| dse-qwen2-2b-mrl-v1 | 77.8 | 78.4 | 78.3 | 75.6 | 80.8 | 75.9 |
| mcdse-2b-v1 | **81.1** | **79.6** | **79.9** | **79.2** | **83.3** | **83.3** |
| | **+4.02%** | **+1.51%** | **+2.00%** | **+4.55%** | **+3.00%** | **+8.88%** |
| **512 dimensions** | | | | | | |
| dse-qwen2-2b-mrl-v1 | 76.2 | 77.6 | 75.9 | 73.1 | 79.2 | 75.2 |
| mcdse-2b-v1 | **79.3** | **78.5** | **79.1** | **75.8** | **81.4** | **81.7** |
| | **+3.91%** | **+1.15%** | **+4.05%** | **+3.56%** | **+2.70%** | **+7.96%** |
| **384 dimensions** | | | | | | |
| dse-qwen2-2b-mrl-v1 | 75.7 | 76.2 | 75.5 | 74.6 | 78.4 | 74 |
| mcdse-2b-v1 | **78.8** | **77.5** | **78.5** | **76.1** | **80.4** | **81.4** |
| | **+3.86%** | **+1.68%** | **+3.82%** | **+1.97%** | **+2.49%** | **+9.09%** |
| **256 dimensions** | | | | | | |
| dse-qwen2-2b-mrl-v1 | 73.5 | 74.5 | 73.6 | 70.6 | 74.8 | 73.8 |
| mcdse-2b-v1 | **78.1** | **78.5** | **77.6** | **76.2** | **80.1** | **77.9** |
| | **+5.89%** | **+5.10%** | **+5.15%** | **+7.35%** | **+6.62%** | **+5.26%** |

### NDCG@5 (binary)
| | Average | English | Italian | Spanish | French | German |
|---------------------|-------------|-------------|-------------|-------------|-------------|-------------|
| **1536 dimensions** | | | | | | |
| dse-qwen2-2b-mrl-v1 | 75.0 | 75.8 | 75.4 | 72.4 | 78.1 | 73.2 |
| mcdse-2b-v1 | **80.6** | **79.5** | **76.9** | **81.9** | **83.7** | **80.8** |
| | **+6.93%** | **+4.65%** | **+1.95%** | **+11.60%** | **+6.69%** | **+9.41%** |
| **1024 dimensions** | | | | | | |
| dse-qwen2-2b-mrl-v1 | 72.2 | 74.8 | 71 | 70.8 | 74.6 | 69.6 |
| mcdse-2b-v1 | **79.3** | **78.4** | **75.4** | **80.8** | **82.6** | **79.5** |
| | **+9.05%** | **+4.59%** | **+5.84%** | **+12.38%** | **+9.69%** | **+12.45%** |
| **768 dimensions** | | | | | | |
| dse-qwen2-2b-mrl-v1 | 70.1 | 71.7 | 69.3 | 69.8 | 73.7 | 65.9 |
| mcdse-2b-v1 | **78.8** | **77.1** | **75.4** | **80** | **83** | **78.5** |
| | **+11.07%** | **+7.00%** | **+8.09%** | **+12.75%** | **+11.20%** | **+16.05%** |
| **512 dimensions** | | | | | | |
| dse-qwen2-2b-mrl-v1 | 66.5 | 70 | 65.4 | 63.7 | 70.2 | 63 |
| mcdse-2b-v1 | **76.6** | **74.8** | **74.2** | **77.7** | **80.9** | **75.3** |
| | **+13.21%** | **+6.42%** | **+11.86%** | **+18.02%** | **+13.23%** | **+16.33%** |
| **384 dimensions** | | | | | | |
| dse-qwen2-2b-mrl-v1 | 61.1 | 62.7 | 58.5 | 58.6 | 65.1 | 60.8 |
| mcdse-2b-v1 | **74.3** | **74.5** | **71.4** | **77.2** | **75.2** | **73** |
| | **+17.67%** | **+15.84%** | **+18.07%** | **+24.09%** | **+13.43%** | **+16.71%** |
| **256 dimensions** | | | | | | |
| dse-qwen2-2b-mrl-v1 | 54.3 | 59 | 56.5 | 53.6 | 53 | 49.6 |
| mcdse-2b-v1 | **70.9** | **72.6** | **66.4** | **73.5** | **72.6** | **69.2** |
| | **+23.31%** | **+18.73%** | **+14.91%** | **+27.07%** | **+27.00%** | **+28.32%** |
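
The binary scores above come from thresholding the float embeddings at zero. A minimal sketch of that idea (not necessarily the exact evaluation setup, and reusing the `documents` list from the usage example above): binarize 768-dimensional embeddings with `numpy.packbits` and rank pages by Hamming distance. Packed this way, each page costs 96 bytes, so 100 million pages come to roughly 9.6 GB, in line with the 10GB figure above.
```python
import numpy as np

def binarize(embeddings: torch.Tensor) -> np.ndarray:
    # Threshold at zero, then pack 8 bits per byte: 768 dims -> 96 bytes per page
    bits = (embeddings > 0).cpu().numpy().astype(np.uint8)
    return np.packbits(bits, axis=-1)

doc_codes = binarize(encode_documents(documents, dimension=768))
query_codes = binarize(encode_queries(["total revenue in 2023"], dimension=768))

# Hamming distance = popcount of XOR; lower means more similar
xor = np.bitwise_xor(query_codes[:, None, :], doc_codes[None, :, :])
hamming = np.unpackbits(xor, axis=-1).sum(axis=-1)
print(hamming.argsort(axis=-1))  # pages ranked from most to least similar
```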