rafox2005 committed on
Commit 64a2fe3
1 Parent(s): 7356ae4

Upload folder using huggingface_hub

Files changed (1)
  1. README.md +166 -0

README.md ADDED

---
datasets:
- AIDC-AI/Ovis-dataset
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- MLLM
- autoquant
- exl2
---

# Ovis1.6-Llama3.2-3B
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/637aebed7ce76c3b834cea37/3IK823BZ8w-mz_QfeYkDn.png" width="30%"/>
</div>

## Introduction
[GitHub](https://github.com/AIDC-AI/Ovis) | [Demo](https://huggingface.co/spaces/AIDC-AI/Ovis1.6-Llama3.2-3B) | [Paper](https://arxiv.org/abs/2405.20797)

We are pleased to open-source **Ovis1.6-Llama3.2-3B**, an integral part of the Ovis1.6 family and the current state-of-the-art (SOTA) open model for edge-side multimodal tasks.

The Ovis family employs an innovative Multimodal Large Language Model (MLLM) architecture designed to structurally align visual and textual embeddings. Ovis1.6-Llama3.2-3B excels on common industry benchmarks, surpassing numerous open-source and proprietary multimodal models, and is particularly well suited for local intelligence, on-device computing, and edge computing scenarios.

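The "structural alignment" idea can be pictured as follows: instead of projecting ViT features directly into the LLM's continuous hidden space, each visual patch is mapped to a probability distribution over a learnable visual vocabulary, and its embedding is the probability-weighted combination of rows of a visual embedding table, mirroring how text tokens index the LLM's text embedding table. The snippet below is a minimal conceptual sketch only; the dimensions, module names, and the softmax head are illustrative assumptions rather than the model's actual implementation (see the paper for the exact formulation).

```python
import torch
import torch.nn as nn

# Illustrative sizes only -- not the real Ovis1.6 hyperparameters.
vit_dim, llm_dim, visual_vocab = 1152, 3072, 16384

# Hypothetical "visual head": turns each ViT patch feature into a
# probability distribution over a learnable visual vocabulary.
visual_head = nn.Sequential(nn.Linear(vit_dim, visual_vocab), nn.Softmax(dim=-1))

# Visual embedding table, structurally analogous to the LLM's text embedding table.
visual_embed = nn.Embedding(visual_vocab, llm_dim)

patch_features = torch.randn(1, 256, vit_dim)   # e.g. ViT output for one image
probs = visual_head(patch_features)             # (1, 256, visual_vocab)
visual_tokens = probs @ visual_embed.weight     # probability-weighted rows -> (1, 256, llm_dim)
# `visual_tokens` would then be interleaved with ordinary text token embeddings
# and fed to the LLM backbone.
```
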
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/658a8a837959448ef5500ce5/TIlymOb86R6_Mez3bpmcB.png" width="100%" />
</div>

## Model
Built upon Ovis1.5, **Ovis1.6** further enhances high-resolution image processing, is trained on a larger, more diverse, and higher-quality dataset, and refines the training process with DPO training following instruction tuning.

| Ovis MLLMs | ViT | LLM | Model Weights | Demo |
|:------------------|:-----------:|:------------------:|:---------------------------------------------------------------:|:----------------------------------------------------------------:|
| Ovis1.6-Gemma2-9B | Siglip-400M | Gemma2-9B-It | [Huggingface](https://huggingface.co/AIDC-AI/Ovis1.6-Gemma2-9B) | [Space](https://huggingface.co/spaces/AIDC-AI/Ovis1.6-Gemma2-9B) |
| Ovis1.6-Llama3.2-3B | Siglip-400M | Llama-3.2-3B-Instruct | [Huggingface](https://huggingface.co/AIDC-AI/Ovis1.6-Llama3.2-3B) | [Space](https://huggingface.co/spaces/AIDC-AI/Ovis1.6-Llama3.2-3B) |

## Performance
**Ovis1.6-Llama3.2-3B** leads the [OpenCompass](https://github.com/open-compass/VLMEvalKit) benchmark among open-source MLLMs under **4B** parameters, even surpassing Llama-3.2-11B-Vision-Instruct.

<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/637aebed7ce76c3b834cea37/fFm7B-PMh6Y2eaBZ4QUSc.png" width="100%" />
</div>

## Usage
Below is a code snippet to run Ovis with multimodal inputs. For additional usage instructions, including an inference wrapper and a Gradio UI, please refer to [Ovis GitHub](https://github.com/AIDC-AI/Ovis?tab=readme-ov-file#inference).
```bash
pip install torch==2.2.0 transformers==4.44.2 numpy==1.24.3 pillow==10.3.0
```
```bash
pip install flash-attn --no-build-isolation
```
```python
import torch
from PIL import Image
from transformers import AutoModelForCausalLM

# load model
model = AutoModelForCausalLM.from_pretrained("AIDC-AI/Ovis1.6-Llama3.2-3B",
                                             torch_dtype=torch.bfloat16,
                                             multimodal_max_length=8192,
                                             trust_remote_code=True).cuda()
text_tokenizer = model.get_text_tokenizer()
visual_tokenizer = model.get_visual_tokenizer()

# enter image path and prompt
image_path = input("Enter image path: ")
image = Image.open(image_path)
text = input("Enter prompt: ")
query = f'<image>\n{text}'

# format conversation
prompt, input_ids, pixel_values = model.preprocess_inputs(query, [image])
attention_mask = torch.ne(input_ids, text_tokenizer.pad_token_id)
input_ids = input_ids.unsqueeze(0).to(device=model.device)
attention_mask = attention_mask.unsqueeze(0).to(device=model.device)
pixel_values = [pixel_values.to(dtype=visual_tokenizer.dtype, device=visual_tokenizer.device)]

# generate output
with torch.inference_mode():
    gen_kwargs = dict(
        max_new_tokens=1024,
        do_sample=False,
        top_p=None,
        top_k=None,
        temperature=None,
        repetition_penalty=None,
        eos_token_id=model.generation_config.eos_token_id,
        pad_token_id=text_tokenizer.pad_token_id,
        use_cache=True
    )
    output_ids = model.generate(input_ids, pixel_values=pixel_values, attention_mask=attention_mask, **gen_kwargs)[0]
    output = text_tokenizer.decode(output_ids, skip_special_tokens=True)
    print(f'Output:\n{output}')
```

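If you plan to run many queries (for example, over a folder of images), the single-image steps above can be wrapped in a small helper. The sketch below simply reuses the calls already shown; the `ovis_chat` name and signature are illustrative, not part of the released API, and it assumes `model`, `text_tokenizer`, and `visual_tokenizer` have been loaded as in the snippet above.

```python
def ovis_chat(image: Image.Image, prompt: str, max_new_tokens: int = 1024) -> str:
    """Run one single-image query through the already-loaded Ovis model."""
    query = f'<image>\n{prompt}'
    _, input_ids, pixel_values = model.preprocess_inputs(query, [image])
    attention_mask = torch.ne(input_ids, text_tokenizer.pad_token_id)
    input_ids = input_ids.unsqueeze(0).to(device=model.device)
    attention_mask = attention_mask.unsqueeze(0).to(device=model.device)
    pixel_values = [pixel_values.to(dtype=visual_tokenizer.dtype, device=visual_tokenizer.device)]
    with torch.inference_mode():
        output_ids = model.generate(
            input_ids,
            pixel_values=pixel_values,
            attention_mask=attention_mask,
            max_new_tokens=max_new_tokens,
            do_sample=False,
            eos_token_id=model.generation_config.eos_token_id,
            pad_token_id=text_tokenizer.pad_token_id,
            use_cache=True,
        )[0]
    return text_tokenizer.decode(output_ids, skip_special_tokens=True)

# example (hypothetical file name):
# print(ovis_chat(Image.open('example_image1.jpeg'), 'Describe the content of this image.'))
```
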
<details>
<summary>Batch inference</summary>

```python
batch_inputs = [
    ('example_image1.jpeg', 'Describe the content of this image.'),
    ('example_image2.jpeg', 'What is the equation in the image?')
]

batch_input_ids = []
batch_attention_mask = []
batch_pixel_values = []

for image_path, text in batch_inputs:
    image = Image.open(image_path)
    query = f'<image>\n{text}'
    prompt, input_ids, pixel_values = model.preprocess_inputs(query, [image])
    attention_mask = torch.ne(input_ids, text_tokenizer.pad_token_id)
    input_ids = input_ids.unsqueeze(0).to(device=model.device)
    attention_mask = attention_mask.unsqueeze(0).to(device=model.device)
    pixel_values = [pixel_values.to(dtype=visual_tokenizer.dtype, device=visual_tokenizer.device)]
    batch_input_ids.append(input_ids.squeeze())
    batch_attention_mask.append(attention_mask.squeeze())
    batch_pixel_values.append(pixel_values)

# left-pad input_ids and attention_mask to a common length (flip, right-pad, flip back),
# then truncate to the model's multimodal_max_length
pad_batch_input_ids = torch.nn.utils.rnn.pad_sequence([i.flip(dims=[0]) for i in batch_input_ids], batch_first=True, padding_value=0.0).flip(dims=[1])
pad_batch_input_ids = pad_batch_input_ids[:, -model.config.multimodal_max_length:]
pad_batch_attention_mask = torch.nn.utils.rnn.pad_sequence([i.flip(dims=[0]) for i in batch_attention_mask], batch_first=True, padding_value=False).flip(dims=[1])
pad_batch_attention_mask = pad_batch_attention_mask[:, -model.config.multimodal_max_length:]
pad_batch_pixel_values = [item for sublist in batch_pixel_values for item in sublist]

# generate output
with torch.inference_mode():
    gen_kwargs = dict(
        max_new_tokens=1024,
        do_sample=False,
        top_p=None,
        top_k=None,
        temperature=None,
        repetition_penalty=None,
        eos_token_id=model.generation_config.eos_token_id,
        pad_token_id=text_tokenizer.pad_token_id,
        use_cache=True
    )
    output_ids = model.generate(pad_batch_input_ids, pixel_values=pad_batch_pixel_values, attention_mask=pad_batch_attention_mask, **gen_kwargs)

for i in range(len(batch_input_ids)):
    output = text_tokenizer.decode(output_ids[i], skip_special_tokens=True)
    print(f'Output_{i}:\n{output}')
```
</details>

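For interactive use, it can help to stream tokens as they are generated rather than waiting for the full response. The sketch below uses the standard `TextIteratorStreamer` from `transformers` with the single-image inputs prepared earlier; it assumes the remote-code `generate` forwards the `streamer` keyword argument to the underlying language model, which may not hold for every revision of this repository.

```python
from threading import Thread
from transformers import TextIteratorStreamer

# reuses `model`, `text_tokenizer`, `input_ids`, `attention_mask`,
# `pixel_values`, and `gen_kwargs` from the single-image example above
streamer = TextIteratorStreamer(text_tokenizer, skip_prompt=True, skip_special_tokens=True)

thread = Thread(
    target=model.generate,
    args=(input_ids,),
    kwargs=dict(pixel_values=pixel_values, attention_mask=attention_mask,
                streamer=streamer,  # assumption: passed through to the underlying LLM's generate
                **gen_kwargs),
)
thread.start()
for new_text in streamer:
    print(new_text, end='', flush=True)
thread.join()
```
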
## Citation
If you find Ovis useful, please cite the paper:
```bibtex
@article{lu2024ovis,
  title={Ovis: Structural Embedding Alignment for Multimodal Large Language Model},
  author={Shiyin Lu and Yang Li and Qing-Guo Chen and Zhao Xu and Weihua Luo and Kaifu Zhang and Han-Jia Ye},
  year={2024},
  journal={arXiv:2405.20797}
}
```

## License
This project is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0.txt) (SPDX-License-Identifier: Apache-2.0).

## Disclaimer
We used compliance-checking algorithms during training to ensure, to the best of our ability, that the trained model is compliant. Due to the complexity of the data and the diversity of language-model usage scenarios, we cannot guarantee that the model is completely free of copyright issues or improper content. If you believe anything infringes on your rights or generates improper content, please contact us, and we will address the matter promptly.