---
base_model:
- Qwen/Qwen2-VL-2B-Instruct
---

This is the [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct) model, converted to OpenVINO format with fp16 weights.
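
A conversion like this is typically produced with Optimum Intel's `optimum-cli` exporter. The exact command used for this repository is not recorded here, so the following is only a sketch, assuming a recent `optimum-intel` release with Qwen2-VL support:

```
optimum-cli export openvino --model Qwen/Qwen2-VL-2B-Instruct --weight-format fp16 Qwen2-VL-2B-Instruct-ov-fp16
```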

Use OpenVINO GenAI to run inference on this model:

- Install OpenVINO GenAI nightly and pillow:
```
pip install --upgrade --pre pillow openvino-genai openvino openvino-tokenizers --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly
```
- Download a test image: `curl -O "https://storage.openvinotoolkit.org/test_data/images/dog.jpg"`
- Run inference:

```python
import numpy as np
import openvino as ov
import openvino_genai
from PIL import Image

# Choose GPU instead of CPU in the line below to run the model on Intel integrated or discrete GPU
pipe = openvino_genai.VLMPipeline("./Qwen2-VL-2B-Instruct-ov-fp16", "CPU")
pipe.start_chat()

# Load the image and wrap it as a uint8 OpenVINO tensor of shape [1, height, width, 3]
image = Image.open("dog.jpg")
image_data = np.array(image.getdata()).reshape(1, image.size[1], image.size[0], 3).astype(np.uint8)
image_data = ov.Tensor(image_data)

prompt = "Can you describe the image?"
result = pipe.generate(prompt, image=image_data, max_new_tokens=100)
print(result.texts[0])
```
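
Instead of waiting for the full reply, output can also be streamed chunk by chunk as it is generated. A minimal sketch, assuming the `pipe`, `prompt`, and `image_data` objects from the example above; the callback receives each newly decoded piece of text, and returning `False` tells the pipeline to keep generating:

```python
# Print each decoded chunk as soon as it is produced
def streamer(subword):
    print(subword, end="", flush=True)
    return False  # False = continue generating, True = stop early

pipe.generate(prompt, image=image_data, max_new_tokens=100, streamer=streamer)
```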

See the [OpenVINO GenAI repository](https://github.com/openvinotoolkit/openvino.genai?tab=readme-ov-file#performing-visual-language-text-generation) for more details on visual-language text generation.
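
Because the example above calls `start_chat()`, the pipeline keeps the conversation history, so follow-up questions can be asked in the same session without resending the image. A sketch continuing from that example (the follow-up prompt is illustrative):

```python
# A follow-up turn in the same chat session; the image does not need to be passed again
result = pipe.generate("What breed could the dog be?", max_new_tokens=100)
print(result.texts[0])

pipe.finish_chat()  # clear the chat history when done
```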