Upload Qwen2-VL-2B-Instruct-F16.gguf
Reference: https://github.com/ggerganov/llama.cpp/issues/9246
Thanks to https://github.com/HimariO for the implementation.
Clone https://github.com/HimariO/llama.cpp/tree/qwen2-vl (a clone sketch follows).
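A minimal way to fetch that branch (assuming the branch is named `qwen2-vl`, as the tree URL suggests):
```
git clone -b qwen2-vl https://github.com/HimariO/llama.cpp.git
cd llama.cpp
```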
1. Download the official `Qwen/Qwen2-VL-2B-Instruct` checkpoint, then convert the LLM part of the model to GGUF format using `convert_hf_to_gguf.py`:
```
python3 convert_hf_to_gguf.py "/path/to/Qwen2-VL-2B-Instruct/model-dir"
```
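The file uploaded here is F16; to request that precision explicitly, a hedged sketch assuming the script's standard `--outtype` flag:
```
python3 convert_hf_to_gguf.py --outtype f16 "/path/to/Qwen2-VL-2B-Instruct/model-dir"
```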
2. Convert the vision encoder to GGUF format with `qwen2_vl_surgery.py`:
```
PYTHONPATH=$PYTHONPATH:$(pwd)/gguf-py python3 examples/llava/qwen2_vl_surgery.py "/path/to/Qwen2-VL-2B-Instruct/model-dir"
```
3. Build `llama-qwen2vl-cli` the same way you would build `llama-llava-cli`; a sketch follows.
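A minimal build sketch, assuming a CMake build and that the target shares the name of the binary used in step 4:
```
cmake -B build
cmake --build build --config Release --target llama-qwen2vl-cli
```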
4. Run the CLI. It is recommended to resize the image to a resolution below 640x640 first, so inference doesn't take forever on the CPU backend (see the resize sketch after the command):
```
./llama-qwen2vl-cli -m qwen2-vl-decoder.gguf --mmproj qwen2vl-vision.gguf -p "Describe this image." --image "demo.jpg"
```
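For the resize suggested in step 4, one option is ImageMagick (an assumption; any image tool works). The `>` geometry modifier only shrinks images larger than the bounds and preserves aspect ratio:
```
convert demo.jpg -resize '640x640>' demo_small.jpg
```
Then pass `--image "demo_small.jpg"` to the command above.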
- .gitattributes +1 -0
- Qwen2-VL-2B-Instruct-F16.gguf +3 -0
.gitattributes:
```
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Qwen2-VL-2B-Instruct-F16.gguf filter=lfs diff=lfs merge=lfs -text
```
Qwen2-VL-2B-Instruct-F16.gguf:
```
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:487670e446b9e7efbd65ccd44bc34673eb02905dd9b7ce0508e1545294b34db1
+size 3093667488
```
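The upload itself is stored via Git LFS; to verify a downloaded copy, compare its digest against the `oid` in the pointer above:
```
sha256sum Qwen2-VL-2B-Instruct-F16.gguf
```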