How to use it

1. Get the code:

```shell
git clone https://github.com/HimariO/llama.cpp.git
cd llama.cpp
git switch qwen2-vl
```
2. Edit the Makefile (e.g. `nano Makefile`) to add a `llama-qwen2vl-cli` build target, as shown in this diff:

```diff
diff --git a/Makefile b/Makefile
index 8a903d7e..51403be2 100644
--- a/Makefile
+++ b/Makefile
@@ -1485,6 +1485,14 @@ libllava.a: examples/llava/llava.cpp \
     $(OBJ_ALL)
     $(CXX) $(CXXFLAGS) -static -fPIC -c $< -o $@ -Wno-cast-qual
 
+llama-qwen2vl-cli: examples/llava/qwen2vl-cli.cpp \
+	examples/llava/llava.cpp \
+	examples/llava/llava.h \
+	examples/llava/clip.cpp \
+	examples/llava/clip.h \
+	$(OBJ_ALL)
+	$(CXX) $(CXXFLAGS) $< $(filter-out %.h $<,$^) -o $@ $(LDFLAGS) -Wno-cast-qual
+
 llama-llava-cli: examples/llava/llava-cli.cpp \
     examples/llava/llava.cpp \
     examples/llava/llava.h \
```
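Instead of editing the Makefile by hand, the hunk can be saved to a file and applied with `git apply`. This is only a sketch: the filename `qwen2vl-cli.patch` is arbitrary, and it assumes the context lines still match your checkout of the `qwen2-vl` branch (line numbers may drift as the branch moves).

```shell
# Write the diff to a patch file (filename is arbitrary).
cat > qwen2vl-cli.patch <<'EOF'
--- a/Makefile
+++ b/Makefile
@@ -1485,6 +1485,14 @@ libllava.a: examples/llava/llava.cpp \
     $(OBJ_ALL)
     $(CXX) $(CXXFLAGS) -static -fPIC -c $< -o $@ -Wno-cast-qual
 
+llama-qwen2vl-cli: examples/llava/qwen2vl-cli.cpp \
+	examples/llava/llava.cpp \
+	examples/llava/llava.h \
+	examples/llava/clip.cpp \
+	examples/llava/clip.h \
+	$(OBJ_ALL)
+	$(CXX) $(CXXFLAGS) $< $(filter-out %.h $<,$^) -o $@ $(LDFLAGS) -Wno-cast-qual
+
 llama-llava-cli: examples/llava/llava-cli.cpp \
     examples/llava/llava.cpp \
     examples/llava/llava.h \
EOF

# From inside the llama.cpp checkout (not run here):
# git apply --check qwen2vl-cli.patch && git apply qwen2vl-cli.patch
```

`git apply --check` dry-runs the patch first, so a stale hunk fails loudly instead of half-applying.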
3. Build with CUDA:

```shell
cmake . -DGGML_CUDA=ON -DCMAKE_CUDA_COMPILER=$(which nvcc) -DTCNN_CUDA_ARCHITECTURES=61
make -j35
```
4. Run inference:

```shell
./bin/llama-qwen2vl-cli -m ./Cylingo/XinYuan-VL-2B-GGUF/XinYuan-VL-2B-GGUF-Q4_K_M.gguf \
    --mmproj ./Cylingo/XinYuan-VL-2B-GGUF/qwen2vl-vision.gguf \
    -p "Describe the image" --image "./Cylingo/XinYuan-VL-2B-GGUF/1.png"
```
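The run command can be wrapped in a small script so the model, projector, and image paths are easy to swap. This is just a sketch: the paths mirror the example above, and the assembled command is only echoed so you can inspect it before running.

```shell
# Paths from the example above; adjust to your local layout.
MODEL=./Cylingo/XinYuan-VL-2B-GGUF/XinYuan-VL-2B-GGUF-Q4_K_M.gguf
MMPROJ=./Cylingo/XinYuan-VL-2B-GGUF/qwen2vl-vision.gguf
IMAGE=./Cylingo/XinYuan-VL-2B-GGUF/1.png
PROMPT="Describe the image"

# Assemble the command; remove the `echo` below to actually run it.
CMD="./bin/llama-qwen2vl-cli -m $MODEL --mmproj $MMPROJ -p \"$PROMPT\" --image $IMAGE"
echo "$CMD"
```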
Model details: GGUF format, 1.54B parameters, `qwen2vl` architecture; 4-bit and 16-bit quantizations are available.
