Cannot input image in ollama for gemma-3-27b-it-GGUF:Q4_K_M

#3
by kitc - opened

Model variant: gemma-3-27b-it-GGUF:Q4_K_M
I am hosting the model in Ollama and using the Python API to send requests to it:

from ollama import Client

def read_image_text(host, model, image_path):
    client = Client(host=host)
    response = client.chat(
        model=model,
        messages=[
            {
                "role": "user",
                "content": "Write the text in the image",
                "images": [image_path],  # local path; base64 data also works
            }
        ],
    )
    return response["message"]["content"]

It raises this error:

ollama._types.ResponseError: Failed to create new sequence: failed to process inputs: this model is missing data required for image input (status code: 500)
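A small sketch for telling this failure apart from other server errors when catching `ollama.ResponseError`. Note that `is_vision_input_error` is a hypothetical helper, not part of the ollama package; it just matches the message text above:

```python
def is_vision_input_error(message: str) -> bool:
    """Return True if an Ollama error message indicates the loaded GGUF
    lacks the vision data needed for image input."""
    # Ollama responds with HTTP 500 and this phrase when the model file
    # has no multimodal projector weights bundled with it.
    return "missing data required for image input" in message
```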

I think this error is raised when the model does not support image input.

I'm having the same issue with the 4B version. I think it's something to do with the vision component not being properly linked to the model, but resolving that is a little beyond my skill set. The standard quantized versions hosted on Ollama work, so it must be something to do with how it's configured here.

Unsloth AI org

Do you guys know if it works on llama.cpp? :)

Same issue here.

Yes, it works with llama.cpp (text only). I tried the 4B version.
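Text-only working is consistent with the GGUF containing only the language-model weights. In llama.cpp, image input additionally requires the multimodal projector as a separate `mmproj` GGUF file. A rough sketch of the difference (the file names here are assumptions, and the exact binary name varies between llama.cpp versions):

```shell
# Text-only works because only the LM weights are needed:
./llama-cli -m gemma-3-27b-it-Q4_K_M.gguf -p "Hello"

# Image input also needs the vision projector file (hypothetical
# mmproj filename; without it the model is effectively text-only):
./llama-mtmd-cli -m gemma-3-27b-it-Q4_K_M.gguf \
    --mmproj mmproj-gemma-3-27b-it-f16.gguf \
    --image photo.png -p "Write the text in the image"
```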
