llava_eval_image_embed : failed to eval

#6
by jartine - opened

llama.cpp at head (fa046eafbc70bf97dcf39843af0323f19a8c9ac3) fails with:

llava_eval_image_embed : failed to eval

When I run:

./llava-cli -m ~/weights/llava-v1.6-34b.Q5_K_M.gguf --mmproj ~/weights/llava-v1.6-mmproj-f16.gguf --image ~/Pictures/lemurs.jpg -e -p '### User: What do you see?\n### Assistant:' --temp 0

Any idea?

Owner (edited Mar 22)

I'm able to reproduce this. The issue of llava-cli failing to recognize images with these quants seems to have been introduced in f30ea47a87ed4446ad55adb265755dc9102956a2. I suspect that commit introduced the bug, but it's also possible the quants themselves have an issue with the latest version of llama.cpp.

This weekend I'll try to look at the code, submit a fix if something obvious turns up, and rule out any quant issue.

Thank you so much for your work on llamafile

Thanks @cjpais and @jartine. Is there a resolution on this?

Would love to run this locally with llamafile or llama.cpp.

Thanks!

Owner

I don't have a resolution at the moment. I haven't had time to look into llama.cpp and attempt to contribute a patch.

Hopefully I can get to it in a week or two.

@jartine have you tried specifying/increasing the context size, e.g. adding '-c 8192'? That did it for me.
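For reference, here is the failing command from earlier in the thread with the suggested context-size flag added. The flag value is the one mentioned above; the paths are the original poster's, so adjust them to your setup:

```shell
# Same invocation as in the original report, with -c 8192 so the prompt
# plus the image-embedding tokens fit inside the context window
# (the suggested fix from this thread).
./llava-cli \
  -m ~/weights/llava-v1.6-34b.Q5_K_M.gguf \
  --mmproj ~/weights/llava-v1.6-mmproj-f16.gguf \
  --image ~/Pictures/lemurs.jpg \
  -c 8192 \
  -e -p '### User: What do you see?\n### Assistant:' --temp 0
```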


You should check the token count after the image is encoded, to decide whether the default context capacity (maybe 512/1024) is enough for your image tokens.
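As a rough sketch of that arithmetic, assuming the CLIP ViT-L/14 encoder at 336px and LLaVA-1.6's AnyRes splitting of an image into up to 5 tiles (verify these numbers against your mmproj):

```shell
# Each 336x336 tile is cut into 14x14 patches, and each patch becomes
# one embedding token; LLaVA-1.6 can emit up to 5 tiles per image.
side=$(( 336 / 14 ))            # 24 patches per side
per_tile=$(( side * side ))     # 576 tokens per tile
max_tiles=5
echo "max image tokens: $(( per_tile * max_tiles ))"   # prints 2880
```

Since 2880 already exceeds a default context of 512 or 1024, the eval fails before the text prompt is even processed, which is why raising `-c` resolves it.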
