GGUF format?

#12
by hvgupta1 - opened

Will this model be available in GGUF format?

I tried to convert it myself with ggml-org/gguf-my-repo and got the following: `ERROR:hf-to-gguf:Model Idefics3ForConditionalGeneration is not supported.`

It also appears that llama.cpp's convert_hf_to_gguf.py does not support the Idefics3 architecture (source: Bug: Quantizing HuggingFaceM4/Idefics3-8B-Llama3 fails with error #8902)
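For anyone who wants to check before attempting a conversion: a minimal sketch of how you might pre-screen a model's declared architecture against a known-supported list, by reading its `config.json`. The `SUPPORTED` set here is an illustrative subset, not the real list (the actual set is whatever architectures `convert_hf_to_gguf.py` registers in your llama.cpp checkout):

```python
import json

# Illustrative subset only -- the authoritative list is the set of
# architectures registered in llama.cpp's convert_hf_to_gguf.py.
SUPPORTED = {"LlamaForCausalLM", "MistralForCausalLM", "Qwen2ForCausalLM"}

def unsupported_architectures(config_path):
    """Return the architectures declared in a HF config.json that are
    not in the (assumed) supported set."""
    with open(config_path) as f:
        archs = json.load(f).get("architectures", [])
    return [a for a in archs if a not in SUPPORTED]

# Idefics3 declares Idefics3ForConditionalGeneration, which is not in
# the supported set, so the conversion script rejects it.
```

This only checks the declared architecture string; it doesn't guarantee a supported model will convert cleanly.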

Hugging Face TB Research org

Hi! Currently Idefics3 is not supported by llama.cpp, so the conversion script will not work. We will work on this if there is enough interest from the community, so do react to this message if you're interested!
