Getting idefics2 into gguf format for use with llama.cpp and/or ollama?

#43
by PaulCapestany - opened

idefics2 sounds potentially very compelling. However, some folks have encountered problems attempting to convert the idefics2 model into GGUF format. This is an issue because there seems to be a good amount of interest in getting it working with llama.cpp as well as ollama (both of which use GGUF).

I'm relatively new to ML and not exactly sure how to tackle this myself, so any suggestions/advice would be very much appreciated!

requesting gguf format +1

Also, does this model have function-calling capabilities? I've tried most of the multimodal models; they have very good performance but lack function calling. :(
