I wonder how you created the GGUF model?

by davideuler

I've tried several gorilla-openfunctions-v2 GGUF models from GitHub, and I also created a q8_0 quantization on my own machine. However, none of these models works. When run with llama.cpp,
./main -ngl 33 -m gorilla-openfunctions-v2.IQ3_M.gguf --color -c 16384 --temp 0 -p "You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.\n### Instruction: Which function can I use to transpose a image? \n### Response: "

these models all end up with the error message:
libc++abi: terminating due to uncaught exception of type std::out_of_range: unordered_map::at: key not found

But your model here works. Thanks for the great work! How did you create your GGUF models?

Hi there David!

Thanks for the kind note. I ran into a similar issue myself! Converting with llama.cpp tends to require a bit of trial and error unless you're confident in a few key details about the model, namely its architecture and vocab type.

I use a wrapper script I created to help me coordinate these conversions more easily; you can check it out here.

For this model, I believe I needed to use the normal convert.py, which is ideal for llama & mistral architectures (Gorilla follows Llama, as you can see in their config.json file), along with the bpe (byte-pair encoding) vocab type (the convert.py script defaults to spm for SentencePiece tokenizers, and also supports HuggingFace fast tokenizers, hfft). Finally, the expected vocab size didn't match the model's vocab size on the first attempt, so I added the pad-vocab option which llama.cpp exposes.
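If you'd rather skip my wrapper and call convert.py directly, the equivalent invocation would look roughly like this (a sketch, not verbatim what my script runs; the local model path and output filename are placeholders, and exact flag names can vary between llama.cpp versions):

python convert.py ./gorilla-openfunctions-v2 --vocab-type bpe --pad-vocab --outfile gorilla-openfunctions-v2.f16.gguf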

So to convert this model with my script, I ran the following command:

./autogguf -uv gorilla-llm/gorilla-openfunctions-v2 -vt bpe -pv

This means:

  • update to latest llama.cpp & install/compile dependencies (-u)
  • print verbose script output (-v)
  • model type of llama (-m, omitted as llama is default)
  • vocab type of bpe (-vt)
  • pad vocab enabled (-pv)
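One more step, in case it's useful: the conversion above produces an unquantized GGUF, so to get a quantized file like the q8_0 you mentioned, you'd run llama.cpp's quantize tool on the converted output afterwards. A minimal sketch (filenames are placeholders, and the binary's name/location can differ between llama.cpp versions):

./quantize gorilla-openfunctions-v2.f16.gguf gorilla-openfunctions-v2.Q8_0.gguf Q8_0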

Let me know if you have further questions; have a great one!

Thanks for the detailed explanation. I appreciate your work; it's really helpful. I've checked out the project.
