---
license: apache-2.0
language:
- en
tags:
- api
datasets:
- gorilla-llm/APIBench
---
I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information
gorilla-falcon-7b-hf-v0 - GGUF
- Model creator: gorilla-llm
- Original model: gorilla-falcon-7b-hf-v0
Important Update for Falcon Models in llama.cpp Versions After October 18, 2023
As noted on the llama.cpp GitHub repository, all llama.cpp releases after October 18, 2023, will require re-quantized models due to the new BPE tokenizer.
Good news! My re-quantization process for Falcon models is nearly complete. Download the latest quantized models to ensure compatibility with recent llama.cpp builds.
Key Points:
- Stay Informed: Keep an eye on software application release schedules using llama.cpp libraries.
- Monitor Upload Times: Re-quantization is almost done. Watch for updates on my Hugging Face Model pages.
Important Compatibility Note: Old software will continue to work with the old Falcon models, but expect updated software to support only the new models.
This change primarily affects Falcon and Starcoder models, with other models remaining unaffected.
Brief
The Gorilla model variant is quite special: it outputs syntactically correct API calls for a vast number of known APIs.
Read the original Model Card carefully to get the best results; consulting additional video tutorials may also help.
About GGUF format
GGUF is the current file format used by the ggml library.
A growing list of software supports it and can therefore use this model.
The core project using the ggml library is the llama.cpp project by Georgi Gerganov.
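For illustration, a GGUF file begins with a small binary header: the magic bytes `GGUF`, a format version, and the tensor and metadata key-value counts. A minimal Python sketch of reading those fields, assuming the little-endian GGUFv2 header layout (a sketch for understanding the format, not a full parser):

```python
import struct

def read_gguf_header(path):
    """Read the fixed-size GGUF header fields from a file."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError("not a GGUF file")
        # GGUFv2: uint32 version, uint64 tensor count, uint64 metadata kv count
        version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
    return {"version": version, "n_tensors": n_tensors, "n_kv": n_kv}
```

In practice you would not parse this by hand; tools built on ggml/llama.cpp read and validate the header for you. The sketch just shows why a GGUF file is self-describing: the metadata key-value section that follows the header carries the model's architecture and tokenizer information.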
Quantization variants
There are a number of quantized files available. Here is how to choose the best one for you:
Legacy quants
Q4_0, Q4_1, Q5_0, Q5_1 and Q8_0 are legacy quantization types.
Nevertheless, they are fully supported, since several circumstances can cause certain models to be incompatible with the modern K-quants.
Falcon 7B models cannot be quantized to K-quants.
K-quants
K-quants are based on the idea that quantizing different parts of the model affects quality in different ways. By quantizing some parts more aggressively and others less, you get either a more capable model at the same file size, or a smaller file with lower memory load at comparable quality. So, if possible, use K-quants. With Q6_K you should find it very hard to detect any quality difference from the original model; asking your model the same question twice may produce bigger quality differences than the quantization itself.
Original Model Card:
license: apache-2.0
End of original Model Card
Please consider supporting my work
Coming Soon: I am preparing a sponsorship/crowdfunding campaign for my work, evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I am hoping for support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.