Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
llama2-7b-hf-guanaco - GGUF
- Model creator: https://huggingface.co/TheTravellingEngineer/
- Original model: https://huggingface.co/TheTravellingEngineer/llama2-7b-hf-guanaco/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama2-7b-hf-guanaco.Q2_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.Q2_K.gguf) | Q2_K | 2.36GB |
| [llama2-7b-hf-guanaco.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [llama2-7b-hf-guanaco.IQ3_S.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [llama2-7b-hf-guanaco.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [llama2-7b-hf-guanaco.IQ3_M.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [llama2-7b-hf-guanaco.Q3_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.Q3_K.gguf) | Q3_K | 3.07GB |
| [llama2-7b-hf-guanaco.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [llama2-7b-hf-guanaco.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [llama2-7b-hf-guanaco.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [llama2-7b-hf-guanaco.Q4_0.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.Q4_0.gguf) | Q4_0 | 3.56GB |
| [llama2-7b-hf-guanaco.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [llama2-7b-hf-guanaco.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [llama2-7b-hf-guanaco.Q4_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.Q4_K.gguf) | Q4_K | 3.8GB |
| [llama2-7b-hf-guanaco.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [llama2-7b-hf-guanaco.Q4_1.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.Q4_1.gguf) | Q4_1 | 3.95GB |
| [llama2-7b-hf-guanaco.Q5_0.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.Q5_0.gguf) | Q5_0 | 4.33GB |
| [llama2-7b-hf-guanaco.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [llama2-7b-hf-guanaco.Q5_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.Q5_K.gguf) | Q5_K | 4.45GB |
| [llama2-7b-hf-guanaco.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [llama2-7b-hf-guanaco.Q5_1.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.Q5_1.gguf) | Q5_1 | 4.72GB |
| [llama2-7b-hf-guanaco.Q6_K.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.Q6_K.gguf) | Q6_K | 5.15GB |
| [llama2-7b-hf-guanaco.Q8_0.gguf](https://huggingface.co/RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf/blob/main/llama2-7b-hf-guanaco.Q8_0.gguf) | Q8_0 | 6.67GB |
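Any of the files above can be loaded locally with `llama-cpp-python`. The snippet below is a minimal sketch, assuming the package and `huggingface_hub` are installed (`pip install llama-cpp-python huggingface_hub`) and using the Q4_K_M file as an example; swap in any filename from the table and tune the context size for your hardware.

```python
# Minimal sketch: download one quant from this repo and run a short completion.
# Assumptions: llama-cpp-python and huggingface_hub are installed; the Q4_K_M
# quant is used here purely as an example.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/TheTravellingEngineer_-_llama2-7b-hf-guanaco-gguf",
    filename="llama2-7b-hf-guanaco.Q4_K_M.gguf",
)

llm = Llama(model_path=model_path, n_ctx=2048)  # n_ctx is illustrative; adjust as needed

output = llm(
    "### Human: What is GGUF quantization?### Assistant:",
    max_tokens=128,
    stop=["### Human:"],
)
print(output["choices"][0]["text"])
```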
Original model description:
The base model is Meta's Llama-2-7b-hf. It was fine-tuned with supervised fine-tuning (SFT) on the Guanaco dataset. The prompt format is similar to that of the original Guanaco model; a hedged sketch of that format is shown below.
This repo contains the merged fp16 model.
**Legal Disclaimer: This model is bound by the usage restrictions of the original Llama-2 model and comes with no warranty or guarantees of any kind.**
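The original card does not spell out the exact template, but the original Guanaco models and the timdettmers/openassistant-guanaco dataset use a `### Human:` / `### Assistant:` turn format, so a prompt along these lines is a reasonable starting point (verify against the model's own outputs):

```python
# Hypothetical Guanaco-style prompt builder (assumption: this fine-tune follows
# the original Guanaco turn format; confirm by inspecting the model's responses).
def build_prompt(user_message: str) -> str:
    return f"### Human: {user_message}### Assistant:"

print(build_prompt("Give me three uses for a quantized 7B model."))
```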
---
license: llama2
datasets:
- timdettmers/openassistant-guanaco
language:
- en
reference: https://gist.github.com/younesbelkada/9f7f75c94bdc1981c8ca5cc937d4a4da
---