Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

llama-7b - GGUF
- Model creator: https://huggingface.co/huggyllama/
- Original model: https://huggingface.co/huggyllama/llama-7b/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [llama-7b.Q2_K.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.Q2_K.gguf) | Q2_K | 2.36GB |
| [llama-7b.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [llama-7b.IQ3_S.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [llama-7b.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [llama-7b.IQ3_M.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [llama-7b.Q3_K.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.Q3_K.gguf) | Q3_K | 3.07GB |
| [llama-7b.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [llama-7b.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [llama-7b.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [llama-7b.Q4_0.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.Q4_0.gguf) | Q4_0 | 3.56GB |
| [llama-7b.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [llama-7b.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [llama-7b.Q4_K.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.Q4_K.gguf) | Q4_K | 3.8GB |
| [llama-7b.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [llama-7b.Q4_1.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.Q4_1.gguf) | Q4_1 | 3.95GB |
| [llama-7b.Q5_0.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.Q5_0.gguf) | Q5_0 | 4.33GB |
| [llama-7b.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [llama-7b.Q5_K.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.Q5_K.gguf) | Q5_K | 4.45GB |
| [llama-7b.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [llama-7b.Q5_1.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.Q5_1.gguf) | Q5_1 | 4.72GB |
| [llama-7b.Q6_K.gguf](https://huggingface.co/RichardErkhov/huggyllama_-_llama-7b-gguf/blob/main/llama-7b.Q6_K.gguf) | Q6_K | 5.15GB |

Original model description:

---
license: other
---

This contains the weights for the LLaMA-7b model. This model is under a non-commercial license (see the LICENSE file).
You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form) but have either lost your copy of the weights or run into trouble converting them to the Transformers format.
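To try one of the GGUF files from the table above, here is a minimal sketch using `huggingface_hub` and `llama-cpp-python` (neither is prescribed by this card; they are one common way to run GGUF models). The quant choice (Q4_K_M) and the generation parameters are illustrative assumptions, not recommendations.

```python
# Minimal sketch (assumed tooling, not part of the original card):
# download one quantized file from this repo and run a short completion.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch a single GGUF file; hf_hub_download caches it locally and
# returns the path on disk.
model_path = hf_hub_download(
    repo_id="RichardErkhov/huggyllama_-_llama-7b-gguf",
    filename="llama-7b.Q4_K_M.gguf",  # illustrative quant choice
)

# Load the model. LLaMA-7b is a base (non-chat) model, so plain
# completion-style prompts work best.
llm = Llama(model_path=model_path, n_ctx=2048)

output = llm("Building a website can be done in 10 simple steps:", max_tokens=64)
print(output["choices"][0]["text"])
```

Smaller quants (e.g. Q2_K) trade output quality for lower memory use, while larger ones (e.g. Q6_K) stay closer to the original weights; pick a file from the table that fits your hardware.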