
Quantization made by Richard Erkhov.

Github

Discord

Request more models

llama-7b — GGUF

| Name | Quant method | Size |
|------|--------------|------|
| llama-7b.Q2_K.gguf | Q2_K | 2.36GB |
| llama-7b.IQ3_XS.gguf | IQ3_XS | 2.6GB |
| llama-7b.IQ3_S.gguf | IQ3_S | 2.75GB |
| llama-7b.Q3_K_S.gguf | Q3_K_S | 2.75GB |
| llama-7b.IQ3_M.gguf | IQ3_M | 2.9GB |
| llama-7b.Q3_K.gguf | Q3_K | 3.07GB |
| llama-7b.Q3_K_M.gguf | Q3_K_M | 3.07GB |
| llama-7b.Q3_K_L.gguf | Q3_K_L | 3.35GB |
| llama-7b.IQ4_XS.gguf | IQ4_XS | 3.4GB |
| llama-7b.Q4_0.gguf | Q4_0 | 3.56GB |
| llama-7b.IQ4_NL.gguf | IQ4_NL | 3.58GB |
| llama-7b.Q4_K_S.gguf | Q4_K_S | 3.59GB |
| llama-7b.Q4_K.gguf | Q4_K | 3.8GB |
| llama-7b.Q4_K_M.gguf | Q4_K_M | 3.8GB |
| llama-7b.Q4_1.gguf | Q4_1 | 3.95GB |
| llama-7b.Q5_0.gguf | Q5_0 | 4.33GB |
| llama-7b.Q5_K_S.gguf | Q5_K_S | 4.33GB |
| llama-7b.Q5_K.gguf | Q5_K | 4.45GB |
| llama-7b.Q5_K_M.gguf | Q5_K_M | 4.45GB |
| llama-7b.Q5_1.gguf | Q5_1 | 4.72GB |
| llama-7b.Q6_K.gguf | Q6_K | 5.15GB |
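A common rule of thumb when choosing among these files is to take the largest quantization that fits your available RAM or VRAM, leaving some headroom for the context buffer. The sketch below illustrates that selection; the sizes are copied from the table above, and the helper function is purely illustrative (not part of llama.cpp or any official tooling):

```python
# Illustrative helper: pick the largest quantization that fits a memory budget.
# Sizes in GB are taken from the table above; duplicate aliases
# (Q3_K = Q3_K_M, Q4_K = Q4_K_M, Q5_K = Q5_K_M) are listed once.
QUANT_SIZES_GB = {
    "Q2_K": 2.36, "IQ3_XS": 2.6, "IQ3_S": 2.75, "Q3_K_S": 2.75,
    "IQ3_M": 2.9, "Q3_K_M": 3.07, "Q3_K_L": 3.35, "IQ4_XS": 3.4,
    "Q4_0": 3.56, "IQ4_NL": 3.58, "Q4_K_S": 3.59, "Q4_K_M": 3.8,
    "Q4_1": 3.95, "Q5_0": 4.33, "Q5_K_S": 4.33, "Q5_K_M": 4.45,
    "Q5_1": 4.72, "Q6_K": 5.15,
}

def best_quant(budget_gb: float):
    """Return the quant whose file is largest while still within the budget,
    or None if even the smallest file does not fit."""
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget_gb}
    if not fitting:
        return None
    return max(fitting, key=fitting.get)

print(best_quant(4.0))  # largest file at or under 4 GB -> Q4_1 (3.95GB)
```

Note that file size is only a lower bound on memory use: the KV cache and compute buffers add overhead on top of the weights, so budgeting a gigabyte or two of slack is prudent.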

Original model description:

license: other

This repository contains the weights for the LLaMA-7b model. The model is under a non-commercial license (see the LICENSE file). Use this repository only if you have been granted access to the model by filling out this form but have either lost your copy of the weights or run into trouble converting them to the Transformers format.

Format: GGUF
Model size: 6.74B params
Architecture: llama