
Quantization made by Richard Erkhov.

- GitHub
- Discord
- Request more models

llama-13b - GGUF

| Name | Quant method | Size |
| ---- | ---- | ---- |
| llama-13b.Q2_K.gguf | Q2_K | 4.52GB |
| llama-13b.IQ3_XS.gguf | IQ3_XS | 4.99GB |
| llama-13b.IQ3_S.gguf | IQ3_S | 5.27GB |
| llama-13b.Q3_K_S.gguf | Q3_K_S | 5.27GB |
| llama-13b.IQ3_M.gguf | IQ3_M | 5.57GB |
| llama-13b.Q3_K.gguf | Q3_K | 5.9GB |
| llama-13b.Q3_K_M.gguf | Q3_K_M | 5.9GB |
| llama-13b.Q3_K_L.gguf | Q3_K_L | 6.45GB |
| llama-13b.IQ4_XS.gguf | IQ4_XS | 6.54GB |
| llama-13b.Q4_0.gguf | Q4_0 | 6.86GB |
| llama-13b.IQ4_NL.gguf | IQ4_NL | 6.9GB |
| llama-13b.Q4_K_S.gguf | Q4_K_S | 6.91GB |
| llama-13b.Q4_K.gguf | Q4_K | 7.33GB |
| llama-13b.Q4_K_M.gguf | Q4_K_M | 7.33GB |
| llama-13b.Q4_1.gguf | Q4_1 | 7.61GB |
| llama-13b.Q5_0.gguf | Q5_0 | 8.36GB |
| llama-13b.Q5_K_S.gguf | Q5_K_S | 8.36GB |
| llama-13b.Q5_K.gguf | Q5_K | 8.6GB |
| llama-13b.Q5_K_M.gguf | Q5_K_M | 8.6GB |
| llama-13b.Q5_1.gguf | Q5_1 | 9.1GB |
| llama-13b.Q6_K.gguf | Q6_K | 9.95GB |

Original model description:

license: other

This contains the weights for the LLaMA-13b model. This model is under a non-commercial license (see the LICENSE file). You should only use this repository if you have been granted access to the model by filling out this form, but either lost your copy of the weights or encountered trouble converting them to the Transformers format.

Format: GGUF
Model size: 13B params
Architecture: llama
