
Quantization made by Richard Erkhov.


llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge - GGUF

| Name | Quant method | Size |
|------|--------------|------|
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.Q2_K.gguf | Q2_K | 4.6GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.IQ3_XS.gguf | IQ3_XS | 5.08GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.IQ3_S.gguf | IQ3_S | 5.36GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.Q3_K_S.gguf | Q3_K_S | 5.36GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.IQ3_M.gguf | IQ3_M | 5.66GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.Q3_K.gguf | Q3_K | 5.99GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.Q3_K_M.gguf | Q3_K_M | 5.99GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.Q3_K_L.gguf | Q3_K_L | 6.54GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.IQ4_XS.gguf | IQ4_XS | 6.63GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.Q4_0.gguf | Q4_0 | 6.95GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.IQ4_NL.gguf | IQ4_NL | 7.0GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.Q4_K_S.gguf | Q4_K_S | 7.01GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.Q4_K.gguf | Q4_K | 7.42GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.Q4_K_M.gguf | Q4_K_M | 7.42GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.Q4_1.gguf | Q4_1 | 7.71GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.Q5_0.gguf | Q5_0 | 8.46GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.Q5_K_S.gguf | Q5_K_S | 8.46GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.Q5_K.gguf | Q5_K | 8.7GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.Q5_K_M.gguf | Q5_K_M | 8.7GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.Q5_1.gguf | Q5_1 | 9.21GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.Q6_K.gguf | Q6_K | 10.06GB |
| llama-2-koen-13b-gangkk-alpaca-cosine-all-epoch3-merge.Q8_0.gguf | Q8_0 | 13.03GB |
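As a rule of thumb, you want the largest quantization whose file fits in your available RAM (or VRAM) with some headroom for the KV cache and runtime overhead. A minimal sketch of that selection, using the file sizes from the table above (the headroom value is an illustrative assumption, not a measured requirement):

```python
# File sizes in GB, taken from the quantization table above.
QUANTS = {
    "Q2_K": 4.6, "IQ3_XS": 5.08, "IQ3_S": 5.36, "Q3_K_S": 5.36,
    "IQ3_M": 5.66, "Q3_K": 5.99, "Q3_K_M": 5.99, "Q3_K_L": 6.54,
    "IQ4_XS": 6.63, "Q4_0": 6.95, "IQ4_NL": 7.0, "Q4_K_S": 7.01,
    "Q4_K": 7.42, "Q4_K_M": 7.42, "Q4_1": 7.71, "Q5_0": 8.46,
    "Q5_K_S": 8.46, "Q5_K": 8.7, "Q5_K_M": 8.7, "Q5_1": 9.21,
    "Q6_K": 10.06, "Q8_0": 13.03,
}

def best_quant(ram_gb: float, headroom_gb: float = 1.0):
    """Return the largest quant whose file fits in ram_gb minus headroom,
    or None if even Q2_K does not fit. headroom_gb is a rough allowance
    for KV cache and runtime overhead (an assumed default)."""
    fitting = {q: s for q, s in QUANTS.items() if s <= ram_gb - headroom_gb}
    return max(fitting, key=fitting.get) if fitting else None

print(best_quant(8))    # -> IQ4_NL (7.0GB fits an 8GB budget with 1GB headroom)
print(best_quant(16))   # -> Q8_0
```

This is only a sizing heuristic; actual memory use also depends on context length and the backend you load the file with.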

Original model description: Entry not found

Downloads last month: 131
Format: GGUF
Model size: 13.2B params
Architecture: llama

