Part of the RoLlama2 GGUF collection: GGUF variants of https://huggingface.co/collections/OpenLLM-Ro/rollama2-664722bbf536ec14701ec81d
# RoLlama2-7b-Chat-IMat-GGUF

Llama.cpp imatrix quantization of OpenLLM-Ro/RoLlama2-7b-Chat

- Original Model: OpenLLM-Ro/RoLlama2-7b-Chat
- Original dtype: FP32 (float32)
- Quantized by: llama.cpp b2998
- IMatrix dataset: here
- Status: ✅ Available
- Link: here
### Common Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| RoLlama2-7b-Chat.Q8_0.gguf | Q8_0 | 7.16GB | ✅ Available | ⚪ No | 📦 No |
| RoLlama2-7b-Chat.Q6_K.gguf | Q6_K | 5.53GB | ✅ Available | ⚪ No | 📦 No |
| RoLlama2-7b-Chat.Q4_K.gguf | Q4_K | 4.08GB | ✅ Available | 🟢 Yes | 📦 No |
| RoLlama2-7b-Chat.Q3_K.gguf | Q3_K | 3.30GB | ✅ Available | 🟢 Yes | 📦 No |
| RoLlama2-7b-Chat.Q2_K.gguf | Q2_K | 2.53GB | ✅ Available | 🟢 Yes | 📦 No |
### All Quants

| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| RoLlama2-7b-Chat.FP16.gguf | F16 | 13.48GB | ✅ Available | ⚪ No | 📦 No |
| RoLlama2-7b-Chat.BF16.gguf | BF16 | 13.48GB | ✅ Available | ⚪ No | 📦 No |
| RoLlama2-7b-Chat.Q5_K.gguf | Q5_K | 4.78GB | ✅ Available | ⚪ No | 📦 No |
| RoLlama2-7b-Chat.Q5_K_S.gguf | Q5_K_S | 4.65GB | ✅ Available | ⚪ No | 📦 No |
| RoLlama2-7b-Chat.Q4_K_S.gguf | Q4_K_S | 3.86GB | ✅ Available | 🟢 Yes | 📦 No |
| RoLlama2-7b-Chat.Q3_K_L.gguf | Q3_K_L | 3.60GB | ✅ Available | 🟢 Yes | 📦 No |
| RoLlama2-7b-Chat.Q3_K_S.gguf | Q3_K_S | 2.95GB | ✅ Available | 🟢 Yes | 📦 No |
| RoLlama2-7b-Chat.Q2_K_S.gguf | Q2_K_S | 2.32GB | ✅ Available | 🟢 Yes | 📦 No |
| RoLlama2-7b-Chat.IQ4_NL.gguf | IQ4_NL | 3.83GB | ✅ Available | 🟢 Yes | 📦 No |
| RoLlama2-7b-Chat.IQ4_XS.gguf | IQ4_XS | 3.62GB | ✅ Available | 🟢 Yes | 📦 No |
| RoLlama2-7b-Chat.IQ3_M.gguf | IQ3_M | 3.11GB | ✅ Available | 🟢 Yes | 📦 No |
| RoLlama2-7b-Chat.IQ3_S.gguf | IQ3_S | 2.95GB | ✅ Available | 🟢 Yes | 📦 No |
| RoLlama2-7b-Chat.IQ3_XS.gguf | IQ3_XS | 2.80GB | ✅ Available | 🟢 Yes | 📦 No |
| RoLlama2-7b-Chat.IQ3_XXS.gguf | IQ3_XXS | 2.59GB | ✅ Available | 🟢 Yes | 📦 No |
| RoLlama2-7b-Chat.IQ2_M.gguf | IQ2_M | 2.36GB | ✅ Available | 🟢 Yes | 📦 No |
| RoLlama2-7b-Chat.IQ2_S.gguf | IQ2_S | 2.20GB | ✅ Available | 🟢 Yes | 📦 No |
| RoLlama2-7b-Chat.IQ2_XS.gguf | IQ2_XS | 2.03GB | ✅ Available | 🟢 Yes | 📦 No |
| RoLlama2-7b-Chat.IQ2_XXS.gguf | IQ2_XXS | 1.85GB | ✅ Available | 🟢 Yes | 📦 No |
| RoLlama2-7b-Chat.IQ1_M.gguf | IQ1_M | 1.65GB | ✅ Available | 🟢 Yes | 📦 No |
| RoLlama2-7b-Chat.IQ1_S.gguf | IQ1_S | 1.53GB | ✅ Available | 🟢 Yes | 📦 No |
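For reference, the quant type and other metadata listed above are embedded in each GGUF's header and can be inspected locally with the dump utility from the `gguf` Python package. A minimal sketch; the Q4_K filename is just an example from the table:

```sh
# Install the GGUF tooling and print the file's metadata and tensor summary.
pip install gguf
gguf-dump RoLlama2-7b-Chat.Q4_K.gguf
```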
## Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```sh
pip install -U "huggingface_hub[cli]"
```
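To sanity-check the install, any built-in command will do; a trivial check, nothing repo-specific:

```sh
# Verify the CLI is on PATH and responding.
huggingface-cli --help
```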
Then, you can target the specific file you want:
```sh
huggingface-cli download legraphista/RoLlama2-7b-Chat-IMat-GGUF --include "RoLlama2-7b-Chat.Q8_0.gguf" --local-dir ./
```
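`--include` also accepts glob patterns, so a whole family of quants can be pulled in one call. A sketch; the IQ2 pattern is just an example:

```sh
# Grab all IQ2-family quants at once; --include takes glob patterns.
huggingface-cli download legraphista/RoLlama2-7b-Chat-IMat-GGUF \
  --include "RoLlama2-7b-Chat.IQ2_*.gguf" --local-dir ./
```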
If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
```sh
huggingface-cli download legraphista/RoLlama2-7b-Chat-IMat-GGUF --include "RoLlama2-7b-Chat.Q8_0/*" --local-dir RoLlama2-7b-Chat.Q8_0
# see FAQ for merging GGUF's
```
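Once a file is downloaded, a quick smoke test with llama.cpp confirms it loads and generates. A minimal sketch, assuming a local llama.cpp build (the example binary was named `main` in builds around b2998 and `llama-cli` in later releases); the filename and prompt are illustrative:

```sh
# Smoke-test a downloaded quant with llama.cpp.
# -m: model file, -p: prompt, -n: number of tokens to generate.
./main -m RoLlama2-7b-Chat.Q8_0.gguf \
       -p "Salut! Cine ești?" \
       -n 128
```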
## FAQ

### Why is the IMatrix not applied everywhere?

According to this investigation, it appears that lower quantizations are the only ones that benefit from the imatrix input (as per HellaSwag results).
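For context, this is roughly the pipeline that produces an imatrix quant with llama.cpp's own tools. A sketch, assuming an F16 GGUF and a plain-text calibration file; `calibration.txt` is a placeholder, not the actual IMatrix dataset used for this repo:

```sh
# 1. Compute the importance matrix over the calibration text.
./imatrix -m RoLlama2-7b-Chat.FP16.gguf -f calibration.txt -o imatrix.dat

# 2. Quantize with the importance matrix applied (tool named `quantize`
#    around b2998, `llama-quantize` in newer releases).
./quantize --imatrix imatrix.dat RoLlama2-7b-Chat.FP16.gguf RoLlama2-7b-Chat.IQ2_M.gguf IQ2_M
```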
### How do I merge a split GGUF?

1. Make sure you have `gguf-split` available
   - To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
2. Locate your GGUF chunks folder (ex: `RoLlama2-7b-Chat.Q8_0`)
3. Run `gguf-split --merge RoLlama2-7b-Chat.Q8_0/RoLlama2-7b-Chat.Q8_0-00001-of-XXXXX.gguf RoLlama2-7b-Chat.Q8_0.gguf`
   - Make sure to point `gguf-split` to the first chunk of the split.

---

Got a suggestion? Ping me @legraphista!