# tolgadev/llama-2-7b-ruyallm - GGUF
This repo contains GGUF format model files for tolgadev/llama-2-7b-ruyallm.
The files were quantized using machines provided by TensorBlock, and they are compatible with llama.cpp as of commit b4011.
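These files are meant to be loaded with llama.cpp or tools built on it. As a minimal sketch, the snippet below loads one of the files with the optional llama-cpp-python bindings (an alternative to the llama.cpp CLI, not part of this repo's instructions); the filename, prompt, and generation settings are illustrative only.

```python
# Minimal sketch: loading a downloaded GGUF file with the optional
# llama-cpp-python bindings. Filename and settings are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b-ruyallm-Q4_K_M.gguf",  # any file from the table below
    n_ctx=2048,  # context window; adjust to your hardware
)

output = llm(
    "Hello, how are you?",  # illustrative prompt; this card does not specify a template
    max_tokens=128,
)
print(output["choices"][0]["text"])
```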
## Prompt template

No prompt template is specified for this model.
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| llama-2-7b-ruyallm-Q2_K.gguf | Q2_K | 2.413 GB | smallest, significant quality loss - not recommended for most purposes |
| llama-2-7b-ruyallm-Q3_K_S.gguf | Q3_K_S | 2.804 GB | very small, high quality loss |
| llama-2-7b-ruyallm-Q3_K_M.gguf | Q3_K_M | 3.130 GB | very small, high quality loss |
| llama-2-7b-ruyallm-Q3_K_L.gguf | Q3_K_L | 3.409 GB | small, substantial quality loss |
| llama-2-7b-ruyallm-Q4_0.gguf | Q4_0 | 3.628 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| llama-2-7b-ruyallm-Q4_K_S.gguf | Q4_K_S | 3.657 GB | small, greater quality loss |
| llama-2-7b-ruyallm-Q4_K_M.gguf | Q4_K_M | 3.865 GB | medium, balanced quality - recommended |
| llama-2-7b-ruyallm-Q5_0.gguf | Q5_0 | 4.403 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| llama-2-7b-ruyallm-Q5_K_S.gguf | Q5_K_S | 4.403 GB | large, low quality loss - recommended |
| llama-2-7b-ruyallm-Q5_K_M.gguf | Q5_K_M | 4.525 GB | large, very low quality loss - recommended |
| llama-2-7b-ruyallm-Q6_K.gguf | Q6_K | 5.226 GB | very large, extremely low quality loss |
| llama-2-7b-ruyallm-Q8_0.gguf | Q8_0 | 6.769 GB | very large, extremely low quality loss - not recommended |
## Downloading instructions

### Command line
First, install the Hugging Face Hub CLI:

```shell
pip install -U "huggingface_hub[cli]"
```
Then, download an individual model file to a local directory:

```shell
huggingface-cli download tensorblock/llama-2-7b-ruyallm-GGUF --include "llama-2-7b-ruyallm-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
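If you prefer to script the download, a hedged Python equivalent of the command above uses the same huggingface_hub library installed in the previous step (MY_LOCAL_DIR is the same placeholder as in the CLI example):

```python
# Python equivalent of the single-file CLI download above.
from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="tensorblock/llama-2-7b-ruyallm-GGUF",
    filename="llama-2-7b-ruyallm-Q2_K.gguf",
    local_dir="MY_LOCAL_DIR",  # placeholder directory
)
```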
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:

```shell
huggingface-cli download tensorblock/llama-2-7b-ruyallm-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
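A hedged Python equivalent of this pattern-based download is `snapshot_download`, which accepts the same glob via `allow_patterns`:

```python
# Python equivalent of the pattern-based CLI download above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="tensorblock/llama-2-7b-ruyallm-GGUF",
    local_dir="MY_LOCAL_DIR",       # placeholder directory
    allow_patterns=["*Q4_K*gguf"],  # same glob as the --include pattern
)
```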