---
tags:
- gguf
- llama.cpp
- quantized
- jsjeon/SummLlama3.2-3B-Q4_K_M-GGUF
license: apache-2.0
---

# jsjeon/SummLlama3.2-3B-Q4_K_M-GGUF
This model was converted to GGUF format from the original SummLlama3.2-3B model using llama.cpp via Convert Model to GGUF.
## Key Features

- Quantized to 4-bit (Q4_K_M) in GGUF format for reduced file size and memory usage
- Optimized for use with llama.cpp
- Compatible with llama-server for efficient serving
Refer to the original model card for more details on the base model.
## Usage with llama.cpp
1. Install llama.cpp:

```bash
brew install llama.cpp # For macOS/Linux
```
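
If Homebrew is not available, llama.cpp can also be built from source. A minimal sketch, assuming git, cmake, and a C/C++ toolchain are installed:

```bash
# Build llama.cpp from source; the binaries (llama-cli, llama-server)
# end up in build/bin
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
```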
2. Run Inference:

CLI:

```bash
llama-cli --hf-repo jsjeon/SummLlama3.2-3B-Q4_K_M-GGUF --hf-file SummLlama3.2-3B-Q4_K_M-GGUF-4bit.gguf -p "Your prompt here"
```
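
Since this is a summarization model, a typical invocation embeds the text to summarize directly in the prompt. A minimal sketch, where `article.txt` is a hypothetical local file:

```bash
# Summarize the contents of a local file
# (article.txt is a placeholder name, not part of the repo)
llama-cli --hf-repo jsjeon/SummLlama3.2-3B-Q4_K_M-GGUF \
  --hf-file SummLlama3.2-3B-Q4_K_M-GGUF-4bit.gguf \
  -p "Summarize the following document: $(cat article.txt)"
```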
Server:

```bash
llama-server --hf-repo jsjeon/SummLlama3.2-3B-Q4_K_M-GGUF --hf-file SummLlama3.2-3B-Q4_K_M-GGUF-4bit.gguf -c 2048
```
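
Once running, llama-server exposes an OpenAI-compatible HTTP API. A minimal sketch of a chat-completion request, assuming the server's default bind address of 127.0.0.1:8080:

```bash
# Send a summarization request to the running llama-server
# (127.0.0.1:8080 is the default; adjust if you passed --host/--port)
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Summarize the following document: ..."}
        ]
      }'
```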
For more advanced usage, refer to the llama.cpp repository.