---
tags:
- gguf
- llama.cpp
- quantized
- jsjeon/SummLlama3.2-3B-Q4_K_M-GGUF
license: apache-2.0
---

# jsjeon/SummLlama3.2-3B-Q4_K_M-GGUF

This model was converted to GGUF format from [`DISLab/SummLlama3.2-3B`](https://huggingface.co/DISLab/SummLlama3.2-3B) using llama.cpp via [Convert Model to GGUF](https://github.com/ruslanmv/convert-model-to-GGUF).

**Key Features:**

* Quantized to Q4_K_M for reduced file size and memory footprint (GGUF format)
* Optimized for use with llama.cpp
* Compatible with llama-server for efficient serving

Refer to the [original model card](https://huggingface.co/DISLab/SummLlama3.2-3B) for more details on the base model.

## Usage with llama.cpp

**1. Install llama.cpp:**

```bash
brew install llama.cpp  # Homebrew, on macOS or Linux
```

**2. Run Inference:**

**CLI:**

```bash
llama-cli --hf-repo jsjeon/SummLlama3.2-3B-Q4_K_M-GGUF \
  --hf-file SummLlama3.2-3B-Q4_K_M-GGUF-4bit.gguf \
  -p "Your prompt here"
```

**Server:**

```bash
llama-server --hf-repo jsjeon/SummLlama3.2-3B-Q4_K_M-GGUF \
  --hf-file SummLlama3.2-3B-Q4_K_M-GGUF-4bit.gguf \
  -c 2048
```

For more advanced usage, refer to the [llama.cpp repository](https://github.com/ggerganov/llama.cpp). A few illustrative examples follow.
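
## Installing from Source

If Homebrew isn't available on your platform, llama.cpp can also be built from source. This is a minimal sketch of the CMake workflow documented in the llama.cpp README; adjust the CMake flags (e.g. for GPU backends) to your hardware:

```bash
# Clone the repository and build with CMake (upstream's documented workflow)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
# The llama-cli and llama-server binaries are placed under build/bin/
```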
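
## Example: Summarizing a Document with the CLI

Since SummLlama is a summarization model, a typical prompt wraps the document to be summarized. The sketch below assumes a hypothetical input file `article.txt` and reuses the repo and file names from above; `-n` caps the number of generated tokens:

```bash
# article.txt is a placeholder: substitute the document you want summarized
PROMPT="Please summarize the following text:
$(cat article.txt)"

llama-cli --hf-repo jsjeon/SummLlama3.2-3B-Q4_K_M-GGUF \
  --hf-file SummLlama3.2-3B-Q4_K_M-GGUF-4bit.gguf \
  -p "$PROMPT" \
  -n 256
```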
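
## Example: Querying the Server

llama-server exposes an OpenAI-compatible HTTP API. Assuming the server started above is running at its default address (`http://localhost:8080`), a chat-completion request looks like this sketch:

```bash
# Send a chat-completion request to the local llama-server instance
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Summarize: <paste your text here>"}
    ],
    "max_tokens": 256
  }'
```

The response is returned as JSON in the OpenAI chat-completion format.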