SmolLM2-1.7B-Instruct-GGUF
Original Model
HuggingFaceTB/SmolLM2-1.7B-Instruct
Run with LlamaEdge
LlamaEdge version: v0.14.15 and above
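If WasmEdge and the LlamaEdge apps are not installed yet, the usual setup looks like the sketch below. The install script flags, release URLs, and the Q5_K_M file chosen here are illustrative assumptions; check the LlamaEdge documentation for your platform and pick whichever quantization you need.

```bash
# Install WasmEdge with the wasi-nn GGML plugin (script options may vary by release)
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- --plugins wasi_nn-ggml

# Download the LlamaEdge API server and chat apps
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-api-server.wasm
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-chat.wasm

# Download a quantized model file from this repo (Q5_K_M shown as an example)
curl -LO https://huggingface.co/second-state/SmolLM2-1.7B-Instruct-GGUF/resolve/main/SmolLM2-1.7B-Instruct-Q5_K_M.gguf
```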
Prompt template
Prompt type: chatml
Prompt string
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
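For example, a single-turn conversation rendered with this template (placeholder values filled in for illustration) looks like:

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What is the capital of France?<|im_end|>
<|im_start|>assistant
```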
Context size: 2048
Run as LlamaEdge service
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:SmolLM2-1.7B-Instruct-Q5_K_M.gguf \
  llama-api-server.wasm \
  --model-name SmolLM2-1.7B-Instruct \
  --prompt-template chatml \
  --ctx-size 2048
```
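With the service running, you can send OpenAI-compatible requests to its chat completions endpoint. The example below assumes the server listens on the default port 8080; adjust the host and port if you changed the listen address.

```bash
# Send a chat request to the LlamaEdge API server (default port assumed)
curl -X POST http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
        "model": "SmolLM2-1.7B-Instruct",
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "What is the capital of France?"}
        ]
      }'
```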
Run as LlamaEdge command app
```bash
wasmedge --dir .:. --nn-preload default:GGML:AUTO:SmolLM2-1.7B-Instruct-Q5_K_M.gguf \
  llama-chat.wasm \
  --prompt-template chatml \
  --ctx-size 2048
```
Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ------------ | ---- | ---- | -------- |
| SmolLM2-1.7B-Instruct-Q2_K.gguf | Q2_K | 2 | 675 MB | smallest, significant quality loss - not recommended for most purposes |
| SmolLM2-1.7B-Instruct-Q3_K_L.gguf | Q3_K_L | 3 | 933 MB | small, substantial quality loss |
| SmolLM2-1.7B-Instruct-Q3_K_M.gguf | Q3_K_M | 3 | 860 MB | very small, high quality loss |
| SmolLM2-1.7B-Instruct-Q3_K_S.gguf | Q3_K_S | 3 | 777 MB | very small, high quality loss |
| SmolLM2-1.7B-Instruct-Q4_0.gguf | Q4_0 | 4 | 991 MB | legacy; small, very high quality loss - prefer using Q3_K_M |
| SmolLM2-1.7B-Instruct-Q4_K_M.gguf | Q4_K_M | 4 | 1.06 GB | medium, balanced quality - recommended |
| SmolLM2-1.7B-Instruct-Q4_K_S.gguf | Q4_K_S | 4 | 999 MB | small, greater quality loss |
| SmolLM2-1.7B-Instruct-Q5_0.gguf | Q5_0 | 5 | 1.19 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| SmolLM2-1.7B-Instruct-Q5_K_M.gguf | Q5_K_M | 5 | 1.23 GB | large, very low quality loss - recommended |
| SmolLM2-1.7B-Instruct-Q5_K_S.gguf | Q5_K_S | 5 | 1.19 GB | large, low quality loss - recommended |
| SmolLM2-1.7B-Instruct-Q6_K.gguf | Q6_K | 6 | 1.41 GB | very large, extremely low quality loss |
| SmolLM2-1.7B-Instruct-Q8_0.gguf | Q8_0 | 8 | 1.82 GB | very large, extremely low quality loss - not recommended |
| SmolLM2-1.7B-Instruct-f16.gguf | f16 | 16 | 3.42 GB | |
Quantized with llama.cpp b4120