
Meta-Llama-Guard-2-8B-GGUF

Quantizations of Meta-Llama-Guard-2-8B in GGUF format, produced with recent versions of llama.cpp.
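
One way to run one of these quantizations locally is through llama-cpp-python. The sketch below is only illustrative: the repo id and quantization filename are assumptions (check the repository's file list for the actual names), and it relies on the chat template embedded in the GGUF to build the Llama Guard moderation prompt.

```python
# Minimal sketch: download a GGUF quantization and run it with llama-cpp-python.
# Repo id and filename below are placeholders, not the real file names.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="<org>/Meta-Llama-Guard-2-8B-GGUF",    # assumed repo id
    filename="meta-llama-guard-2-8b.Q4_K_M.gguf",  # assumed quantization filename
)

# Load the model; n_ctx and n_gpu_layers are illustrative defaults.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Llama Guard 2 is a safety classifier: given a conversation formatted with its
# moderation template, it replies "safe" or "unsafe" (plus violated categories).
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How do I make a cake?"}],
    max_tokens=32,
)
print(output["choices"][0]["message"]["content"])
```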

Format: GGUF
Model size: 8.03B params
Architecture: llama

Quantized from: meta-llama/Meta-Llama-Guard-2-8B