
Model Card for LlamaGuard-7b-GGUF

This is a quantized version of meta-llama/LlamaGuard-7b. Two quantization methods were used:

  • Q5_K_M: 5-bit quantization; preserves most of the model's performance
  • Q4_K_M: 4-bit quantization; smaller footprint and lower memory use
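As a rough illustration of the footprint difference between the two variants, file size scales with the average bits stored per weight. A minimal sketch; the bits-per-weight figures below are approximate assumptions (real GGUF files mix tensor precisions and include metadata, so actual sizes vary):

```python
def approx_gguf_size_gib(n_params: float, bits_per_weight: float) -> float:
    """Rough file size in GiB: parameters x average bits per weight / 8 bytes."""
    return n_params * bits_per_weight / 8 / 2**30

# Assumed average bits per weight for the two schemes (approximate values).
q5_k_m = approx_gguf_size_gib(6.74e9, 5.7)  # ~4.5 GiB
q4_k_m = approx_gguf_size_gib(6.74e9, 4.9)  # ~3.8 GiB
print(f"Q5_K_M ~ {q5_k_m:.1f} GiB, Q4_K_M ~ {q4_k_m:.1f} GiB")
```

Under these assumptions the Q4_K_M file saves roughly 0.6–0.7 GiB of disk and RAM relative to Q5_K_M, at some cost in output quality.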

Model Details

Model Description

Refer to [Meta's official model card](https://huggingface.co/meta-llama/LlamaGuard-7b) for details.

Format: GGUF
Model size: 6.74B params
Architecture: llama