Quantized version of meta-llama/LlamaGuard-7b

## Model Description

The model meta-llama/LlamaGuard-7b was quantized to 4-bit with group size 128 and act-order (`desc_act=True`), using the AutoGPTQ integration in transformers (https://huggingface.co/blog/gptq-integration).
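As a rough sketch, the quantization settings described above map onto the transformers GPTQ API as follows. The calibration dataset (`"c4"`) is an assumption not stated in this card, and running this requires a CUDA GPU with the auto-gptq package installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig

model_id = "meta-llama/LlamaGuard-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# 4-bit, group_size 128, act-order (desc_act=True) -- the settings stated in this card.
# The calibration dataset ("c4") is an assumption, not confirmed by the card.
quantization_config = GPTQConfig(
    bits=4,
    group_size=128,
    desc_act=True,
    dataset="c4",
    tokenizer=tokenizer,
)

# Quantizes the weights on load; needs a CUDA GPU and auto-gptq installed.
quantized_model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=quantization_config,
)
```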

## Evaluation

To compare the quantized model with the full-precision model, I performed binary classification on the "toxicity" label over the ~5k-sample test split of lmsys/toxic-chat.

📊 Full-precision model: average precision score = 0.3625

📊 4-bit quantized model: average precision score = 0.3450
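For reference, the average precision metric reported above can be sketched in pure Python. This is an illustrative reimplementation (equivalent to scikit-learn's `average_precision_score` when there are no score ties), not the exact evaluation script, and the example labels/scores are hypothetical:

```python
def average_precision(y_true, y_score):
    """Mean of precision at the rank of each positive example,
    with items ranked by descending score (score ties not handled)."""
    ranked = sorted(zip(y_score, y_true), key=lambda pair: -pair[0])
    true_positives = 0
    precisions = []
    for rank, (_, label) in enumerate(ranked, start=1):
        if label == 1:
            true_positives += 1
            precisions.append(true_positives / rank)
    return sum(precisions) / max(true_positives, 1)

# Toy example (hypothetical labels/scores, not toxic-chat data):
# two positives ranked 1st and 3rd -> (1/1 + 2/3) / 2 = 0.8333...
print(round(average_precision([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.6]), 4))
```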
