Model Information
Llama Guard 3-1B is a content safety classification model fine-tuned from the Llama-3.2-1B pretrained model. Similar to previous versions, it can be used to classify content in both LLM inputs (prompt classification) and LLM responses (response classification). It acts as an LLM: it generates text indicating whether a given prompt or response is safe or unsafe, and if unsafe, it also lists the content categories violated.
Llama Guard 3-1B was aligned to safeguard against the MLCommons standardized hazards taxonomy and designed to lower the deployment cost of moderation safeguards compared to its predecessors. It comes in two versions: 1B, and 1B pruned and quantized, optimized for deployment on mobile devices.
Get started
This repository contains two versions of Llama-Guard-3-1B, for use with transformers and with the original llama
codebase. Once you have access to the model weights, follow the appropriate section.
Use with transformers
Starting with transformers >= 4.43.0, you can run inference to evaluate the last user or assistant turn in a multi-turn conversation.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "meta-llama/Llama-Guard-3-1B"

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# The tokenizer is needed to apply the chat template and to decode the output.
tokenizer = AutoTokenizer.from_pretrained(model_id)

conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "What is the recipe for mayonnaise?"
            },
        ],
    }
]

input_ids = tokenizer.apply_chat_template(
    conversation, return_tensors="pt"
).to(model.device)

prompt_len = input_ids.shape[1]
output = model.generate(
    input_ids,
    max_new_tokens=20,
    pad_token_id=0,
)
generated_tokens = output[:, prompt_len:]
print(tokenizer.decode(generated_tokens[0]))
This snippet will use the categories described in this model card. You can provide your own categories instead:
input_ids = tokenizer.apply_chat_template(
    conversation,
    return_tensors="pt",
    categories={
        "S1": "My custom category",
    },
).to(model.device)
Or you can exclude categories from the default list by specifying an array of category keys to exclude:
input_ids = tokenizer.apply_chat_template(
    conversation,
    return_tensors="pt",
    excluded_category_keys=["S6"],
).to(model.device)
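The generated text indicates whether the last turn is safe; for unsafe content it also lists the violated category codes (e.g., S1) on the following line. Below is a minimal parsing sketch; the helper name and the returned structure are illustrative and not part of the transformers API.

def parse_guard_output(text: str) -> dict:
    # Assumes the output format "safe", or "unsafe" followed by violated
    # category codes such as "S1" on the next line (comma-separated if several).
    lines = [line.strip() for line in text.strip().splitlines() if line.strip()]
    if not lines or lines[0].lower().startswith("safe"):
        return {"safe": True, "categories": []}
    categories = []
    for line in lines[1:]:
        categories.extend(code.strip() for code in line.split(","))
    return {"safe": False, "categories": categories}

print(parse_guard_output("unsafe\nS1"))  # {'safe': False, 'categories': ['S1']}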
Use with llama
Please refer to our documentation to get started.
To download the original checkpoints, you can use huggingface-cli as follows:
huggingface-cli download meta-llama/Llama-Guard-3-1B --include "original/*" --local-dir Llama-Guard-3-1B
Fine-tuning
You can also fine-tune Llama Guard for your use case: Llama Guard 3 Customization: Taxonomy Customization, Zero/Few-shot Prompting, Evaluation and Fine-Tuning
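For illustration, a single supervised fine-tuning example can be built by rendering a conversation with the Llama Guard chat template and pairing it with the target label text the model should generate. The example prompt, label, and loss masking below are assumptions for a minimal sketch; refer to the customization guide above for the exact recipe.

import torch
from transformers import AutoTokenizer

model_id = "meta-llama/Llama-Guard-3-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Hypothetical training conversation and its safety label.
conversation = [
    {"role": "user", "content": [{"type": "text", "text": "How do I hotwire a car?"}]},
]
target_text = "unsafe\nS2"  # assumed label format: verdict plus violated category code

# Render the prompt exactly as it will appear at inference time.
prompt_ids = tokenizer.apply_chat_template(conversation, return_tensors="pt")
target_ids = tokenizer(target_text, add_special_tokens=False, return_tensors="pt").input_ids

# Concatenate prompt and label tokens; mask the prompt out of the loss so that
# only the label text contributes to training.
input_ids = torch.cat([prompt_ids, target_ids], dim=1)
labels = input_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100  # -100 is ignored by the cross-entropy loss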
Hazard Taxonomy and Policy
The model is trained to predict safety labels on the 13 categories shown below, based on the MLCommons taxonomy of 13 hazards.
| | |
| --- | --- |
| S1: Violent Crimes | S2: Non-Violent Crimes |
| S3: Sex-Related Crimes | S4: Child Sexual Exploitation |
| S5: Defamation | S6: Specialized Advice |
| S7: Privacy | S8: Intellectual Property |
| S9: Indiscriminate Weapons | S10: Hate |
| S11: Suicide & Self-Harm | S12: Sexual Content |
| S13: Elections | |
S1 - Violent Crimes
Responses that enable, encourage, or endorse the commission of violent crimes, including: (1) unlawful violence toward people (ex: terrorism, genocide, murder, hate-motivated violent crimes, child abuse, assault, battery, kidnapping) and (2) unlawful violence toward animals (ex: animal abuse)
S2 - Non-Violent Crimes
Responses that enable, encourage, or endorse the commission of non-violent crimes, including: (1) personal crimes (ex: labor trafficking, threats, intimidation, hate-motivated non-violent crimes), (2) financial crimes (ex: fraud, scams, money laundering), (3) property crimes (ex: theft, arson, vandalism), (4) drug crimes (ex: creating or using regulated substances), (5) weapons crimes (ex: producing unlicensed firearms), and (6) cyber crimes (ex: hacking)
S3 - Sex-Related Crimes
Responses that enable, encourage, or endorse the commission of sex-related crimes, including: (1) sex trafficking, (2) sexual assault (ex: rape), (3) sexual harassment (physical, verbal, or visual), and (4) prostitution
S4 - Child Sexual Exploitation
Responses that contain, describe, enable, encourage, or endorse the sexual abuse of children
S5 - Defamation
Responses that are both verifiably false and likely to injure a living person’s reputation
S6 - Specialized Advice
Responses that contain specialized financial, medical, or legal advice, or that indicate dangerous activities or objects are safe
S7 - Privacy
Responses that contain sensitive, nonpublic personal information that could undermine someone’s physical, digital, or financial security
S8 - Intellectual Property
Responses that may violate the intellectual property rights of any third party
S9 - Indiscriminate Weapons
Responses that enable, encourage, or endorse the creation of indiscriminate weapons, including: (1) chemical weapons (ex: nerve gas), (2) biological weapons (ex: anthrax), (3) radiological weapons (ex: cobalt bombs), (4) nuclear weapons (ex: fission bombs), and (5) high-yield explosive weapons (ex: cluster munitions)
S10 - Hate
Responses that demean or dehumanize people on the basis of their sensitive, personal characteristics (i.e., race, color, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity, and/or serious disease)
S11 - Suicide & Self-Harm
Responses that enable, encourage, or endorse acts of intentional self-harm, including: (1) suicide, (2) self-injury (ex: cutting), and (3) disordered eating
S12 - Sexual Content
Responses that contain erotica
S13 - Elections
Responses that contain factually incorrect information about electoral systems and processes, including in the time, place, or manner of voting in civic elections
Supported languages
Llama Guard 3-1B supports content safety for the following languages: English, French, German, Hindi, Italian, Portuguese, Spanish, Thai.
Training Data
We use the English data used by Llama Guard [1], which are obtained by collecting Llama 2 and Llama 3 generations on prompts from the hh-rlhf dataset [2]. In order to scale training data for multilingual capability, we collect additional human and synthetically generated data. Similar to the English data, the multilingual data are Human-AI conversation data that are either single-turn or multi-turn. To reduce the model’s false positive rate, we curate a set of multilingual benign prompt and response data covering prompts that LLMs are likely to reject.
Pruning
To reduce the number of model parameters, we prune the model along two dimensions: the number of decoder layers and the MLP hidden dimension. The methodology is quite similar to [5] and proceeds in three stages: 1) pruning metric calibration; 2) model pruning; 3) finetuning the pruned model. During calibration, we collect pruning metric statistics by passing ~1k batches of inputs through the model. We use the block importance metric [6] to prune decoder layers, and the average L2 norm of MLP hidden-neuron activations to prune the MLP hidden dimension. After calibrating the pruning metrics, we prune the model to 12 layers and an MLP hidden dimension of 6400, such that the pruned model has 1123 million parameters. Finally, we finetune the pruned model on the training data.
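As a rough illustration of the two pruning metrics, the sketch below scores a decoder layer by how much it changes its input (one reading of the block importance metric [6]) and scores MLP hidden neurons by the average L2 norm of their activations; the exact formulations used in training may differ.

import torch
import torch.nn.functional as F

def block_importance(layer_in: torch.Tensor, layer_out: torch.Tensor) -> float:
    # layer_in / layer_out: hidden states of shape (batch, seq_len, hidden_dim)
    # captured before and after one decoder layer during calibration.
    cos = F.cosine_similarity(layer_in, layer_out, dim=-1)
    return float(1.0 - cos.mean())  # lower score => more redundant layer

def mlp_neuron_scores(hidden_acts: torch.Tensor) -> torch.Tensor:
    # hidden_acts: intermediate MLP activations of shape (num_tokens, mlp_hidden_dim).
    # Per-neuron root-mean-square activation as a proxy for the average L2 norm.
    return hidden_acts.pow(2).mean(dim=0).sqrt()

# Layers (or neurons) with the lowest scores are pruned until the target
# size (12 layers, MLP hidden dimension of 6400) is reached.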
Distillation
Building on a similar approach in [5], we employ Llama Guard 3-8B as a teacher model to fine-tune the pruned model through logit-level distillation during supervised training. We observe that simply incorporating logit-level distillation significantly enhances the model's ability to learn safe and unsafe patterns, as well as the distribution of unsafe reasoning, from the 8B teacher. Consequently, the final result shows substantial improvement after applying logit-level fine-tuning.
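A minimal sketch of a logit-level distillation loss is shown below, assuming aligned teacher and student logits over a shared vocabulary; the mixing weight and temperature are illustrative, and prompt positions are not masked for brevity.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, temperature=2.0):
    # Supervised loss on the target label tokens (-100 positions are ignored).
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )
    # KL divergence between softened teacher and student token distributions.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * ce + (1.0 - alpha) * kd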
Output Layer Pruning
The Llama Guard model has an output vocabulary of 128k tokens, of which only 20 (e.g., safe, unsafe, S, 1, ...) are ever generated. By keeping the connections in the output linear layer that correspond to those 20 tokens and pruning the remaining connections, we can reduce the output layer size significantly without affecting the model outputs. Output layer pruning reduces the layer from 262.6M parameters (2048x128k) to 40.96k parameters (2048x20), a saving of 131.3MB with 4-bit quantized weights. Although the pruned output layer only produces scores for 20 tokens, these are expanded back to the original 128k outputs inside the model.
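The sketch below illustrates the idea, assuming kept_token_ids holds the vocabulary ids of the ~20 tokens the classifier ever emits; the actual implementation inside the released model may differ.

import torch

def prune_output_layer(lm_head: torch.nn.Linear, kept_token_ids: list) -> torch.nn.Linear:
    # Keep only the rows of the output projection for the tokens that are ever generated.
    pruned = torch.nn.Linear(lm_head.in_features, len(kept_token_ids), bias=lm_head.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(lm_head.weight[kept_token_ids])
        if lm_head.bias is not None:
            pruned.bias.copy_(lm_head.bias[kept_token_ids])
    return pruned

def expand_to_vocab(small_ids: torch.Tensor, kept_token_ids: list) -> torch.Tensor:
    # Map indices in the pruned 20-token space back to the original vocabulary ids,
    # so downstream code still sees tokens from the full 128k vocabulary.
    return torch.tensor(kept_token_ids, device=small_ids.device)[small_ids]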
Evaluation
Note on evaluations: As discussed in the original Llama Guard paper, comparing model performance is not straightforward as each model is built on its own policy and is expected to perform better on an evaluation dataset with a policy aligned to the model. This highlights the need for industry standards. By aligning the Llama Guard family of models with the Proof of Concept MLCommons taxonomy of hazards, we hope to drive adoption of industry standards like this and facilitate collaboration and transparency in the LLM safety and content evaluation space.
We evaluate the performance of the Llama Guard 1B models on the MLCommons hazard taxonomy and compare it across languages with Llama Guard 3-8B on our internal test set. We also include GPT4 as a baseline, with zero-shot prompting using the MLCommons hazard taxonomy.
| Model | English | French | German | Italian | Spanish | Portuguese | Hindi | Vietnamese | Indonesian | Thai | XSTest |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Llama Guard 3-8B | 0.939/0.040 | 0.943/0.036 | 0.877/0.032 | 0.873/0.038 | 0.875/0.023 | 0.860/0.060 | 0.871/0.050 | 0.890/0.034 | 0.915/0.048 | 0.834/0.030 | 0.884/0.044 |
| Llama Guard 3-1B | 0.899/0.090 | 0.939/0.012 | 0.845/0.036 | 0.897/0.111 | 0.837/0.083 | 0.763/0.114 | 0.680/0.057 | 0.723/0.130 | 0.875/0.083 | 0.749/0.078 | 0.821/0.068 |
| Llama Guard 3-1B-INT4 | 0.904/0.084 | 0.873/0.072 | 0.835/0.145 | 0.897/0.111 | 0.852/0.104 | 0.830/0.109 | 0.564/0.114 | 0.792/0.171 | 0.833/0.121 | 0.831/0.114 | 0.737/0.152 |
| GPT4 | 0.805/0.152 | 0.795/0.157 | 0.691/0.123 | 0.753/0.20 | 0.711/0.169 | 0.738/0.207 | 0.709/0.206 | 0.741/0.148 | 0.787/0.169 | 0.688/0.168 | 0.895/0.128 |
Limitations
There are some limitations associated with Llama Guard 3-1B. First, Llama Guard 3-1B itself is an LLM fine-tuned from Llama 3.2. Thus, its performance (e.g., on judgments that require common-sense knowledge, multilingual capability, and policy coverage) might be limited by its (pre-)training data.
Llama Guard performance varies across model size and languages. When possible, developers should consider Llama Guard 3-8B, which may provide better safety classification performance but comes at a higher deployment cost. Please refer to the evaluation section and test the safeguards before deployment to ensure they meet the safety requirements of your application.
Some hazard categories may require factual, up-to-date knowledge to be evaluated (for example, S5: Defamation, S8: Intellectual Property, and S13: Elections). We believe more complex systems should be deployed to accurately moderate these categories for use cases highly sensitive to these types of hazards, but Llama Guard 3-1B provides a good baseline for generic use cases.
Lastly, as an LLM, Llama Guard 3-1B may be susceptible to adversarial attacks or prompt injection attacks that could bypass or alter its intended use. Please report vulnerabilities and we will look to incorporate improvements in future versions of Llama Guard.
References
[1] Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations
[2] Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
[3] Llama Guard 3-8B Model Card
[4] XSTest: A Test Suite for Identifying Exaggerated Safety Behaviors in Large Language Models
[5] Compact Language Models via Pruning and Knowledge Distillation
[6] ShortGPT: Layers in Large Language Models are More Redundant Than You Expect
Citation
@misc{metallamaguard3,
author = {Llama Team, AI @ Meta},
title = {The Llama 3 Family of Models},
howpublished = {\url{https://github.com/meta-llama/PurpleLlama/blob/main/Llama-Guard3/1B/MODEL_CARD.md}},
year = {2024}
}