---
language:
- en
license: apache-2.0
extra_gated_prompt: >-
  Access to this model is automatically granted upon accepting the [AI2
  Responsible Use Guidelines](https://allenai.org/responsible-use.pdf) and
  completing all fields below.
extra_gated_fields:
Your full name: text
Organization or entity you are affiliated with: text
State or country you are located in: text
Contact email: text
Please describe your intended use of the low risk artifact(s): text
I understand that this model is a research artifact that may contain or produce unfiltered, toxic, or harmful material: checkbox
I agree to use this model for research purposes in accordance with the AI2 Responsible Use Guidelines: checkbox
I agree that AI2 may use my information as described in the Privacy Policy: checkbox
I certify that the information I have provided is true and accurate: checkbox
---
## Model Card for llama2-13b-WildJailbreak
WildJailbreak models are a series of language models that are instruction-tuned to act as helpful and safe assistants.
For more details, read the paper: [WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models](https://arxiv.org/abs/2406.18510).
## Model description
- **Model type:** The model is fine-tuned with the [WildJailbreak](https://huggingface.co/datasets/allenai/wildjailbreak) safety training dataset combined with an augmented version of [Tulu2Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), a general-capability instruction-tuning dataset.
- **Model size:** 13B
- **Language(s) (NLP):** English
- **License:** Apache 2.0.
- **Finetuned from model:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
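
As a sketch of how the model might be queried, assuming it follows the Tulu-style chat template (`<|user|>` / `<|assistant|>` markers) used by Tulu2Mix-tuned models — verify against the tokenizer's chat template before relying on this, and note the repository id below is an assumption:

```python
# Sketch: formatting a prompt in the assumed Tulu-style chat template
# (inferred from the Tulu2Mix fine-tuning mix; check the tokenizer's
# chat template for the authoritative format).

def format_tulu_prompt(user_message: str) -> str:
    """Wrap a single user turn in the assumed Tulu chat format."""
    return f"<|user|>\n{user_message}\n<|assistant|>\n"

# Hypothetical usage with Hugging Face transformers (repo id assumed):
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("allenai/llama2-13b-WildJailbreak")
# model = AutoModelForCausalLM.from_pretrained("allenai/llama2-13b-WildJailbreak")
# inputs = tokenizer(format_tulu_prompt("How do I stay safe online?"), return_tensors="pt")
# outputs = model.generate(**inputs, max_new_tokens=256)
# print(tokenizer.decode(outputs[0], skip_special_tokens=True))

prompt = format_tulu_prompt("How do I stay safe online?")
print(prompt)
```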
## Results
Please refer to our [paper](https://arxiv.org/abs/2406.18510) for full details of the model results.
## Intended uses & limitations
The model was fine-tuned on a mixture of the [WildJailbreak](https://huggingface.co/datasets/allenai/wildjailbreak) safety training data and an augmented version of the [Tulu2Mix](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) dataset, which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs.
Although the model received significant safety training via [WildJailbreak](https://huggingface.co/datasets/allenai/wildjailbreak), it is not robust to all types of jailbreaks (especially in multilingual setups and multi-turn conversations).
We hope that by open-sourcing safety-trained models and their safety training resources, we can facilitate new studies on the limitations and promise of LLM safety, tailored to models with enhanced safety training.
### Training details
## Citation
If you find this resource useful in your work, please cite it with:
```
@misc{wildteaming2024,
  title={WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models},
  author={Liwei Jiang and Kavel Rao and Seungju Han and Allyson Ettinger and Faeze Brahman and Sachin Kumar and Niloofar Mireshghallah and Ximing Lu and Maarten Sap and Yejin Choi and Nouha Dziri},
  year={2024},
  eprint={2406.18510},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2406.18510},
}
```