This model is gated: the repository is publicly accessible, but access is granted only after accepting the AI2 Responsible Use Guidelines.

Model Card for llama2-7b-WildJailbreak

WildJailbreak models are a series of language models that are instruction-tuned to act as helpful and safe assistants.

For more details, read the paper: WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models.

Model description

  • Model type: The model is fine-tuned with the WildJailbreak safety training dataset combined with an augmented version of Tulu2Mix, a general-capability instruction-tuning dataset.
  • Model size: 7B
  • Language(s) (NLP): English
  • License: Apache 2.0.
  • Finetuned from model: meta-llama/Llama-2-7b-hf
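
A minimal inference sketch with the transformers library is shown below. The prompt format is an assumption: Tulu 2-based models typically use the `<|user|>` / `<|assistant|>` chat template, so we use it here, but verify the exact format against the tokenizer's chat template before relying on it.

```python
# Minimal inference sketch. The <|user|>/<|assistant|> prompt format is assumed
# from Tulu 2 conventions; check tokenizer.chat_template for the authoritative format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allenai/llama2-7b-WildJailbreak"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "<|user|>\nHow do I keep my personal data safe online?\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```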

Results

Please refer to our paper for full details of the model's results.


Intended uses & limitations

The model was fine-tuned on a mixture of the WildJailbreak safety training data and an augmented version of the Tulu2Mix dataset, which contains a diverse range of human-created instructions and synthetic dialogues generated primarily by other LLMs. Although the model received significant safety enhancement through WildJailbreak, it is not robust to all types of jailbreaks (especially in multilingual setups and multi-turn conversations). We hope that by open-sourcing safety-trained models and their safety training resources, we can facilitate new studies of the limitations and promises of LLM safety, tailored to models with enhanced safety training.

Training details

Please refer to our paper for full details of the training setup.

Citation

If you find this resource useful in your work, please cite it with:

@misc{wildteaming2024,
      title={WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models}, 
      author={Liwei Jiang and Kavel Rao and Seungju Han and Allyson Ettinger and Faeze Brahman and Sachin Kumar and Niloofar Mireshghallah and Ximing Lu and Maarten Sap and Yejin Choi and Nouha Dziri},
      year={2024},
      eprint={2406.18510},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2406.18510}, 
}