Papers
arxiv:2407.16637

Course-Correction: Safety Alignment Using Synthetic Preferences

Published on Jul 23
· Submitted by pillowsofwind on Jul 26
Authors:
,
,
,
,
,

Abstract

The risk of harmful content generated by large language models (LLMs) becomes a critical concern. This paper presents a systematic study on assessing and improving LLMs' capability to perform the task of course-correction, \ie, the model can steer away from generating harmful content autonomously. To start with, we introduce the C^2-Eval benchmark for quantitative assessment and analyze 10 popular LLMs, revealing varying proficiency of current safety-tuned LLMs in course-correction. To improve, we propose fine-tuning LLMs with preference learning, emphasizing the preference for timely course-correction. Using an automated pipeline, we create C^2-Syn, a synthetic dataset with 750K pairwise preferences, to teach models the concept of timely course-correction through data-driven preference learning. Experiments on 2 LLMs, Llama2-Chat 7B and Qwen2 7B, show that our method effectively enhances course-correction skills without affecting general performance. Additionally, it effectively improves LLMs' safety, particularly in resisting jailbreak attacks.

Community

Paper author Paper submitter

Our latest paper delves into LLMs' ability to perform safety self-correction, namely COURSE-CORRECTION.

In this paper, we:

  • Benchmark course-correction ability
  • Improving using synthetic preferences.

Paper: https://arxiv.org/pdf/2407.16637
Code: https://github.com/pillowsofwind/Course-Correction

(Figure 2) 🔰To start with, we quantitatively assess current open-source LLMs' ability to perform safety course-correction by counting the CORRECTIVE decoding paths.

(Figure 3) 📐After evaluating 10 LLMs, we found some characteristics:

  • The course-correction capabilities of different SAFETY-tuned models vary widely. 😰
  • For some LLMs, the more initially harmful content, the EASIER it is to perform course-correct. 😹

(Figure 4) 🏹To improve, the strategy is really simple, we craft 750K synthetic preferences by following two value principles:

  • Correction is better than not
  • Early correction is better than a later one

(Figure 5,6,7)We apply our synthetic data to DPO training and find that:
👉improve the course-correction ability and overall safety
👉improve the robustness of 4 jailbreak attacks
👉no harm to overall performance
👉lifted safety token probs on later decoding positions

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any Paper on Hugging Face checkout this Space

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

Sign up or log in to comment

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2407.16637 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2407.16637 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2407.16637 in a Space README.md to link it from this page.

Collections including this paper 0

No Collection including this paper

Add this paper to a collection to link it from this page.