<!DOCTYPE html>
<html lang="en-US">
<head>
<meta charset="UTF-8">
<!-- Begin Jekyll SEO tag v2.8.0 -->
<title>Gradient Cuff | Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by
Exploring Refusal Loss Landscapes </title>
<meta property="og:title" content="Gradient Cuff" />
<meta property="og:locale" content="en_US" />
<meta name="description" content="Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes" />
<meta property="og:description" content="Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes" />
<script type="application/ld+json">
{"@context":"https://schema.org","@type":"WebSite","description":"Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes","headline":"Gradient Cuff","name":"Gradient Cuff","url":"https://huggingface.co/spaces/gregH/Gradient Cuff"}</script>
<!-- End Jekyll SEO tag -->
<link rel="preconnect" href="https://fonts.gstatic.com">
<link rel="preload" href="https://fonts.googleapis.com/css?family=Open+Sans:400,700&display=swap" as="style" type="text/css" crossorigin>
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="theme-color" content="#157878">
<meta name="apple-mobile-web-app-status-bar-style" content="black-translucent">
<link rel="stylesheet" href="assets/css/bootstrap/bootstrap.min.css?v=90447f115a006bc45b738d9592069468b20e2551">
<link rel="stylesheet" href="assets/css/style.css?v=90447f115a006bc45b738d9592069468b20e2551">
<!-- start custom head snippets, customize with your own _includes/head-custom.html file -->
<link rel="stylesheet" href="assets/css/custom_style.css?v=90447f115a006bc45b738d9592069468b20e2551">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<link rel="stylesheet" href="https://ajax.googleapis.com/ajax/libs/jqueryui/1.12.1/themes/smoothness/jquery-ui.css">
<script src="https://ajax.googleapis.com/ajax/libs/jqueryui/1.12.1/jquery-ui.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.9.4/Chart.js"></script>
<script src="assets/js/calibration.js?v=90447f115a006bc45b738d9592069468b20e2551"></script>
<!-- for mathjax support -->
<script src="https://polyfill.io/v3/polyfill.min.js?features=es6"></script>
<script id="MathJax-script" async src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js"></script>
<!-- end custom head snippets -->
</head>
<body>
<a id="skip-to-content" href="#content">Skip to the content.</a>
<header class="page-header" role="banner">
<h1 class="project-name">Gradient Cuff</h1>
<h2 class="project-tagline">Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes</h2>
</header>
<main id="content" class="main-content" role="main">
<h2 id="introduction">Introduction</h2>
<p>Large Language Models (LLMs) have become a prominent generative AI tool: the user enters a
query and the LLM generates an answer. To reduce harm and misuse, efforts have been made to align
these LLMs with human values using advanced training techniques such as Reinforcement Learning from
Human Feedback (RLHF). However, recent studies have highlighted the vulnerability of LLMs to adversarial
jailbreak attempts aimed at subverting the embedded safety guardrails. To address this challenge,
we define and investigate the <strong>Refusal Loss</strong> of LLMs and then propose a method called <strong>Gradient Cuff</strong> to
detect jailbreak attempts. In this demonstration, we first introduce the concept of "jailbreak". We then present the refusal loss
landscape and describe Gradient Cuff, which is built on the characteristics of this landscape. Lastly, we compare Gradient Cuff with other jailbreak defense
methods and report their defense performance.
</p>
<h2 id="what-is-jailbreak">What is Jailbreak?</h2>
<p>Jailbreak attacks maliciously insert or replace tokens in the user instruction, or rewrite it entirely, to circumvent
the safety guardrails of aligned LLMs. A notable example is a jailbroken LLM being tricked into
generating hate speech that targets certain groups of people, as demonstrated below.</p>
<div class="container">
<div id="jailbreak-intro" class="row align-items-center jailbreak-intro-sec">
<img id="jailbreak-intro-img" src="./jailbreak.png" />
</div>
</div>
<h3 id="refusal-loss">Refusal Loss</h3>
<p>Current transformer-based LLMs return different responses to the same query because of the randomness of
autoregressive, sampling-based generation. Due to this randomness, an interesting phenomenon arises:
a malicious user query is sometimes rejected by the target LLM, yet
sometimes bypasses the safety guardrail. Based on this observation, we propose a new concept called Refusal Loss and visualize its 2-D
landscape below:
</p>
<div class="container jailbreak-intro-sec">
<div><img id="jailbreak-intro-img" src="./loss_landscape.png" /></div>
</div>
<p>
From the above plot, we find that the loss landscape is more precipitous for malicious queries than for benign queries, which implies that
the <strong>Refusal Loss</strong> tends to have a large gradient norm if the input is a malicious query. This observation motivates our proposal to use
the gradient norm of the <strong>Refusal Loss</strong> to detect jailbreak attempts that pass the initial filter, which rejects the input query when the function value is below 0.5.
Below we present the definition of the <strong>Refusal Loss</strong> and how we approximate its function value and gradient.
See our paper for more details on the concept, the approximation, the gradient estimation, and the landscape visualization.
</p>
<div id="refusal-loss-formula" class="container">
<div id="refusal-loss-formula-list" class="row align-items-center formula-list">
<a href="#ECE-formula" class="selected">Refusal Loss</a>
<a href="#SCE-formula">Refusal Loss Approximation</a>
<a href="#ACE-formula">Gradient Estimation</a>
<div style="clear: both"></div>
</div>
<div id="refusal-loss-formula-content" class="row align-items-center">
<span id="ECE-formula" class="formula" style="">
$$
\displaystyle
\begin{aligned}
\phi_\theta(x)&=1-\mathbb{E}_{y \sim T_\theta(x)} JB(y)\\
JB (y) &= \begin{cases}
1 \text{, if $y$ contains any jailbreak keyword;} \\
0 \text{, otherwise.}
\end{cases}
\end{aligned}
$$
</span>
<span id="SCE-formula" class="formula" style="display: none;">
$$
\displaystyle
\begin{aligned}
f_\theta(x) &=1-\frac{1}{N}\sum_{i=1}^N JB(y_i)\\
JB (y_i) &= \begin{cases}
1 \text{, if $y_i$ contains any jailbreak keyword;} \\
0 \text{, otherwise.}
\end{cases}
\end{aligned}
$$
</span>
<span id="ACE-formula" class="formula" style="display: none;">$$\displaystyle g_\theta(x)=\sum_{i=1}^P \frac{f_\theta(x\oplus \mu u_i)-f_\theta(x)}{\mu} u_i $$</span>
</div>
</div>
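<p>
To make the above approximation and gradient estimation concrete, below is a minimal Python sketch. It is not the exact implementation from our paper: the helpers <code>generate</code> and <code>contains_jailbreak_keyword</code> are hypothetical placeholders for sampling one response from the target LLM and for the keyword check $JB(\cdot)$, we assume the query $x$ is represented as a continuous vector so that the perturbation $x\oplus \mu u_i$ can be written as $x + \mu u_i$, and the sample count, number of directions, and smoothing parameter $\mu$ are illustrative defaults.
</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np

def refusal_loss(x, generate, contains_jailbreak_keyword, n_samples=8):
    """Monte-Carlo approximation f_theta(x) = 1 - (1/N) * sum_i JB(y_i)."""
    hits = sum(contains_jailbreak_keyword(generate(x)) for _ in range(n_samples))
    return 1.0 - hits / n_samples

def estimate_gradient(x, refusal_loss_fn, mu=0.02, n_directions=10):
    """Zeroth-order estimate g_theta(x) = sum_i (f(x + mu*u_i) - f(x)) / mu * u_i."""
    f_x = refusal_loss_fn(x)
    grad = np.zeros_like(x, dtype=float)
    for _ in range(n_directions):
        u = np.random.randn(*x.shape)   # random perturbation direction u_i
        u /= np.linalg.norm(u)          # normalize to unit length
        grad += (refusal_loss_fn(x + mu * u) - f_x) / mu * u
    return grad
</code></pre></div></div>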
<h2 id="proposed-approach-gradient-cuff">Proposed Approach: Gradient Cuff</h2>
<p> With the exploration of the Refusal Loss landscape, we propose Gradient Cuff,
a two-step jailbreak detection method based on checking the refusal loss and its gradient norm. Our detection procedure is shown below:
</p>
<div class="container"><img id="gradient-cuff-header" src="./gradient_cuff.png" /></div>
<p>
Gradient Cuff can be summarized into two phases:
</p>
<p>
<strong>(Phase 1) Sampling-based Rejection:</strong> In the first step, we check whether $f_\theta(x)<0.5$. If so, $x$ is rejected; otherwise, $x$ is passed to Phase 2.
</p>
<p>
<strong>(Phase 2) Gradient Norm Rejection:</strong> In the second step, we regard $x$ as a jailbreak attempt if the norm of the estimated gradient $g_\theta(x)$ is larger than a configurable threshold $t$, i.e., $\|g_\theta(x)\| > t$.
</p>
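<p>
The two phases can be combined into a single decision rule. Below is a minimal sketch (again, not our exact implementation) that reuses the hypothetical <code>refusal_loss</code> and <code>estimate_gradient</code> helpers from the sketch above; the default threshold value here is purely illustrative.
</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import numpy as np

def gradient_cuff_reject(x, refusal_loss_fn, estimate_gradient_fn, t=100.0):
    """Return True if the query x should be rejected as a jailbreak attempt."""
    # Phase 1: sampling-based rejection on the approximated refusal loss.
    if refusal_loss_fn(x) < 0.5:
        return True
    # Phase 2: gradient norm rejection with a configurable threshold t.
    return np.linalg.norm(estimate_gradient_fn(x)) > t
</code></pre></div></div>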
<h2 id="demonstration">Demonstration</h2>
<p>We evaluated Gradient Cuff as well as 4 baselines (Perplexity Filter, SmoothLLM, Erase-and-Check, and Self-Reminder) against 6
different jailbreak attacks (GCG, AutoDAN, PAIR, TAP, Base64, and LRL) and benign user queries on 2 LLMs (LLaMA-2-7B-Chat and Vicuna-7B-V1.5).
We report the average refusal rate across these 6 malicious user query datasets as the Average Malicious Refusal Rate,
and the refusal rate on benign user queries as the Benign Refusal Rate.
</p>
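<p>
For reference, the two metrics shown in the demo can be computed as in the small sketch below, assuming each per-query decision is True when the defense refuses the query. The malicious number is taken here to be the macro-average of the per-dataset refusal rates over the six attack datasets, and the benign number is the refusal rate on the benign query set alone.
</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def refusal_rate(decisions):
    """Fraction of queries that were refused (each decision is a boolean)."""
    return sum(decisions) / len(decisions)

def average_malicious_refusal_rate(decisions_per_attack):
    """Average the per-dataset refusal rates over the attack datasets
    (GCG, AutoDAN, PAIR, TAP, Base64, and LRL)."""
    rates = [refusal_rate(d) for d in decisions_per_attack.values()]
    return sum(rates) / len(rates)
</code></pre></div></div>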
<div id="jailbreak-demo" class="container">
<div class="row align-items-center">
<div class="row" style="margin: 10px 0 0">
<div class="models-list">
<span style="margin-right: 1em;">Models</span>
<span class="radio-group"><input type="radio" id="LLaMA2" class="options" name="models" value="llama2_7b_chat" checked="" /><label for="LLaMA2" class="option-label">LLaMA-2-7B-Chat</label></span>
<span class="radio-group"><input type="radio" id="Vicuna" class="options" name="models" value="vicuna_7b_v1.5" /><label for="Vicuna" class="option-label">Vicuna-7B-V1.5</label></span>
</div>
</div>
</div>
<div class="row align-items-center">
<div class="col-4">
<div id="defense-methods">
<div class="row align-items-center"><input type="radio" id="defense_ppl" class="options" name="defense" value="ppl" /><label for="defense_ppl" class="defense">Perplexity Filter</label></div>
<div class="row align-items-center"><input type="radio" id="defense_smoothllm" class="options" name="defense" value="smoothllm" /><label for="defense_smoothllm" class="defense">SmoothLLM</label></div>
<div class="row align-items-center"><input type="radio" id="defense_erase_check" class="options" name="defense" value="erase_check" /><label for="defense_erase_check" class="defense">Erase-Check</label></div>
<div class="row align-items-center"><input type="radio" id="defense_self_reminder" class="options" name="defense" value="self_reminder" /><label for="defense_self_reminder" class="defense">Self-Reminder</label></div>
<div class="row align-items-center"><input type="radio" id="defense_gradient_cuff" class="options" name="defense" value="gradient_cuff" /><label for="defense_gradient_cuff" class="defense"><span style="font-weight: bold;">Gradient Cuff</span></label></div>
</div>
<div class="row align-items-center">
<div class="attack-success-rate"><span class="jailbreak-metric">Average Malicious Refusal Rate</span><span class="attack-success-rate-value" id="asr-value">0.95875</span></div>
</div>
<div class="row align-items-center">
<div class="benign-refusal-rate"><span class="jailbreak-metric">Benign Refusal Rate</span><span class="benign-refusal-rate-value" id="brr-value">0.05000</span></div>
</div>
</div>
<div class="col-8">
<figure class="figure">
<img id="reliability-diagram" src="demo_results/gradient_cuff_llama2_7b_chat_threshold_100.png" alt="CIFAR-100 Calibrated Reliability Diagram (Full)" />
<div class="slider-container">
<div class="slider-label"><span>Perplexity Threshold</span></div>
<div class="slider-content" id="ppl-slider"><div id="ppl-threshold" class="ui-slider-handle"></div></div>
</div>
<div class="slider-container">
<div class="slider-label"><span>Gradient Threshold</span></div>
<div class="slider-content" id="gradient-norm-slider"><div id="gradient-norm-threshold" class="slider-value ui-slider-handle"></div></div>
</div>
<figcaption class="figure-caption">
</figcaption>
</figure>
</div>
</div>
</div>
<h2 id="citations">Citations</h2>
<p>If you find Gradient Cuff helpful and useful for your research, please cite our main paper as follows:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@misc{xxx,
title={{Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by
Exploring Refusal Loss Landscapes}},
author={Xiaomeng Hu and Pin-Yu Chen and Tsung-Yi Ho},
year={2024},
eprint={},
archivePrefix={arXiv},
primaryClass={}
}
</code></pre></div></div>
<footer class="site-footer">
<span class="site-footer-owner">Gradient Cuff is maintained by <a href="https://gregxmhu.github.io/">Xiaomeng Hu</a></a>.</span>
</footer>
</main>
</body>
</html>