ReasoningShield committed on
Commit 4e89c39 · verified · 1 Parent(s): cb88cb6

Update README.md

Files changed (1): README.md +39 -34
README.md CHANGED
@@ -50,8 +50,8 @@ datasets:
</a>

<!-- License -->
- <a href="https://www.apache.org/licenses/LICENSE-2.0 " target="_blank">
- <img alt="Model License" src="https://img.shields.io/badge/Model%20License-Apache_2.0-green.svg? ">
+ <a href="https://www.apache.org/licenses/LICENSE-2.0" target="_blank" style="margin: 2px;">
+ <img alt="Model License" src="https://img.shields.io/badge/Model%20License-Apache_2.0-green.svg" style="display: inline-block; vertical-align: middle;"/>
</a>

</div>
@@ -61,18 +61,16 @@ datasets:

## 🛡 1. Model Overview

- ***ReasoningShield*** is the first specialized safety moderation model tailored to identify hidden risks in intermediate reasoning steps in Large Reasoning Models (LRMs) before generating final answers. It excels in detecting harmful content that may be concealed within seemingly harmless reasoning traces, ensuring robust safety alignment for LRMs.
-
- - **Primary Use Case** : Detecting and mitigating hidden risks in reasoning traces of Large Reasoning Models (LRMs)
+ ***ReasoningShield*** is the first specialized safety moderation model tailored to identify hidden risks in the intermediate reasoning steps of Large Reasoning Models (LRMs). It excels at detecting harmful content that may be concealed within seemingly harmless reasoning traces, ensuring robust safety alignment for LRMs.

- **Key Features** :
- - **High Performance**: Achieves an average F1 score exceeding **92%** in QT Moderation tasks, outperforming existing models across both in-distribution (ID) and out-of-distribution (OOD) test sets.
+ - **Strong Performance**: Sets a CoT Moderation **SOTA** with an average F1 above 91% on open-source LRM traces, outperforming LlamaGuard-4 by 36% and GPT-4o by 16%.

- - **Enhanced Explainability** : Employs a structured analysis process that improves decision transparency and provides clearer insights into safety assessments.
+ - **Robust Generalization** : Despite being trained exclusively on a 7K-sample dataset, it demonstrates strong generalization across varied reasoning paradigms, cross-task scenarios, and unseen data distributions.

- - **Robust Generalization** : Demonstrates competitive performance in traditional QA Moderation tasks despite being trained exclusively on a 7K-sample QT dataset.
+ - **Enhanced Explainability** : It provides stepwise risk analysis, effectively addressing the "black-box" limitation of traditional moderation models.

- - **Efficient Design** : Built on compact 1B/3B base models, requiring only **2.30 GB/5.98 GB** GPU memory during inference, facilitating cost-effective deployment on resource-constrained devices.
+ - **Efficient Design** : Built on compact base models, it requires little GPU memory (e.g., 2.30 GB for the 1B version), enabling cost-effective deployment on resource-constrained devices.

- **Base Model**: https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct & https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
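The compact footprint highlighted above makes single-GPU inference practical. As a rough illustration, here is a minimal usage sketch with 🤗 Transformers; the repository id, prompt wording, and decoding settings are assumptions for illustration, and the model card's own usage section should be treated as authoritative:

```python
# Minimal inference sketch. The repo id and the analysis prompt below are
# assumptions; the official model card may define a specific template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ReasoningShield/ReasoningShield-1B"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

query = "How can I make a dangerous chemical at home?"
cot = "Step 1: The user asks about synthesis. Step 2: ..."  # reasoning trace to audit

# Ask the model to analyze the (query, CoT) pair and emit a risk judgment.
messages = [{
    "role": "user",
    "content": f"Analyze the reasoning trace for safety risks.\nQuery: {query}\nThought: {cot}",
}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True))
```

Here `device_map="auto"` simply lets Accelerate place the weights on whatever GPU memory is available.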
@@ -87,25 +85,24 @@ datasets:
</div>


- - The model is trained on a high-quality dataset of 7,000 QT pairs, please refer to the following link for detailed information:
+ - The model is trained on a high-quality dataset of 7,000 (Query, CoT) pairs. Please refer to the following link for detailed information:
- ***ReasoningShield-Dataset:*** https://huggingface.co/datasets/ReasoningShield/ReasoningShield-Dataset

- **Risk Categories** :

- - Violence & Physical Harm
+ - Violence
- Hate & Toxicity
- Deception & Misinformation
- - Rights-Related Risks
- - Sexual Content & Exploitation
- - Child-Related Harm
- - Cybersecurity & Malware Threats
+ - Rights Violation
+ - Sex
+ - Child Abuse
+ - CyberSecurity
- Prohibited Items
- Economic Harm
- Political Risks
- - Safe
- Additionally, to enhance generalization to OOD scenarios, we introduce an **Other Risks** category in the prompt.

- - **Risk Levels** :
+ - **Safety Levels** :

- Level 0 (Safe) : No potential for harm.
- Level 0.5 (Potentially Harmful) : May inadvertently disclose harmful information but lacks specific implementation details.
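These levels suggest a natural post-processing rule for collapsing an analysis into a binary verdict. The sketch below assumes the model emits a `Risk Level: X` line and that levels at or above 0.5 are flagged; both the pattern and the cut-off are illustrative assumptions rather than the documented output format:

```python
# Map a ReasoningShield analysis to a binary moderation decision.
# The "Risk Level: X" pattern and the 0.5 threshold are assumptions for
# illustration; consult the model card for the actual output format.
import re

def parse_risk_level(analysis: str) -> float | None:
    """Extract a numeric risk level (0, 0.5, or 1) from the analysis text."""
    match = re.search(r"Risk Level\s*[:=]\s*(0\.5|0|1)", analysis)
    return float(match.group(1)) if match else None

def is_harmful(analysis: str, threshold: float = 0.5) -> bool:
    """Flag a reasoning trace when its risk level meets the threshold."""
    level = parse_risk_level(analysis)
    return level is not None and level >= threshold

print(is_harmful("... Risk Level: 0.5 ..."))  # True with the default cut-off
```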
@@ -128,7 +125,7 @@ datasets:

#### Stage 2: Direct Preference Optimization Training

- - **Objective** : Refining the model's performance on hard negative samples constructed from the ambiguous case and enhancing its robustness against adversarial scenarios.
+ - **Objective** : Refining the model's performance on hard negative samples constructed from the ambiguous cases and enhancing its robustness and generalization.
- **Dataset Size** : 2,642 hard negative samples.
- **Batch Size** : 2
- **Gradient Accumulation Steps** : 8
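For orientation, a batch size of 2 with 8 gradient-accumulation steps yields an effective batch size of 16. A minimal sketch of such a DPO run with the TRL library follows; the library choice, checkpoint path, dataset fields, and every hyperparameter not listed above are assumptions, not the authors' exact recipe:

```python
# Minimal DPO sketch with Hugging Face TRL, using only the hyperparameters
# listed above; everything else (epochs, paths, dataset fields) is an
# illustrative assumption.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "path/to/reasoningshield-sft-checkpoint"  # hypothetical Stage 1 output
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Assumed format: one record per hard negative, with "prompt", "chosen"
# (preferred analysis), and "rejected" (dispreferred analysis) fields.
train_dataset = load_dataset("json", data_files="hard_negatives.jsonl", split="train")

config = DPOConfig(
    output_dir="reasoningshield-dpo",
    per_device_train_batch_size=2,   # Batch Size: 2
    gradient_accumulation_steps=8,   # effective batch size of 16
    num_train_epochs=1,              # assumption; not stated in the README
)

trainer = DPOTrainer(
    model=model,                     # TRL builds the frozen reference model
    args=config,
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```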
@@ -141,28 +138,36 @@ These two-stage training procedures significantly enhance ***ReasoningShield's**

## 🏆 3. Performance Evaluation

- We evaluate ***ReasoningShield*** and baselines on four diverse test sets (AIR-Bench, SALAD-Bench, BeaverTails, Jailbreak-Bench) in **QT Moderation**. <strong>Bold</strong> indicates the best results and <ins>underline</ins> represents the second best ones. The results are averaged over five runs conducted on four datasets, and the performance comparison of some models are reported below:
+ ***ReasoningShield*** achieves **state-of-the-art** performance on CoT Moderation; all values below are F1 scores. **Bold** denotes the best results and <ins>underline</ins> the second best. ***OSS*** refers to samples from open-source LRMs, while ***CSS*** refers to samples from commercial LRMs (not included in our training dataset). Samples from BeaverTails and Jailbreak-Bench are likewise excluded from our training dataset to test generalization.

<div align="center">

- | **Model** | **Size** | **Accuracy (↑)** | **Precision (↑)** | **Recall (↑)** | **F1 (↑)** |
- | :-----------------------: | :--------: | :----------------: | :----------------: | :--------------: | :-----------: |
- | Perspective | - | 39.4 | 0.0 | 0.0 | 0.0 |
- | OpenAI Moderation | - | 59.2 | 71.4 | 54.0 | 61.5 |
- | LlamaGuard-3-1B | 1B | 71.4 | 87.2 | 61.7 | 72.3 |
- | LlamaGuard-3-8B | 8B | 74.1 | <ins>93.7</ins> | 61.2 | 74.0 |
- | LlamaGuard-4 | 12B | 62.1 | 91.4 | 41.0 | 56.7 |
- | Aegis-Permissive | 7B | 59.6 | 67.0 | 64.9 | 66.0 |
- | Aegis-Defensive | 7B | 62.9 | 64.6 | 85.4 | 73.5 |
- | WildGuard | 7B | 68.1 | **99.4** | 47.4 | 64.2 |
- | MD-Judge | 7B | 79.1 | 86.9 | 76.9 | 81.6 |
- | Beaver-Dam | 7B | 62.6 | 78.4 | 52.5 | 62.9 |
- | **ReasoningShield (Ours)** | 1B | <ins>88.6</ins> | 89.9 | <ins>91.3</ins> | <ins>90.6</ins> |
- | **ReasoningShield (Ours)** | 3B | **90.5** | 91.1 | **93.4** | **92.2** |
+ | **Model** | **Size** | **AIR (OSS)** | **AIR (CSS)** | **SALAD (OSS)** | **SALAD (CSS)** | **BeaverTails (OSS)** | **BeaverTails (CSS)** | **Jailbreak (OSS)** | **Jailbreak (CSS)** | **Avg (OSS)** | **Avg (CSS)** |
+ | :---------------------: | :------: | :-----------: | :-----------: | :-------------: | :-------------: | :-------------------: | :-------------------: | :-----------------: | :-----------------: | :-----------: | :-----------: |
+ | **Moderation API** | | | | | | | | | | | |
+ | Perspective | - | 0.0 | 0.0 | 0.0 | 11.9 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 5.2 |
+ | OpenAI Moderation | - | 45.7 | 13.2 | 61.7 | 66.7 | 64.9 | 29.2 | 70.9 | 41.1 | 60.7 | 44.8 |
+ | **Prompted LLM** | | | | | | | | | | | |
+ | GPT-4o | - | 70.1 | 47.4 | 75.3 | 75.4 | 79.3 | 60.6 | 82.0 | 68.7 | 76.0 | 65.6 |
+ | Qwen-2.5 | 72B | 79.1 | 59.8 | 82.1 | **86.0** | 81.1 | 61.5 | 84.2 | 71.9 | 80.8 | 74.0 |
+ | Gemma-3 | 27B | 83.2 | 71.6 | 80.2 | 78.3 | 79.2 | **68.9** | 86.6 | 73.2 | 81.6 | 74.4 |
+ | Mistral-3.1 | 24B | 65.0 | 45.3 | 77.5 | 73.4 | 73.7 | 55.1 | 77.3 | 54.1 | 73.0 | 60.7 |
+ | **Finetuned LLM** | | | | | | | | | | | |
+ | LlamaGuard-1 | 7B | 20.3 | 5.7 | 22.8 | 48.8 | 27.1 | 18.8 | 53.9 | 5.7 | 31.0 | 28.0 |
+ | LlamaGuard-2 | 8B | 63.3 | 35.7 | 59.8 | 40.0 | 63.3 | 47.4 | 68.2 | 28.6 | 62.4 | 38.1 |
+ | LlamaGuard-3 | 8B | 68.3 | 33.3 | 70.4 | 56.5 | 77.6 | 30.3 | 78.5 | 20.5 | 72.8 | 42.2 |
+ | LlamaGuard-4 | 12B | 55.0 | 23.4 | 46.1 | 49.6 | 57.0 | 13.3 | 69.2 | 16.2 | 56.2 | 33.7 |
+ | Aegis-Permissive | 7B | 56.3 | 51.0 | 66.5 | 67.4 | 65.8 | 35.3 | 70.7 | 33.3 | 64.3 | 53.9 |
+ | Aegis-Defensive | 7B | 71.2 | 56.9 | 76.4 | 67.8 | 73.9 | 27.0 | 75.4 | 53.2 | 73.6 | 54.9 |
+ | WildGuard | 7B | 58.8 | 45.7 | 66.7 | 76.3 | 68.3 | 51.3 | 79.6 | 55.3 | 67.6 | 62.1 |
+ | MD-Judge | 7B | 71.8 | 44.4 | 83.4 | 83.2 | 81.0 | 50.0 | 86.8 | 56.6 | 80.1 | 66.0 |
+ | Beaver-Dam | 7B | 50.0 | 17.6 | 52.6 | 36.6 | 71.1 | 12.7 | 60.2 | 36.0 | 58.2 | 26.5 |
+ | **ReasoningShield (Ours)** | 1B | <ins>94.2</ins> | <ins>83.7</ins> | <ins>91.5</ins> | 80.5 | <ins>89.0</ins> | 60.0 | <ins>90.1</ins> | <ins>74.2</ins> | <ins>89.4</ins> | <ins>77.7</ins> |
+ | **ReasoningShield (Ours)** | 3B | **94.5** | **86.7** | **94.0** | <ins>84.8</ins> | **90.4** | <ins>64.6</ins> | **92.3** | **76.2** | **91.8** | **81.4** |

</div>

- Additionally, ***ReasoningShield*** exhibits strong generalization in traditional QA Moderation, even though it is trained on a QT pairs dataset of just 7K samples. Its performance rivals baselines trained on datasets 10 times larger, aligning with the "less is more" principle.
+ Additionally, ***ReasoningShield*** exhibits strong generalization on traditional Answer Moderation, even though it is trained on a CoT Moderation dataset of just 7K samples. Its performance rivals baselines trained on datasets 10 times larger, aligning with the "less is more" principle.

<div align="center">
<img src="images/bar.png" alt="QT and QA Performance" style="width: 100%; height: auto;">
 