</a>

<!-- License -->
<a href="https://www.apache.org/licenses/LICENSE-2.0" target="_blank" style="margin: 2px;">
<img alt="Model License" src="https://img.shields.io/badge/Model%20License-Apache_2.0-green.svg" style="display: inline-block; vertical-align: middle;"/>
</a>

</div>

## 💡 1. Model Overview

***ReasoningShield*** is the first specialized safety moderation model tailored to identify hidden risks in the intermediate reasoning steps of Large Reasoning Models (LRMs). It excels at detecting harmful content that may be concealed within seemingly harmless reasoning traces, ensuring robust safety alignment for LRMs.

- **Key Features** :
  - **Strong Performance** : It sets a CoT Moderation **SOTA** with over 91% average F1 on open-source LRM traces, outperforming LlamaGuard-4 by 36% and GPT-4o by 16%.
  - **Robust Generalization** : Despite being trained exclusively on a 7K-sample dataset, it demonstrates strong generalization across varied reasoning paradigms, cross-task scenarios, and unseen data distributions.
  - **Enhanced Explainability** : It provides stepwise risk analysis, effectively addressing the "black-box" limitation of traditional moderation models.
  - **Efficient Design** : Built on compact base models, it requires low GPU memory (e.g., 2.3 GB for the 1B version), enabling cost-effective deployment on resource-constrained devices.

- **Base Model** : https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct & https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct
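
For reference, the snippet below is a minimal inference sketch using the standard `transformers` API. The repo id and the Query/Thought prompt layout are illustrative assumptions, not the authors' published template:

```python
# Minimal usage sketch. The repo id below is hypothetical; substitute the
# actual published checkpoint. The Query/Thought prompt layout is likewise
# an illustrative assumption, not the exact ReasoningShield template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ReasoningShield/ReasoningShield-1B"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

query = "How can I make my home network more secure?"
cot = "Step 1: Consider common attack vectors such as weak router passwords..."
messages = [{"role": "user", "content": f"Query: {query}\nThought: {cot}"}]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
# The completion contains the stepwise risk analysis and the safety judgment.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```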

</div>

- The model is trained on a high-quality dataset of 7,000 (Query, CoT) pairs. Please refer to the following link for detailed information:
  - ***ReasoningShield-Dataset:*** https://huggingface.co/datasets/ReasoningShield/ReasoningShield-Dataset

- **Risk Categories** :
  - Violence
  - Hate & Toxicity
  - Deception & Misinformation
  - Rights Violation
  - Sex
  - Child Abuse
  - CyberSecurity
  - Prohibited Items
  - Economic Harm
  - Political Risks
  - Additionally, to enhance generalization to OOD scenarios, we introduce an **Other Risks** category in the prompt.

- **Safety Levels** :
  - Level 0 (Safe) : No potential for harm.
  - Level 0.5 (Potentially Harmful) : May inadvertently disclose harmful information but lacks specific implementation details.

#### Stage 2: Direct Preference Optimization Training

- **Objective** : Refining the model's performance on hard negative samples constructed from the ambiguous cases, and further strengthening its generalization.
- **Dataset Size** : 2,642 hard negative samples.
- **Batch Size** : 2
- **Gradient Accumulation Steps** : 8
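
As an illustration, here is a minimal sketch of a Stage-2 run consistent with the hyperparameters above, using the TRL library. TRL itself and the dataset path are assumptions; this card does not specify the authors' actual training stack:

```python
# Hypothetical Stage-2 DPO setup mirroring the listed hyperparameters.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "meta-llama/Llama-3.2-1B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# A preference dataset with "prompt", "chosen", "rejected" columns,
# e.g. the 2,642 hard negative pairs described above (path is hypothetical).
train_dataset = load_dataset("json", data_files="hard_negatives.jsonl", split="train")

args = DPOConfig(
    output_dir="reasoningshield-stage2-dpo",
    per_device_train_batch_size=2,   # Batch Size: 2
    gradient_accumulation_steps=8,   # Gradient Accumulation Steps: 8
)
trainer = DPOTrainer(
    model=model, args=args, train_dataset=train_dataset, processing_class=tokenizer
)
trainer.train()
```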

## 📊 3. Performance Evaluation

***ReasoningShield*** achieves **state-of-the-art** performance on CoT Moderation. **Bold** denotes the best results and <ins>underline</ins> the second best. ***OSS*** refers to samples from open-source LRMs, while ***CSS*** refers to those from commercial LRMs (not included in our training dataset). Moreover, samples from BeaverTails and Jailbreak are also excluded from our training dataset to test generalization capability.

<div align="center">

| **Model** | **Size** | **AIR (OSS)** | **AIR (CSS)** | **SALAD (OSS)** | **SALAD (CSS)** | **BeaverTails (OSS)** | **BeaverTails (CSS)** | **Jailbreak (OSS)** | **Jailbreak (CSS)** | **Avg (OSS)** | **Avg (CSS)** |
| :---------------------: | :------: | :-----------: | :-----------: | :-------------: | :-------------: | :-------------------: | :-------------------: | :-----------------: | :-----------------: | :-----------: | :-----------: |
| **Moderation API** | | | | | | | | | | | |
| Perspective | - | 0.0 | 0.0 | 0.0 | 11.9 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 5.2 |
| OpenAI Moderation | - | 45.7 | 13.2 | 61.7 | 66.7 | 64.9 | 29.2 | 70.9 | 41.1 | 60.7 | 44.8 |
| **Prompted LLM** | | | | | | | | | | | |
| GPT-4o | - | 70.1 | 47.4 | 75.3 | 75.4 | 79.3 | 60.6 | 82.0 | 68.7 | 76.0 | 65.6 |
| Qwen-2.5 | 72B | 79.1 | 59.8 | 82.1 | **86.0** | 81.1 | 61.5 | 84.2 | 71.9 | 80.8 | 74.0 |
| Gemma-3 | 27B | 83.2 | 71.6 | 80.2 | 78.3 | 79.2 | **68.9** | 86.6 | 73.2 | 81.6 | 74.4 |
| Mistral-3.1 | 24B | 65.0 | 45.3 | 77.5 | 73.4 | 73.7 | 55.1 | 77.3 | 54.1 | 73.0 | 60.7 |
| **Finetuned LLM** | | | | | | | | | | | |
| LlamaGuard-1 | 7B | 20.3 | 5.7 | 22.8 | 48.8 | 27.1 | 18.8 | 53.9 | 5.7 | 31.0 | 28.0 |
| LlamaGuard-2 | 8B | 63.3 | 35.7 | 59.8 | 40.0 | 63.3 | 47.4 | 68.2 | 28.6 | 62.4 | 38.1 |
| LlamaGuard-3 | 8B | 68.3 | 33.3 | 70.4 | 56.5 | 77.6 | 30.3 | 78.5 | 20.5 | 72.8 | 42.2 |
| LlamaGuard-4 | 12B | 55.0 | 23.4 | 46.1 | 49.6 | 57.0 | 13.3 | 69.2 | 16.2 | 56.2 | 33.7 |
| Aegis-Permissive | 7B | 56.3 | 51.0 | 66.5 | 67.4 | 65.8 | 35.3 | 70.7 | 33.3 | 64.3 | 53.9 |
| Aegis-Defensive | 7B | 71.2 | 56.9 | 76.4 | 67.8 | 73.9 | 27.0 | 75.4 | 53.2 | 73.6 | 54.9 |
| WildGuard | 7B | 58.8 | 45.7 | 66.7 | 76.3 | 68.3 | 51.3 | 79.6 | 55.3 | 67.6 | 62.1 |
| MD-Judge | 7B | 71.8 | 44.4 | 83.4 | 83.2 | 81.0 | 50.0 | 86.8 | 56.6 | 80.1 | 66.0 |
| Beaver-Dam | 7B | 50.0 | 17.6 | 52.6 | 36.6 | 71.1 | 12.7 | 60.2 | 36.0 | 58.2 | 26.5 |
| **ReasoningShield (Ours)** | 1B | <ins>94.2</ins> | <ins>83.7</ins> | <ins>91.5</ins> | 80.5 | <ins>89.0</ins> | 60.0 | <ins>90.1</ins> | <ins>74.2</ins> | <ins>89.4</ins> | <ins>77.7</ins> |
| **ReasoningShield (Ours)** | 3B | **94.5** | **86.7** | **94.0** | <ins>84.8</ins> | **90.4** | <ins>64.6</ins> | **92.3** | **76.2** | **91.8** | **81.4** |

</div>

Additionally, ***ReasoningShield*** exhibits strong generalization on traditional Answer Moderation, even though it is trained on a CoT Moderation dataset of just 7K samples. Its performance rivals baselines trained on datasets 10 times larger, aligning with the "less is more" principle.

<div align="center">
<img src="images/bar.png" alt="QT and QA Performance" style="width: 100%; height: auto;">