---
library_name: transformers
license: llama3
---

# Model Card for Llama-3-8B-Instruct-abliterated-v2

## Overview
This model card describes Llama-3-8B-Instruct-abliterated-v2, an orthogonalized version of the meta-llama/Llama-3-8B-Instruct model and an improvement upon the previous-generation Llama-3-8B-Instruct-abliterated. Certain weights in this variant have been manipulated to inhibit the model's ability to express refusal.

[Join the Cognitive Computations Discord!](https://discord.gg/cognitivecomputations)

## Details

* The model was trained with more data to better pinpoint the "refusal direction".
* This model is MUCH better at directly and succinctly answering requests, without producing so much as a disclaimer.
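Per the linked write-up, the "refusal direction" mentioned above is estimated from contrasting prompt sets as a difference of mean activations. The toy NumPy sketch below illustrates only that estimation idea on synthetic data; the function name, shapes, and data are illustrative, not the code actually used to build this model.

```python
import numpy as np

def estimate_direction(refusing_acts: np.ndarray, complying_acts: np.ndarray) -> np.ndarray:
    """Difference-of-means direction between two activation sets, unit-normalized."""
    diff = refusing_acts.mean(axis=0) - complying_acts.mean(axis=0)
    return diff / np.linalg.norm(diff)

# Synthetic activations: the "refusing" set is shifted along a known axis,
# standing in for the real residual-stream activations harvested from prompts.
rng = np.random.default_rng(1)
true_dir = np.array([1.0, 0.0, 0.0, 0.0])
complying = rng.normal(size=(200, 4))
refusing = rng.normal(size=(200, 4)) + 3.0 * true_dir

r_hat = estimate_direction(refusing, complying)
print(np.round(np.abs(r_hat), 2))  # concentrated on the first axis
```

More paired samples average out noise in the two means, which is consistent with the bullet above: more data gives a sharper estimate of the direction.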

## Methodology

The methodology used to generate this model is described in the preview paper/blog post '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)'.
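At a high level, the orthogonalization described in that post removes the component each residual-stream-writing weight matrix can express along the refusal direction. The NumPy sketch below shows only that core projection step, under assumed shapes; it is not the actual code used to produce this model.

```python
import numpy as np

def ablate_direction(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Project the direction r out of the output side of W.

    W is assumed to write into the residual stream (shape: d_model x d_in);
    r is the estimated "refusal direction" (shape: d_model).
    Computes W <- (I - r_hat r_hat^T) W.
    """
    r_hat = r / np.linalg.norm(r)            # unit refusal direction
    return W - np.outer(r_hat, r_hat) @ W    # subtract the component along r_hat

# Toy demonstration with random weights and a random direction.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))
r = rng.normal(size=16)
W_ablated = ablate_direction(W, r)

# The ablated matrix can no longer write anything along r.
print(np.allclose((r / np.linalg.norm(r)) @ W_ablated, 0.0))  # True
```

Because the edit is a rank-one projection baked into the weights, the modified model needs no inference-time hooks: it simply cannot emit activations along the ablated direction.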

## Quirks and Side Effects
This model may come with interesting quirks, as the methodology is still new and untested. The code used to generate the model is available in the Python notebook [ortho_cookbook.ipynb](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb).

Please note that the model may still refuse to answer certain requests, even after the weights have been manipulated to inhibit refusal.