A few different attempts at orthogonalization/abliteration of llama-3.1-8b-instruct using variations of the method laid out in "Mechanistically Eliciting Latent Behaviors in Language Models". <br/>
v1 & v2 were destined for the bit bucket <br/>
<br/>
Each of these uses different vectors and has some variation in where the new refusal boundaries lie. None of them seems totally jailbroken.
Advantage: only the down_proj of a single layer needs to be projected, so there is usually very little brain damage (see the sketch below). <br/>
Disadvantage: the difference-of-means method is precisely targeted, while this method requires filtering for interesting control vectors from a selection of prompts. <br/>
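
For illustration, here is a minimal sketch of the single-layer projection step, assuming a Hugging Face `transformers` checkpoint and a control vector already expressed in the residual-stream basis. The layer index and the random `control_vector` below are placeholders, not the values used in these models:

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical setup: model path, layer index, and control vector are illustrative only.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct", torch_dtype=torch.bfloat16
)
LAYER = 12                                               # placeholder layer index
control_vector = torch.randn(model.config.hidden_size)   # stand-in for an elicited vector

with torch.no_grad():
    # down_proj.weight has shape [hidden_size, intermediate_size]
    W = model.model.layers[LAYER].mlp.down_proj.weight
    v = control_vector.to(W.device, W.dtype)
    v = v / v.norm()
    # Remove the component of the layer's output along v:
    # W <- W - v (v^T W), so this MLP can no longer write in that direction.
    W -= v.unsqueeze(1) @ (v.unsqueeze(0) @ W)
```

Because only one weight matrix is modified, the rest of the model is left untouched, which is why the collateral damage tends to be small.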
[https://huggingface.co/lodrick-the-lafted/llama-3.1-8b-instruct-ortho-v3](https://huggingface.co/lodrick-the-lafted/llama-3.1-8b-instruct-ortho-v3) <br/>
[https://huggingface.co/lodrick-the-lafted/llama-3.1-8b-instruct-ortho-v4](https://huggingface.co/lodrick-the-lafted/llama-3.1-8b-instruct-ortho-v4) <br/>
[https://huggingface.co/lodrick-the-lafted/llama-3.1-8b-instruct-ortho-v5](https://huggingface.co/lodrick-the-lafted/llama-3.1-8b-instruct-ortho-v5) <br/>
[https://huggingface.co/lodrick-the-lafted/llama-3.1-8b-instruct-ortho-v6](https://huggingface.co/lodrick-the-lafted/llama-3.1-8b-instruct-ortho-v6) <br/>
[https://huggingface.co/lodrick-the-lafted/llama-3.1-8b-instruct-ortho-v7](https://huggingface.co/lodrick-the-lafted/llama-3.1-8b-instruct-ortho-v7) <br/>