---
library_name: transformers
tags: []
---
# Llama-3-70B-Instruct-abliterated Model Card

This is meta-llama/Llama-3-70B-Instruct with orthogonalized bfloat16 safetensor weights, generated using the methodology described in the preview paper/blog post '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)', which I encourage you to read to understand the approach.

TL;DR: this model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in any way _guaranteed_ that it won't refuse you or misunderstand your request, and it may still lecture you about ethics/safety, etc. In all other respects it is tuned the same as the original 70B instruct model, just with the strongest refusal direction orthogonalized out.
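For intuition, here is a minimal sketch of what "orthogonalizing out" a direction means at the weight level, assuming PyTorch and the Hugging Face convention that a linear layer's weight has shape `(d_out, d_in)` with the output rows living in the residual stream. This illustrates the general technique only; the actual code used for this model is in the notebook linked further down.

```python
import torch

def orthogonalize_weights(W: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Project the refusal direction out of a matrix that writes into the
    residual stream (HF convention: weight shape is (d_out, d_in))."""
    v = refusal_dir / refusal_dir.norm()   # unit refusal direction, shape (d_out,)
    return W - torch.outer(v, v @ W)       # W' = (I - v v^T) W

# Toy check: the modified matrix's outputs have no component along v.
W = torch.randn(8, 4)
v = torch.randn(8)
W_ortho = orthogonalize_weights(W, v)
print(torch.allclose(v @ W_ortho, torch.zeros(4), atol=1e-5))  # True
```

Applying this projection to every matrix that writes into the residual stream removes the model's ability to represent that one direction, which is the mechanism the blog post describes.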

## GGUF quants
Uploaded quants (a loading sketch follows the list):

- fp16 (in `main`) - good for converting to other formats or producing the exact quantization you actually want; not recommended for direct use, but obviously highest quality
- q8_0 (in its own branch) - if you've got the spare capacity, might as well?
- q6_0 (in its own branch) - probably the best balance of quality and performance
- q4 (in `main`) - recommended for ~48GB VRAM setups
- q3_k_m (in `main`) - decent quality, but I'd prefer q4 or q3_k_s
- q3_k_s (in `main`) - a perfect fit for ~32GB VRAM setups
- q2 (in `main`) - surprisingly decent quality
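As a quick usage sketch, the GGUF files can be run with the `llama-cpp-python` bindings (among other llama.cpp frontends). The filename and parameters below are illustrative assumptions on my part; check the repo's file list and branches for the actual names.

```python
from llama_cpp import Llama

# Filename and settings are illustrative; substitute the real file from this repo.
llm = Llama(
    model_path="llama-3-70B-Instruct-abliterated.q4.gguf",
    n_ctx=8192,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if they fit
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```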

## For people who like tinkering or are looking to save bandwidth
In this repo I've included `refusal_dir.pth`.
If you already have the Llama-3-70B-Instruct model downloaded, you can use the ortho cookbook to apply the refusal direction to your local copy, producing the same weights you'd download from here.
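For a rough sense of what that looks like before you open the cookbook, here is a minimal hedged sketch of applying a saved refusal direction to local weights. The model id, the assumed shape of `refusal_dir.pth` (`(hidden_size,)`), and the choice of which matrices to modify are all my assumptions; treat the cookbook notebook as the authoritative procedure.

```python
import torch
from transformers import AutoModelForCausalLM

# Model id as written in this card; adjust to the exact hub id of your local copy.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3-70B-Instruct", torch_dtype=torch.bfloat16
)
v = torch.load("refusal_dir.pth").float()
v = v / v.norm()  # unit refusal direction, assumed shape (hidden_size,)

with torch.no_grad():
    for layer in model.model.layers:
        # Assumption: modify the matrices that write into the residual stream,
        # i.e. attention output and MLP down projections.
        for lin in (layer.self_attn.o_proj, layer.mlp.down_proj):
            W = lin.weight.float()  # (d_out, d_in): rows live in the residual stream
            lin.weight.copy_((W - torch.outer(v, v @ W)).to(lin.weight.dtype))

model.save_pretrained("llama-3-70B-Instruct-abliterated-local")
```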

## Quirkiness awareness notice

This model may come with interesting quirks, as the methodology is quite new and I obviously haven't tested it extensively. I encourage you to play with the model and post any quirks you notice in the community tab, as that will help us understand what side effects this orthogonalization has. The code I used to generate it (and my published 'Kappa-3' model, which is just Phi-3 with the same methodology applied) is available in a Python notebook in this repo: specifically, [ortho_cookbook.ipynb](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb).

If you manage to develop further improvements, please share! This is really the most primitive way to use ablation, but there are other possibilities that I believe are as yet unexplored.