failspy committed on
Commit 4c4c3cc
1 Parent(s): 24d0a3a

Update README.md

Files changed (1)
  1. README.md +18 -11
README.md CHANGED
@@ -1,23 +1,30 @@
  ---
  library_name: transformers
- tags: []
+ license: llama3
  ---
- # Llama-3-8B-Instruct-abliterated-v2 Model Card
-
- This is meta-llama/Llama-3-8B-Instruct with orthogonalized bfloat16 safetensor weights, generated with the methodology described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)', which I encourage you to read to understand more.
-
- TL;DR: this model has had certain weights manipulated to "inhibit" the model's ability to express refusal. It is not in any way _guaranteed_ that it won't refuse you or misunderstand your request; it may still lecture you about ethics/safety, etc. It is tuned in all other respects the same as the original 8B instruct model, just with the strongest refusal direction orthogonalized out.
-
- [GGUF Quants Here](https://huggingface.co/failspy/Llama-3-8B-Instruct-abliterated-v2-GGUF)
-
- ## V2 statement
-
- This model was trained with more data to better pinpoint the "refusal direction".
-
- This model is MUCH better at directly and succinctly answering requests without producing even so much as disclaimers.
-
- ## Quirkiness awareness notice
-
- This model may come with interesting quirks, as I obviously haven't extensively tested it and the methodology is so new. I encourage you to play with the model and post any quirks you notice in the community tab, as that will help us further understand what side effects this orthogonalization has. The code I used to generate it (and my published 'Kappa-3' model, which is just Phi-3 with the same methodology applied) is available in the Python notebook [ortho_cookbook.ipynb](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb).
-
- If you manage to develop further improvements, please share! This is really the most primitive way to use ablation, but there are other possibilities that I believe are as-yet unexplored.
+
+ # Model Card for Llama-3-8B-Instruct-abliterated-v2
+
+ ## Overview
+ Llama-3-8B-Instruct-abliterated-v2 is an orthogonalized version of meta-llama/Llama-3-8B-Instruct and an improvement on the previous-generation Llama-3-8B-Instruct-abliterated. Certain weights have been manipulated to inhibit the model's ability to express refusal.
+
+ [Join the Cognitive Computations Discord!](https://discord.gg/cognitivecomputations)
+
+ ## Details
+
+ * The model was trained with more data to better pinpoint the "refusal direction".
+ * This model is MUCH better at directly and succinctly answering requests without producing even so much as disclaimers.
+
+ ## Methodology
+
+ The methodology used to generate this model is described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)'.
+
+ ## Quirks and Side Effects
+ This model may come with interesting quirks, as the methodology is still new and not extensively tested. The code used to generate the model is available in the Python notebook [ortho_cookbook.ipynb](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb).
+ Please note that the model may still refuse to answer certain requests, even after the weights have been manipulated to inhibit refusal.
+
+ ## Availability
+
+ GGUF quants are available [here](https://huggingface.co/failspy/Llama-3-8B-Instruct-abliterated-v2-GGUF).
+
+ ## How to Use
+ This model can be loaded and run with the Transformers library.
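
As a usage sketch for the "How to Use" section of the updated card: the snippet below loads the model with the Transformers library and runs a chat-templated generation. The repository id `failspy/Llama-3-8B-Instruct-abliterated-v2` is inferred from the GGUF link and should be treated as an assumption, and the generation settings are illustrative only.

```python
# Minimal sketch of loading the model with Transformers.
# The repo id below is assumed from the GGUF link; verify before use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "failspy/Llama-3-8B-Instruct-abliterated-v2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the card says weights are bfloat16 safetensors
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain what 'abliteration' means."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a short completion; sampling parameters here are arbitrary.
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```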
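For readers of the Methodology section, here is a minimal sketch of the core idea from the linked post: a "refusal direction" is estimated in the residual stream, then projected out of the weight matrices that write into it. This is not the published ortho_cookbook.ipynb code; the function, shapes, and names are illustrative assumptions.

```python
# Illustrative sketch of directional ablation ("orthogonalization"),
# NOT the notebook's actual implementation. Shapes and names are assumptions.
import torch

def orthogonalize(weight: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Project the refusal direction out of the matrix's output space,
    so the layer can no longer write along that direction.

    weight:      (d_model, d_in) matrix that writes into the residual stream.
    refusal_dir: (d_model,) direction estimated from harmful-vs-harmless
                 activation differences (see the linked post).
    """
    r = refusal_dir / refusal_dir.norm()       # unit-length direction
    # Subtract the projection onto r: W' = W - r (r^T W)
    return weight - torch.outer(r, r @ weight)

# Toy example (d_model=8, d_in=4): after ablation, the output of the weight
# has no component along the refusal direction.
W = torch.randn(8, 4)
r = torch.randn(8)
W_abl = orthogonalize(W, r)
print(torch.allclose(r / r.norm() @ W_abl, torch.zeros(4), atol=1e-5))  # True
```

In the actual procedure this projection would be applied to every matrix that writes into the residual stream (embeddings, attention output projections, MLP down-projections), which is why the released weights behave like the original model in all other respects.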