# ExLlamaV2 quants of cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2

Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.21">turboderp's ExLlamaV2 v0.0.21</a> for quantization.

<b>The "main" branch contains only the measurement.json; download one of the other branches for the model weights.</b>

Each branch holds the model quantized at a specific bits per weight, while "main" carries only the measurement.json used for further conversions.

Original model: <a href="https://huggingface.co/cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2">cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2</a><br>
Calibration dataset: <a href="https://huggingface.co/datasets/cosmicvalor/toxic-qna">toxic-qna</a>

## Available sizes

| Branch | Bits | lm_head bits | VRAM (4k) | VRAM (8K) | VRAM (16k) | VRAM (32k) | Description |
| ----- | ---- | ------- | ------ | ------ | ------ | ------ | ------------ |
| [8_0](https://huggingface.co/Apel-sin/llama-3-8B-abliterated-v2-exl2/tree/8_0) | 8.0 | 8.0 | 10.1 GB | 10.5 GB | 11.5 GB | 13.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Apel-sin/llama-3-8B-abliterated-v2-exl2/tree/6_5) | 6.5 | 8.0 | 8.9 GB | 9.3 GB | 10.3 GB | 12.4 GB | Very similar to 8.0, good tradeoff of size vs performance, **recommended**. |

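To fetch a specific quantization, download the matching branch rather than `main`. A sketch using the Hugging Face CLI, with the `6_5` branch from the table above as an example (local directory name is illustrative):

```shell
# Download only the 6_5 branch (revision) into a local folder.
huggingface-cli download Apel-sin/llama-3-8B-abliterated-v2-exl2 \
  --revision 6_5 \
  --local-dir llama-3-8B-abliterated-v2-exl2-6_5
```

Alternatively, `git clone --single-branch --branch 6_5 https://huggingface.co/Apel-sin/llama-3-8B-abliterated-v2-exl2` works if git-lfs is installed.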
# Model Card for Llama-3-8B-Instruct-abliterated-v2

## Overview
This model card describes the Llama-3-8B-Instruct-abliterated-v2 model, which is an orthogonalized version of the meta-llama/Llama-3-8B-Instruct model, and an improvement upon the previous generation Llama-3-8B-Instruct-abliterated. This variant has had certain weights manipulated to inhibit the model's ability to express refusal.

[Join the Cognitive Computations Discord!](https://discord.gg/cognitivecomputations)

## Details

* The model was trained with more data to better pinpoint the "refusal direction".
* This model is MUCH better at directly and succinctly answering requests, without producing so much as a disclaimer.

## Methodology

The methodology used to generate this model is described in the preview paper/blog post: '[Refusal in LLMs is mediated by a single direction](https://www.alignmentforum.org/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction)'
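The core operation described in that post, projecting a single "refusal direction" out of a weight matrix, can be illustrated with a toy NumPy sketch. This is not the actual abliteration code (see the notebook linked below), and `ablate_direction` is a name invented here:

```python
import numpy as np

def ablate_direction(W, r):
    """Remove the component of W's outputs along direction r.

    Computes W' = W - r r^T W with r normalized, so that r^T W' = 0:
    the matrix can no longer write anything into the r direction.
    """
    r = r / np.linalg.norm(r)
    return W - np.outer(r, r) @ W

# Toy check: after ablation, outputs have no component along r.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
r = rng.normal(size=8)
W_abl = ablate_direction(W, r)
print(np.allclose((r / np.linalg.norm(r)) @ W_abl, 0))  # True
```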

## Quirks and Side Effects
This model may come with interesting quirks, as the methodology is still new and untested. The code used to generate the model is available in the Python notebook [ortho_cookbook.ipynb](https://huggingface.co/failspy/llama-3-70B-Instruct-abliterated/blob/main/ortho_cookbook.ipynb).
Please note that the model may still refuse to answer certain requests, even after the weights have been manipulated to inhibit refusal.

## How to Use
This model is available for use in the Transformers library.  
GGUF quants are available [here](https://huggingface.co/failspy/Llama-3-8B-Instruct-abliterated-v2-GGUF).
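A minimal sketch of loading the unquantized model through Transformers (assumes a GPU with enough memory for the fp16 weights; the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cognitivecomputations/Llama-3-8B-Instruct-abliterated-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Summarize what abliteration does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```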