
Qwen2-7B-Instruct-abliterated

Introduction

Abliterated version of Qwen2-7B-Instruct, produced with failspy's abliteration notebook. The model's strongest refusal directions have been ablated via weight orthogonalization, but the model may still refuse your request, misunderstand your intent, or provide unsolicited advice regarding ethics or safety.
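
To make the mechanism concrete, here is a minimal sketch of the weight-orthogonalization step (illustrative only, not failspy's notebook verbatim; it assumes a refusal direction has already been estimated, e.g. as the difference of mean activations on harmful versus harmless prompts):

import torch

def ablate_direction(W, refusal_dir):
    # W: (d_model, d_in) weight of a module that writes into the residual stream,
    #    e.g. an attention o_proj or MLP down_proj in a Qwen2 block.
    # refusal_dir: (d_model,) estimated refusal direction.
    r = refusal_dir / refusal_dir.norm()   # unit vector along the refusal direction
    proj = torch.outer(r, r)               # projector onto that direction
    return W - proj @ W                    # module outputs can no longer point along r

Orthogonalizing every matrix that writes into the residual stream bakes the intervention into the saved weights, so no runtime hooks are needed at inference time.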

Quickstart

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "natong19/Qwen2-7B-Instruct-abliterated"
device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=256
)
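# keep only the newly generated tokens, dropping the prompt from each sequence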
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
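
If you prefer to see the reply as it is generated, transformers' TextStreamer can be dropped into the same setup (an optional variation, not part of the original quickstart):

from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    model_inputs.input_ids,
    max_new_tokens=256,
    streamer=streamer  # prints the response to stdout token by token
)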

Evaluation

Evaluation framework: lm-evaluation-harness 0.4.2

| Datasets | Qwen2-7B-Instruct | Qwen2-7B-Instruct-abliterated |
| --- | --- | --- |
| ARC (25-shot) | 62.5 | 62.5 |
| GSM8K (5-shot) | 73.0 | 72.2 |
| HellaSwag (10-shot) | 81.8 | 81.7 |
| MMLU (5-shot) | 70.7 | 70.5 |
| TruthfulQA (0-shot) | 57.3 | 55.0 |
| Winogrande (5-shot) | 76.2 | 77.4 |
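
Scores like these can be reproduced with lm-evaluation-harness; the sketch below uses its Python API for a single benchmark. The task name and few-shot count are assumptions inferred from the "ARC (25-shot)" row above, not the exact command used for this card:

import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=natong19/Qwen2-7B-Instruct-abliterated,dtype=bfloat16",
    tasks=["arc_challenge"],   # assumed harness task behind the "ARC" row
    num_fewshot=25,
)
print(results["results"])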