
# Qwen2-7B-Instruct-abliterated-GGUF

- Model: Qwen2-7B-Instruct-abliterated, made by natong19
- Based on original model: Qwen2-7B-Instruct, created by Qwen

## Quantization notes

Quantized with llama.cpp b3154, using an imatrix file generated from the Exllamav2 calibration dataset.

05.10.2024: Added quants for ARM devices: Q4_0_4_4 (low-end), Q4_0_4_8, Q4_0_8_8 (high-end).
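These GGUF files work with any llama.cpp-based frontend. As a minimal sketch of loading one of the quants with the llama-cpp-python bindings (the `.gguf` file name below is an assumption; substitute whichever quant you downloaded):

```python
# Minimal sketch using llama-cpp-python; the model_path file name is
# hypothetical - point it at the quant file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2-7B-Instruct-abliterated.Q4_K_M.gguf",  # hypothetical file name
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if available
)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Give me a short introduction to large language models."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```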

## Original model card

### Qwen2-7B-Instruct-abliterated

#### Introduction

Abliterated version of Qwen2-7B-Instruct using failspy's notebook. The model's strongest refusal directions have been ablated via weight orthogonalization, but the model may still refuse your request, misunderstand your intent, or provide unsolicited advice regarding ethics or safety.
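For intuition, weight orthogonalization projects an estimated refusal direction out of weight matrices that write into the residual stream. The following is a minimal sketch of that projection, not the actual notebook code; `W` and `refusal_dir` are placeholder names:

```python
import torch

def ablate_direction(W: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Project the refusal direction out of a weight matrix that writes to the
    residual stream: W' = W - r (r^T W), where r is a unit vector of length
    d_model and W has shape (d_model, d_in)."""
    r = refusal_dir / refusal_dir.norm()  # normalize the refusal direction
    return W - torch.outer(r, r @ W)      # subtract the component along r
```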

#### Quickstart

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "natong19/Qwen2-7B-Instruct-abliterated"
device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=256
)
# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

#### Evaluation

Evaluation framework: lm-evaluation-harness 0.4.2

| Dataset | Qwen2-7B-Instruct | Qwen2-7B-Instruct-abliterated |
|---|---|---|
| ARC (25-shot) | 62.5 | 62.5 |
| GSM8K (5-shot) | 73.0 | 72.2 |
| HellaSwag (10-shot) | 81.8 | 81.7 |
| MMLU (5-shot) | 70.7 | 70.5 |
| TruthfulQA (0-shot) | 57.3 | 55.0 |
| Winogrande (5-shot) | 76.2 | 77.4 |
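
As a rough guide to reproducing a row of this table (my invocation, not the author's; task names follow lm-evaluation-harness conventions), the 0.4.x Python API can be driven like so:

```python
import lm_eval

# Reproduce the ARC row (25-shot); the other rows use their own few-shot
# counts (GSM8K 5, HellaSwag 10, MMLU 5, TruthfulQA 0, Winogrande 5).
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=natong19/Qwen2-7B-Instruct-abliterated,dtype=auto",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```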