
Qwen2-1.5B-Instruct-Abliterated-GGUF

Model: Qwen2-1.5B-Instruct-Abliterated
Made by: trollek

Based on original model: Qwen2-1.5B-Instruct
Created by: Qwen

Quantization notes

Quantized with llama.cpp release b3154, using an imatrix file generated from the Exllamav2 default calibration dataset.
01.09.2024: Added Q4_0_4_4 (for low-end ARM CPUs), plus Q4_0_4_8 and Q4_0_8_8 (for high-end ARM CPUs).
On my PC with an i7-3770 CPU these are significantly slower than Q4_K_M; on my phone, Q4_0_4_4 is marginally faster than Q4_K_M.
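For reference, a GGUF file from this repo can be run locally with llama.cpp's llama-cli tool (a minimal sketch; the exact filename below is an assumption and should be checked against the repo's actual file list):

```shell
# Assumed filename: adjust to match the GGUF file actually present in the repo.
./llama-cli \
  -m Qwen2-1.5B-Instruct-Abliterated.Q4_K_M.gguf \
  -p "Write a haiku about quantization." \
  -n 128 \
  -t 4   # CPU threads; tune for your machine
```

On ARM devices, swapping the Q4_K_M file for one of the Q4_0_4_4 / Q4_0_4_8 / Q4_0_8_8 variants is the same command with a different `-m` path.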

Original model card

This is an abliterated version of Qwen2-1.5B-Instruct, made with the same procedure as augmxnt/Qwen2-7B-Instruct-deccp, using their code from GitHub with some lines from mlabonne/harmful_behaviors added to the harmful.txt file.

I have not done anything else to the model. Yet.

Model size: 1.54B params
Architecture: qwen2
Available quantizations: 4-bit, 5-bit, 6-bit, 8-bit


Model tree for cgus/Qwen2-1.5B-Instruct-Abliterated-iMat-GGUF