QuantFactory/Llama-3.2-3B-Instruct-abliterated-GGUF
This is a quantized version of huihui-ai/Llama-3.2-3B-Instruct-abliterated, created using llama.cpp.
Original Model Card
🦙 Llama-3.2-3B-Instruct-abliterated
This is an uncensored version of Llama 3.2 3B Instruct created with abliteration (see this article to learn more about it).
Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.
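At its core, abliteration works by identifying a "refusal direction" in the model's hidden activations and orthogonalizing the weights or activations against it. The sketch below illustrates only that projection step on toy NumPy data; the array shapes, the random "refusal" vector, and the function name are illustrative assumptions, not the actual extraction pipeline used for this model.

```python
import numpy as np

def ablate_direction(activations: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the component of each activation row that lies along `direction`.

    This is the orthogonal-projection step at the heart of abliteration:
    v' = v - (v . r_hat) * r_hat, applied row-wise.
    """
    r_hat = direction / np.linalg.norm(direction)
    return activations - np.outer(activations @ r_hat, r_hat)

# Toy data standing in for real hidden states and a real refusal direction.
rng = np.random.default_rng(0)
acts = rng.normal(size=(4, 8))       # 4 token activations, hidden size 8
refusal = rng.normal(size=8)         # hypothetical refusal direction

cleaned = ablate_direction(acts, refusal)

# After ablation, the activations have zero component along the direction.
residual = cleaned @ (refusal / np.linalg.norm(refusal))
print(np.allclose(residual, 0))  # → True
```

In the real technique the direction is estimated from contrasting activations on harmful vs. harmless prompts, and the projection is baked into the weight matrices rather than applied at inference time.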
Evaluations
The following benchmarks were re-evaluated; each score is the average across runs for that test.
| Benchmark | Llama-3.2-3B-Instruct | Llama-3.2-3B-Instruct-abliterated |
|---|---|---|
| IF_Eval | 76.55 | 76.76 |
| MMLU Pro | 27.88 | 28.00 |
| TruthfulQA | 50.55 | 50.73 |
| BBH | 41.81 | 41.86 |
| GPQA | 28.39 | 28.41 |
The script used for evaluation is included in this repository at /eval.sh.
Downloads last month: 560
Hardware compatibility
Quantized variants are provided at 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit precision.
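As a rough guide to which variant fits your hardware, the file size of a quantized model scales with parameters x bits per weight. The sketch below uses that simple formula with an assumed 3.2B parameter count; real GGUF files are somewhat larger because some tensors (e.g. embeddings) stay at higher precision and quantization blocks carry scale metadata, so treat these as lower bounds.

```python
# Rough size estimate for a ~3.2B-parameter model at each offered bit-width.
# The parameter count and the bare bits/8 formula are simplifying assumptions.
PARAMS = 3.2e9

def approx_size_gib(bits_per_weight: float) -> float:
    """Approximate model size in GiB: params * bits / 8 bytes."""
    return PARAMS * bits_per_weight / 8 / 1024**3

for bits in (2, 3, 4, 5, 6, 8):
    print(f"{bits}-bit: ~{approx_size_gib(bits):.1f} GiB")
```

For comfortable CPU or GPU inference you generally want the estimated size plus context-cache overhead to fit in available RAM/VRAM.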
Model tree for QuantFactory/Llama-3.2-3B-Instruct-abliterated-GGUF
Base model: meta-llama/Llama-3.2-3B-Instruct