This is an uncensored version of Llama 3.1 8B Instruct created with abliteration (see this article to learn more about it).
Special thanks to @FailSpy for the original code and technique. Please follow him if you're interested in abliterated models.
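At a high level, abliteration estimates a "refusal direction" in the model's residual stream from the difference between mean activations on harmful and harmless prompts, then projects that direction out of the weight matrices that write to the residual stream. The toy sketch below illustrates only the core projection step; the tensor names, shapes, and random data are illustrative assumptions, not the actual implementation.

```python
import torch

hidden = 64  # toy hidden size

# Mean residual-stream activations collected at some layer (toy data;
# in practice these come from running the model on prompt sets).
harmful_acts = torch.randn(128, hidden)    # activations on refused prompts
harmless_acts = torch.randn(128, hidden)   # activations on benign prompts

# Refusal direction: normalized difference of the means.
refusal_dir = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
refusal_dir = refusal_dir / refusal_dir.norm()

def orthogonalize(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of the weight's output that writes along `direction`.

    For a weight W whose output lives in the residual stream,
    W' = W - d d^T W zeroes out anything W writes along the unit vector d.
    """
    proj = torch.outer(direction, direction)  # d d^T, shape (hidden, hidden)
    return weight - proj @ weight

# Toy output-projection matrix that writes into the residual stream.
W = torch.randn(hidden, hidden)
W_abliterated = orthogonalize(W, refusal_dir)

# Sanity check: the edited weights can no longer write along refusal_dir.
x = torch.randn(hidden)
print((W_abliterated @ x) @ refusal_dir)  # ~0 up to float error
```

In the real technique this orthogonalization is applied to every matrix that writes to the residual stream (attention output and MLP down-projections), so the refusal behavior is removed from the weights themselves rather than patched at inference time.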
The following data has been re-evaluated; each value reported below is the average across runs for that test.
| Benchmark | Llama-3.1-8B-Instruct | Meta-Llama-3.1-8B-Instruct-abliterated |
|---|---|---|
| IF_Eval | 80.0 | 78.98 |
| MMLU Pro | 36.34 | 35.91 |
| TruthfulQA | 52.98 | 55.42 |
| BBH | 48.72 | 47.0 |
| GPQA | 33.55 | 33.93 |
The script used for evaluation can be found in this repository under `/eval.sh`.
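For reference, a run reproducing these benchmarks might look like the sketch below, which uses EleutherAI's lm-evaluation-harness; the model path, task names, and settings are assumptions and may differ from what `/eval.sh` actually runs.

```python
# Hypothetical sketch using EleutherAI's lm-evaluation-harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    # Assumed model path; substitute the actual HF repo or local checkpoint.
    model_args="pretrained=Meta-Llama-3.1-8B-Instruct-abliterated",
    # Task names vary by harness version; these are guesses for the
    # five benchmarks reported in the table above.
    tasks=["ifeval", "mmlu_pro", "truthfulqa_mc2", "bbh", "gpqa"],
    batch_size=8,
)
print(results["results"])
```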
Base model: meta-llama/Llama-3.1-8B