
QuantFactory/Neural-SOVLish-Devil-8B-L3-GGUF

This is a quantized version of saishf/Neural-SOVLish-Devil-8B-L3, created using llama.cpp.
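
For a quick local check, the GGUF files in this repository can be loaded with llama-cpp-python. Below is a minimal sketch, assuming a Q4_K_M file; the exact filenames in this repository may differ.

# Minimal sketch: load a GGUF quant of this model with llama-cpp-python.
# The filename is an assumption; check this repository's file list for the exact name.
from llama_cpp import Llama

llm = Llama(
    model_path="Neural-SOVLish-Devil-8B-L3.Q4_K_M.gguf",  # assumed filename
    n_ctx=8192,        # Llama-3 supports an 8k context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])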

Model Description

This is another "SOVL" style merge, this time using mlabonne/NeuralDaredevil-8B-abliterated.

Daredevil is the first abliterated model series I've tried that feels as smart as base Llama-3-Instruct while also being willing to give instructions for all kinds of illegal things.

Neural Daredevil is trained further on top of the original abliterated model, which should result in a better experience in most scenarios (a band-aid for the damage abliteration causes).

This model should do well in RP; I have yet to test it (waiting for GGUF files @_@).

Merge Method

This model was merged using the Model Stock merge method using mlabonne/NeuralDaredevil-8B-abliterated as a base.

Models Merged

The following models were included in the merge (the base model with each of the ResplendentAI LoRAs applied, as listed in the configuration below):

mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/Aura_Llama3
mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/Smarts_Llama3
mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/Luna_Llama3
mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/BlueMoon_Llama3
mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/RP_Format_QuoteAsterisk_Llama3

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/Aura_Llama3
  - model: mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/Smarts_Llama3
  - model: mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/Luna_Llama3
  - model: mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/BlueMoon_Llama3
  - model: mlabonne/NeuralDaredevil-8B-abliterated+ResplendentAI/RP_Format_QuoteAsterisk_Llama3
merge_method: model_stock
base_model: mlabonne/NeuralDaredevil-8B-abliterated
dtype: bfloat16
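
For reference, Model Stock (Jang et al., 2024) merges each tensor by interpolating between the average of the fine-tuned weights and the base weights, with the interpolation ratio derived from the angle between the task vectors. The sketch below only illustrates the idea; it is not mergekit's implementation, and the function name is made up for this example.

# Illustrative sketch of the Model Stock idea, not mergekit's actual code.
import torch
import torch.nn.functional as F

def model_stock_tensor(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    k = len(finetuned)
    deltas = [ft - base for ft in finetuned]  # task vectors relative to the base model
    # Average pairwise cosine similarity between task vectors
    cos_vals = [
        F.cosine_similarity(deltas[i].flatten(), deltas[j].flatten(), dim=0)
        for i in range(k) for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(cos_vals).mean().clamp(min=0.0)
    # Interpolation ratio from the Model Stock paper
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base

Intuitively, the more the task vectors disagree (lower cosine similarity), the smaller t becomes and the closer the merged tensor stays to the base model, which is why this method tends to preserve base-model quality.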

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

Metric                            | Value
Avg.                              | 72.22
AI2 Reasoning Challenge (25-Shot) | 69.11
HellaSwag (10-Shot)               | 84.77
MMLU (5-Shot)                     | 69.02
TruthfulQA (0-shot)               | 59.05
Winogrande (5-shot)               | 78.30
GSM8k (5-shot)                    | 73.09
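
These are the standard Open LLM Leaderboard tasks. Below is a minimal sketch of reproducing one of them locally with lm-evaluation-harness; the few-shot count mirrors the table, but the leaderboard's exact harness version and settings may differ.

# Minimal sketch: score the unquantized merge on one task with lm-evaluation-harness.
# The leaderboard's exact harness version and settings may differ from this.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=saishf/Neural-SOVLish-Devil-8B-L3,dtype=bfloat16",
    tasks=["hellaswag"],
    num_fewshot=10,
)
print(results["results"]["hellaswag"])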
Format: GGUF
Model size: 8.03B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
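
To pull a specific quant programmatically, huggingface_hub can be used. A minimal sketch follows, with the filename assumed from common QuantFactory naming; check the repository's file list for the real names.

# Minimal sketch: download one quant file from this repository.
# The filename is an assumption; browse the repo's file list for the exact names.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="QuantFactory/Neural-SOVLish-Devil-8B-L3-GGUF",
    filename="Neural-SOVLish-Devil-8B-L3.Q4_K_M.gguf",  # assumed filename
)
print(gguf_path)

The returned path can be passed as model_path in the llama-cpp-python sketch near the top of this card.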

