
# AbL3In-15B

This is a test model built with the zeroing method used by elinas/Llama-3-15B-Instruct-zeroed.
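
In the duplicated layer slices, the attention output projection (`o_proj`) and the MLP down projection (`down_proj`) are scaled to 0.0 (see the configuration below), so those blocks initially contribute nothing to the residual stream. A minimal sketch of the effect, using stand-in tensors rather than real Llama-3 weights:

```python
import torch

x = torch.randn(1, 8, 4096)     # stand-in hidden states (batch, seq, hidden dim)
attn_out = torch.randn_like(x)  # stand-in attention output of a duplicated block
mlp_out = torch.randn_like(x)   # stand-in MLP output of a duplicated block

scale = 0.0                     # the o_proj / down_proj scale from the config below
h = x + scale * attn_out        # residual connection after attention
y = h + scale * mlp_out         # residual connection after the MLP
assert torch.equal(y, x)        # at scale 0.0 the duplicated block is a no-op
```

The duplicated layers therefore start out as identity mappings, which keeps the stacked model coherent before any healing.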

If this model pans out the way I hope, I'll heal it and re-upload it with a custom model card like the others. For now, this is just an experiment.

In case anyone asks, AbL3In-15B literally means:

- Ab = Abliterated
- L3 = Llama-3
- In = Instruct
- 15B = it has 15B parameters

## GGUFs

GGUF quantizations by @mradermacher.
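
For full-precision use, the safetensors weights can be loaded with transformers as usual. A minimal sketch (the repo id is taken from this page):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SteelStorage/AbL3In-15B"  # this model's repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge was produced in bfloat16
    device_map="auto",
)

# Quick smoke test with the Llama-3 chat template.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
print(tokenizer.decode(model.generate(prompt, max_new_tokens=64)[0]))
```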

## Merge Details

### Merge Method

This model was merged using the passthrough merge method.
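
Passthrough stacks layer slices end-to-end rather than interpolating weights. As a quick sketch (not mergekit internals), the four slices in the configuration below expand the 32-layer 8B base to 64 layers:

```python
# Layer ranges copied from the configuration below, in order.
slices = [(0, 24), (8, 24), (8, 24), (24, 32)]
layer_plan = [layer for start, end in slices for layer in range(start, end)]

print(len(layer_plan))  # 64 layers vs. 32 in the 8B base -> roughly 15B parameters
```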

### Models Merged

The following models were included in the merge:

- failspy/Meta-Llama-3-8B-Instruct-abliterated-v3

### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 24]
    model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
- sources:
  - layer_range: [8, 24]
    model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [8, 24]
    model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [24, 32]
    model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
```
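
To reproduce the merge, save the configuration as e.g. `config.yml` and run it through mergekit. A sketch based on mergekit's Python API as shown in its README (file name and output path are placeholders); the `mergekit-yaml config.yml ./AbL3In-15B` CLI does the same thing:

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML config above into a mergekit configuration object.
with open("config.yml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./AbL3In-15B",  # placeholder output directory
    options=MergeOptions(
        copy_tokenizer=True,  # carry the Llama-3 tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```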

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 67.46 |
| AI2 Reasoning Challenge (25-Shot) | 61.77 |
| HellaSwag (10-Shot)               | 78.42 |
| MMLU (5-Shot)                     | 66.57 |
| TruthfulQA (0-shot)               | 52.53 |
| Winogrande (5-shot)               | 74.74 |
| GSM8k (5-shot)                    | 70.74 |
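
A single task can be re-run locally with lm-evaluation-harness. A hedged sketch using its `simple_evaluate` entry point; the harness version, task names, and few-shot settings may not match the leaderboard's exact setup:

```python
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=SteelStorage/AbL3In-15B,dtype=bfloat16",
    tasks=["hellaswag"],   # 10-shot in the table above
    num_fewshot=10,
)
print(results["results"]["hellaswag"])
```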