
# Neurotic-Jomainotrik-7b-slerp

Neurotic-Jomainotrik-7b-slerp is a merge of the following models using mergekit:

- liminerity/merge
- bardsai/jaskier-7b-dpo-v5.6
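A merge like this can typically be reproduced with the mergekit CLI. The sketch below assumes mergekit is installed and that the YAML configuration further down is saved as `config.yml`; the output directory name is illustrative.

```shell
# Illustrative reproduction sketch (assumes mergekit is available via pip)
pip install mergekit

# Run the SLERP merge described by config.yml; the output path is a placeholder
mergekit-yaml config.yml ./Neurotic-Jomainotrik-7b-slerp
```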

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: liminerity/merge
        layer_range: [0, 32]
      - model: bardsai/jaskier-7b-dpo-v5.6
        layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```
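SLERP (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line, with the `t` values above controlling how far each blend leans toward each parent (0 = base model, 1 = the other model, varying across layers for the attention and MLP filters). A minimal pure-Python sketch of the interpolation, on flat vectors for clarity; the function name and fallback behavior are illustrative, not mergekit's exact implementation:

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherically interpolate between two weight vectors v0 and v1.

    t=0 returns v0, t=1 returns v1; intermediate t follows the arc
    between the two directions, preserving vector norm behavior better
    than plain linear interpolation.
    """
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    # Cosine of the angle between the two vectors, clamped for safety
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)
    if abs(math.sin(theta)) < eps:
        # Nearly parallel vectors: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

For orthogonal unit vectors, `slerp(0.5, [1, 0], [0, 1])` lands on the unit circle midway between them, at roughly `[0.707, 0.707]`, whereas linear interpolation would give `[0.5, 0.5]`.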

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 76.40 |
| AI2 Reasoning Challenge (25-Shot) | 72.95 |
| HellaSwag (10-Shot)               | 89.15 |
| MMLU (5-Shot)                     | 64.28 |
| TruthfulQA (0-shot)               | 77.64 |
| Winogrande (5-shot)               | 85.40 |
| GSM8k (5-shot)                    | 68.99 |