# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method

This model was merged using the SLERP (spherical linear interpolation) merge method.
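For intuition, SLERP interpolates along the arc between two weight vectors rather than along the straight line between them, which preserves the parameters' norm geometry better than plain averaging. Below is a minimal sketch of the operation on flattened tensors; it is illustrative only, not mergekit's actual implementation:

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    # Measure the angle between the two parameter vectors.
    v0_u = v0 / (np.linalg.norm(v0) + eps)
    v1_u = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0_u, v1_u), -1.0, 1.0)
    omega = np.arccos(dot)
    if np.sin(omega) < eps:
        # Nearly (anti)parallel vectors: fall back to linear interpolation.
        return (1.0 - t) * v0 + t * v1
    # Interpolate along the arc; t=0 returns v0, t=1 returns v1.
    return (np.sin((1.0 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)
```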
### Models Merged

The following models were included in the merge:
* [lemon07r/Gemma-2-Ataraxy-v4d-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4d-9B)
* [zelk12/gemma-2-9B-MT1DMv1t0.25](https://huggingface.co/zelk12/gemma-2-9B-MT1DMv1t0.25)
### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: lemon07r/Gemma-2-Ataraxy-v4d-9B
  - model: zelk12/gemma-2-9B-MT1DMv1t0.25
merge_method: slerp
base_model: lemon07r/Gemma-2-Ataraxy-v4d-9B
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: the base model dominates the input and output layers, zelk12/gemma-2-9B-MT1DMv1t0.25 the middle layers
```
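The `t` list is a gradient: mergekit expands it into one interpolation weight per layer, so `t` ramps from 0 at the first layers up to 1 mid-network and back down to 0 at the last layers. A rough sketch of that expansion, assuming piecewise-linear interpolation across layer depth and Gemma-2-9B's 42 decoder layers:

```python
import numpy as np

# Anchor points from the config's t gradient, assumed (as in mergekit's
# gradient handling) to be interpolated piecewise-linearly across depth.
anchors = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
num_layers = 42  # Gemma-2-9B has 42 decoder layers

depth = np.linspace(0.0, 1.0, num_layers)         # fractional depth of each layer
anchor_pos = np.linspace(0.0, 1.0, len(anchors))  # where each anchor point sits
t_per_layer = np.interp(depth, anchor_pos, anchors)

# t ~ 0 at the first and last layers (base model dominates) and t ~ 1
# around the middle of the stack (the second model dominates).
print(np.round(t_per_layer, 2))
```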
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 34.04 |
| IFEval (0-shot, strict accuracy)  | 73.53 |
| BBH (3-shot, normalized accuracy) | 43.54 |
| MATH Lvl 5 (4-shot, exact match)  | 19.79 |
| GPQA (0-shot, acc_norm)           | 13.65 |
| MuSR (0-shot, acc_norm)           | 16.76 |
| MMLU-PRO (5-shot, accuracy)       | 36.98 |
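## Usage

The merged model loads like any other Gemma-2 checkpoint. A minimal sketch using the standard `transformers` API (the prompt and generation settings are illustrative, not from the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allknowingroger/GemmaSlerp5-10B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

# Gemma-2 models ship with a chat template; the prompt here is illustrative.
messages = [{"role": "user", "content": "Summarize what a SLERP model merge does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```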