# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the Model Stock merge method, with bamec66557/MNRP_0.5 as the base model.
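Model Stock derives its merge weights from the geometry of the fine-tuned checkpoints rather than from hand-tuned coefficients: it averages the fine-tuned models, then interpolates that average back toward the base model by a ratio computed from how strongly the fine-tunes agree with each other. The Python sketch below illustrates the per-tensor idea as described in the Model Stock paper; the function name and the epsilon stabilizer are my own, and mergekit's actual implementation is the authoritative one.

```python
import torch

def model_stock_layer(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Rough per-tensor sketch of the Model Stock merge idea."""
    k = len(finetuned)
    # Task vectors: how each fine-tune moved away from the base weights.
    deltas = [w - base for w in finetuned]
    # Average pairwise cosine similarity between task vectors.
    cos_sims = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i].flatten(), deltas[j].flatten()
            cos_sims.append(torch.dot(a, b) / (a.norm() * b.norm() + 1e-8))
    cos_theta = torch.stack(cos_sims).mean() if cos_sims else torch.tensor(1.0)
    # Interpolation ratio from the paper: t = k*cos(theta) / (1 + (k-1)*cos(theta)).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    # Pull the plain average of the fine-tunes back toward the base.
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```

When the task vectors are nearly parallel (cos θ close to 1), t approaches 1 and the merge is essentially the plain average of the fine-tunes; as they diverge, t shrinks and the result stays closer to the base weights.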
### Models Merged

The following models were included in the merge:

* Infermatic/MN-12B-Inferor-v0.1
* redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS
* crestf411/MN-Slush
* DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
* nbeerbower/mistral-nemo-wissenschaft-12B
* ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: bamec66557/MNRP_0.5
  - model: Infermatic/MN-12B-Inferor-v0.1
  - model: redrix/AngelSlayer-12B-Unslop-Mell-RPMax-DARKNESS
  - model: crestf411/MN-Slush
  - model: DavidAU/MN-GRAND-Gutenberg-Lyra4-Lyra-12B-DARKNESS
  - model: nbeerbower/mistral-nemo-wissenschaft-12B
  - model: ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2
merge_method: model_stock
base_model: bamec66557/MNRP_0.5
parameters:
  weight: 1
  density: 1
  normalize: true
  int8_mask: true
tokenizer_source: union
parameters:
  normalize: false
  int8_mask: true
  epsilon: 0.05
  lambda: 1
dtype: bfloat16
```
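To reproduce the merge locally, the configuration above can be fed to mergekit either through its `mergekit-yaml` CLI or its Python entry point. The sketch below assumes a recent mergekit release; the config path and output directory are placeholders. Note that the configuration declares `parameters:` twice at the top level, so most YAML parsers will keep only the second block.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge recipe from the card's YAML (path is a placeholder).
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge; the options shown are illustrative, not requirements.
run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # write a tokenizer into the output directory
        lazy_unpickle=True,   # lower peak memory while loading shards
    ),
)
```

The CLI equivalent is `mergekit-yaml config.yml ./merged-model`.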
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|--------|-------|
| Avg. | 22.57 |
| IFEval (0-shot) | 38.52 |
| BBH (3-shot) | 34.07 |
| MATH Lvl 5 (4-shot) | 12.46 |
| GPQA (0-shot) | 9.40 |
| MuSR (0-shot) | 11.28 |
| MMLU-PRO (5-shot) | 29.69 |
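The reported average is the mean of the six benchmark scores: (38.52 + 34.07 + 12.46 + 9.40 + 11.28 + 29.69) / 6 = 22.57.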