
merge

This is a merge of pre-trained language models created using mergekit.

Merge Method

This model was merged using the SLERP merge method.
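SLERP (spherical linear interpolation) blends two checkpoints by interpolating each pair of weight tensors along the arc between them rather than along a straight line, which tends to preserve the scale and direction of the weights better than plain averaging. The sketch below is a minimal illustration of that interpolation step, not mergekit's actual implementation; the parameter name in the commented usage is hypothetical.

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors.

    t=0 returns v0, t=1 returns v1; intermediate values follow the
    great-circle arc between the flattened tensors.
    """
    v0_flat, v1_flat = v0.flatten().float(), v1.flatten().float()
    # Normalize copies so the angle between the tensors is well defined.
    v0_unit = v0_flat / (v0_flat.norm() + eps)
    v1_unit = v1_flat / (v1_flat.norm() + eps)
    dot = torch.clamp(torch.dot(v0_unit, v1_unit), -1.0, 1.0)
    theta = torch.acos(dot)
    # Fall back to linear interpolation when the tensors are nearly colinear.
    if theta.abs() < 1e-4:
        out = (1.0 - t) * v0_flat + t * v1_flat
    else:
        sin_theta = torch.sin(theta)
        out = (torch.sin((1.0 - t) * theta) / sin_theta) * v0_flat \
            + (torch.sin(t * theta) / sin_theta) * v1_flat
    return out.reshape(v0.shape).to(v0.dtype)

# Hypothetical usage: blend the same parameter taken from two source checkpoints.
# merged = slerp(0.5, state_a["model.layers.0.mlp.up_proj.weight"],
#                     state_b["model.layers.0.mlp.up_proj.weight"])
```

In practice mergekit applies this tensor-by-tensor across the two source models, with the interpolation factor controlled by the merge configuration.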

Models Merged

The following models were included in the merge:

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 75.67 |
| AI2 Reasoning Challenge (25-Shot) | 71.76 |
| HellaSwag (10-Shot)               | 88.16 |
| MMLU (5-Shot)                     | 64.94 |
| TruthfulQA (0-shot)               | 73.18 |
| Winogrande (5-shot)               | 82.87 |
| GSM8k (5-shot)                    | 73.09 |