# Llama-3-1-big-thoughtful-passthrough-merge-2

State of the art for its size class on the Open LLM Leaderboard by acc_norm score as of 9/30/2024 (results viewable at open-llm-leaderboard/Solshine__Llama-3-1-big-thoughtful-passthrough-merge-2-details), with an acc_norm of 31.5% according to the leaderboard's test-result dataset. Because this is a merged model with minimal retraining, this score may not reflect human-evaluated general performance in some domains.
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method

This model was merged using the passthrough merge method.
### Models Merged

The following models were included in the merge:
- Solshine/reflection-llama-3.1-8B
- Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder
- mlabonne/Hermes-3-Llama-3.1-8B-lorablated
### Configuration

The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - layer_range: [0, 16]
        model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated
  - sources:
      - layer_range: [4, 20]
        model: Solshine/reflection-llama-3.1-8B
  - sources:
      - layer_range: [8, 24]
        model: Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder
  - sources:
      - layer_range: [12, 28]
        model: Solshine/reflection-llama-3.1-8B
  - sources:
      - layer_range: [16, 32]
        model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated
merge_method: passthrough
dtype: float16
```
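Since the passthrough method concatenates the listed layer slices rather than averaging them, the depth of the resulting model can be read directly off the configuration. A minimal sketch (my own illustration, not part of the model card) tallying the layers:

```python
# Layer slices from the mergekit passthrough config above:
# each (start, end) range contributes end - start layers.
slices = [
    ("mlabonne/Hermes-3-Llama-3.1-8B-lorablated", (0, 16)),
    ("Solshine/reflection-llama-3.1-8B", (4, 20)),
    ("Solshine/Meta-Llama-3.1-8B-Instruct-Python-Coder", (8, 24)),
    ("Solshine/reflection-llama-3.1-8B", (12, 28)),
    ("mlabonne/Hermes-3-Llama-3.1-8B-lorablated", (16, 32)),
]

# Passthrough stacks the slices end to end, so layer counts simply add up.
total_layers = sum(end - start for _, (start, end) in slices)
print(total_layers)  # 80 layers, versus 32 in a single Llama-3.1-8B
```

The overlapping ranges mean several blocks of layers appear twice, once from each source, which is why this "big" merge is considerably deeper (and larger) than any of its 8B parents.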
## Evaluation Results

Open LLM Leaderboard scores:

| Benchmark | Metric | Score |
|---|---|---|
| IFEval (0-shot) | strict accuracy | 25.47 |
| BBH (3-shot) | normalized accuracy | 5.01 |
| MATH Lvl 5 (4-shot) | exact match | 0.15 |
| GPQA (0-shot) | acc_norm | 1.23 |
| MuSR (0-shot) | acc_norm | 6.75 |
| MMLU-PRO (5-shot, test set) | accuracy | 2.06 |