Llama3.1-8B-drill
- This is a merge of some of the highest-scoring Llama 3.1 8B models on the IFEval task of the Open LLM Leaderboard.
- The purpose is to create a model that can follow instructions well.
- It's not meant to be a particularly smart, stylish, or uncensored model.
Update: This turned out to be a mediocre model compared to its parents, even on the IFEval task. Consider using those models instead.
Merge Details
Merge Method
This model was merged with mergekit using the Model Stock merge method, with Meta-Llama-3.1-8B-Instruct as the base model.
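For reference, below is a minimal per-tensor sketch of the Model Stock idea as described in the Model Stock paper (Jang et al., 2024): average the fine-tuned checkpoints, then interpolate back toward the base model with a ratio derived from the geometry of the task vectors. This is illustrative only and is not mergekit's actual `model_stock` code, which handles layer-wise application and numerical edge cases this sketch omits.

```python
import torch

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Illustrative per-tensor Model Stock merge (not mergekit's exact implementation).

    Interpolates between the base weights and the mean of the fine-tuned weights
    with ratio t = k*cos(theta) / ((k-1)*cos(theta) + 1), where cos(theta) is the
    average pairwise cosine similarity of the task vectors (finetuned - base)
    and k is the number of fine-tuned models.
    """
    k = len(finetuned)
    deltas = [(w - base).flatten() for w in finetuned]

    # Average pairwise cosine similarity between task vectors.
    cos_sum, pairs = 0.0, 0
    for i in range(k):
        for j in range(i + 1, k):
            cos_sum += torch.nn.functional.cosine_similarity(
                deltas[i], deltas[j], dim=0
            ).item()
            pairs += 1
    cos_theta = cos_sum / max(pairs, 1)

    # Interpolation ratio from the Model Stock paper.
    t = k * cos_theta / ((k - 1) * cos_theta + 1)

    avg = torch.stack(finetuned).mean(dim=0)
    return t * avg + (1 - t) * base
```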
Models Merged
The following models were included in the merge:
- Dampfinchen/Llama-3.1-8B-Ultra-Instruct
- vicgalle/Configurable-Llama-3.1-8B-Instruct
- allenai/Llama-3.1-Tulu-3-8B
- akjindal53244/Llama-3.1-Storm-8B
Configuration
The following YAML configuration was used to produce this model:
models:
  - model: Configurable-Llama-3.1-8B-Instruct
  - model: Llama-3.1-Tulu-3-8B
  - model: Llama-3.1-8B-Ultra-Instruct
  - model: Llama-3.1-Storm-8B
merge_method: model_stock
base_model: Meta-Llama-3.1-8B-Instruct
parameters:
  normalize: true
  weight: 1.0
dtype: bfloat16
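Assuming the config above is saved as `merge_config.yaml` and mergekit is installed (`pip install mergekit`), the merge can presumably be reproduced with mergekit's `mergekit-yaml` command-line tool, as sketched below. Note that the model entries are shown with the short names used in the config; they would need to resolve to Hugging Face repo ids or local paths on your machine.

```python
import subprocess

# Run mergekit on the YAML config above (assumed saved as merge_config.yaml).
# mergekit-yaml <config> <output_dir> is mergekit's standard CLI entry point;
# --cuda uses the GPU for the tensor math if your mergekit version supports it.
subprocess.run(
    ["mergekit-yaml", "merge_config.yaml", "./Llama3.1-8B-drill", "--cuda"],
    check=True,
)
```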
Open LLM Leaderboard Evaluation Results
Detailed and summarized results are available on the Open LLM Leaderboard.
| Metric | Value (%) |
|---|---|
| Average | 26.62 |
| IFEval (0-Shot) | 76.52 |
| BBH (3-Shot) | 28.79 |
| MATH Lvl 5 (4-Shot) | 16.54 |
| GPQA (0-shot) | 2.35 |
| MuSR (0-shot) | 4.70 |
| MMLU-PRO (5-shot) | 30.84 |
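These numbers come from the Open LLM Leaderboard's harness-based evaluation. A rough sketch of a local reproduction with EleutherAI's lm-evaluation-harness is shown below; the repo id is a placeholder, and the leaderboard task names are assumptions based on the harness's leaderboard task group, so verify them against the task list of your installed version.

```python
import lm_eval  # pip install lm-eval

# Placeholder repo id -- substitute the actual Hugging Face path of this model.
MODEL_ID = "your-username/Llama3.1-8B-drill"

results = lm_eval.simple_evaluate(
    model="hf",
    model_args=f"pretrained={MODEL_ID},dtype=bfloat16",
    # Task names follow the harness's Open LLM Leaderboard grouping; check
    # the available tasks in your installed lm-eval version before running.
    tasks=["leaderboard_ifeval", "leaderboard_bbh", "leaderboard_gpqa"],
    batch_size=8,
)

print(results["results"])
```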