This model uses the Alpaca prompt format.

This is a LoRA finetune of mistralai/Mistral-Small-24B-Base-2501, trained on anthracite-org/kalo-opus-instruct-22k-no-refusal for ~6M tokens.
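Since the model expects Alpaca-formatted prompts, inputs should follow the standard Alpaca template. A minimal sketch of a prompt builder (the function name and wrapping code are illustrative, not part of this model card; the template text is the standard Alpaca one):

```python
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Assemble a prompt in the standard Alpaca format."""
    if input_text:
        # Variant with an optional context/input block
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    # Instruction-only variant
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Summarize the plot of Hamlet in one sentence.")
```

The resulting string is what you would pass to the tokenizer; the model's completion begins after the `### Response:` marker.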
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 28.35 |
| IFEval (0-Shot) | 62.44 |
| BBH (3-Shot) | 33.02 |
| MATH Lvl 5 (4-Shot) | 18.05 |
| GPQA (0-Shot) | 14.54 |
| MuSR (0-Shot) | 12.09 |
| MMLU-PRO (5-Shot) | 29.94 |
Model tree for SaisExperiments/Not-So-Small-Alpaca-24B