This model uses the Alpaca prompt format.

This is a LoRA finetune of mistralai/Mistral-Small-24B-Base-2501, trained on anthracite-org/kalo-opus-instruct-22k-no-refusal for ~6M tokens.
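Since the card only names the prompt format, here is a minimal sketch of the standard Alpaca template for reference. The helper function name and example strings are illustrative, not part of this model's release:

```python
# Minimal sketch of the standard Alpaca prompt template.
# Assumption: this model follows the original Alpaca format unchanged.

def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format an instruction (and optional input) in the Alpaca style."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Summarize the plot of Hamlet in one sentence.")
print(prompt)
```

The completed prompt ends at `### Response:`, after which the model's generation begins.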

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|-------|
| Avg.                | 28.35 |
| IFEval (0-shot)     | 62.44 |
| BBH (3-shot)        | 33.02 |
| MATH Lvl 5 (4-shot) | 18.05 |
| GPQA (0-shot)       | 14.54 |
| MuSR (0-shot)       | 12.09 |
| MMLU-PRO (5-shot)   | 29.94 |
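The reported average appears to be the unweighted mean of the six benchmark scores (an assumption; the Open LLM Leaderboard derives its average from normalized scores, which here works out the same). A quick arithmetic check:

```python
# Check that "Avg." matches the unweighted mean of the six scores above.
# Assumption: no per-benchmark weighting is applied.
scores = {
    "IFEval": 62.44,
    "BBH": 33.02,
    "MATH Lvl 5": 18.05,
    "GPQA": 14.54,
    "MuSR": 12.09,
    "MMLU-PRO": 29.94,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 28.35
```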
Model size: 23.6B params (safetensors, BF16)

Model tree for SaisExperiments/Not-So-Small-Alpaca-24B: one of 23 finetunes of the base model; 1 quantization of this model is available.
