Llama 2 13B instruction-finetuned

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 47.84 |
| ARC (25-shot) | 59.39 |
| HellaSwag (10-shot) | 83.88 |
| MMLU (5-shot) | 55.57 |
| TruthfulQA (0-shot) | 46.89 |
| Winogrande (5-shot) | 74.03 |
| GSM8K (5-shot) | 8.04 |
| DROP (3-shot) | 7.06 |
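The reported average appears to be the unweighted arithmetic mean of the seven benchmark scores above. A minimal sketch verifying that (metric names and values taken from the table; the rounding to two decimals is an assumption about how the leaderboard reports the figure):

```python
# Benchmark scores from the evaluation table above.
scores = {
    "ARC (25-shot)": 59.39,
    "HellaSwag (10-shot)": 83.88,
    "MMLU (5-shot)": 55.57,
    "TruthfulQA (0-shot)": 46.89,
    "Winogrande (5-shot)": 74.03,
    "GSM8K (5-shot)": 8.04,
    "DROP (3-shot)": 7.06,
}

# Unweighted mean across all benchmarks, rounded to two decimals.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # matches the reported Avg. of 47.84
```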