
Locutusque/Orca-2-13b-SFT-v6 is microsoft/Orca-2-13b fully fine-tuned on HuggingFaceH4/no_robots, totally-not-an-llm/EverythingLM-data-V3, LDJnr/Capybara, LDJnr/Pure-Dove, LDJnr/LessWrong-Amplify-Instruct, LDJnr/Verified-Camel, mlabonne/guanaco-llama2-1k, and OpenAssistant/oasst_top1_2023-08-25. It achieved a test loss of 0.39 on LDJnr/Verified-Camel.

Make sure to comply with the Microsoft Research License. Please read it before using this model.

This model was trained using the ChatML prompt template.
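A minimal sketch of assembling a ChatML prompt by hand (the system message and user turn below are illustrative, not from this card):

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Wrap a system message and a user turn in ChatML markers.

    ChatML delimits each turn with <|im_start|>role ... <|im_end|>;
    the trailing assistant header cues the model to begin its reply.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",  # illustrative system message
    "Summarize ChatML in one sentence.",
)
print(prompt)
```

If you load the model through `transformers`, the tokenizer's built-in chat templating can produce the same layout; the manual version above just makes the turn markers explicit.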

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 56.15 |
| AI2 Reasoning Challenge (25-Shot) | 60.41 |
| HellaSwag (10-Shot) | 80.46 |
| MMLU (5-Shot) | 59.51 |
| TruthfulQA (0-Shot) | 54.01 |
| Winogrande (5-Shot) | 77.43 |
| GSM8k (5-Shot) | 5.08 |
Model size: 13B params (Safetensors)
Tensor type: BF16
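As a rough back-of-the-envelope check (assuming the 13B parameter count above and 2 bytes per bfloat16 value), the weights alone need on the order of 24 GiB:

```python
params = 13_000_000_000       # parameter count from the card metadata
bytes_per_param = 2           # bfloat16 stores each value in 2 bytes
weights_gib = params * bytes_per_param / 1024**3
print(f"~{weights_gib:.1f} GiB of weights")  # → ~24.2 GiB of weights
```

Actual memory use at inference time is higher once activations and the KV cache are included, so treat this as a lower bound when sizing hardware.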
