Flora DPO

Fine-tuned with this DPO dataset: https://huggingface.co/datasets/mlabonne/chatml_dpo_pairs
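
For context, here is a minimal sketch of how a DPO fine-tune over that dataset might be set up with TRL's `DPOTrainer`. The base model, hyperparameters, and column handling are assumptions for illustration, not the exact recipe used for Flora.

```python
# Minimal DPO fine-tuning sketch (assumptions: TRL's DPOTrainer, a Mistral-7B-class
# base, and that the dataset exposes prompt/chosen/rejected fields).
# Illustrative only; not the exact recipe used to train Flora_DPO_7B.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "mistralai/Mistral-7B-v0.1"  # assumed base; not stated on this card
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# ChatML-formatted preference pairs.
dataset = load_dataset("mlabonne/chatml_dpo_pairs", split="train")

config = DPOConfig(
    output_dir="flora-dpo-sketch",
    beta=0.1,                      # assumed DPO temperature
    per_device_train_batch_size=2,
    num_train_epochs=1,
    learning_rate=5e-6,
)

trainer = DPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,    # `tokenizer=` in older TRL releases
)
trainer.train()
```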

Quants available here:

https://huggingface.co/solidrust/Flora-7B-DPO-AWQ

https://huggingface.co/Test157t/ResplendentAI-Flora_DPO_7B-5bpw-exl2
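
As a usage note, a hedged sketch of loading the full-precision weights with `transformers` (the AWQ and EXL2 quants above use their own loaders). The ChatML chat template is an assumption based on the DPO dataset's formatting; adjust if the tokenizer ships a different template.

```python
# Inference sketch for the full-precision weights (FP16, ~7.24B params).
# Assumption: the tokenizer's chat template is ChatML-compatible, matching the
# formatting of the DPO dataset used for fine-tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ResplendentAI/Flora_DPO_7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```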

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard.

| Metric | Value |
|---|---|
| Avg. | 74.26 |
| AI2 Reasoning Challenge (25-shot) | 71.76 |
| HellaSwag (10-shot) | 88.28 |
| MMLU (5-shot) | 64.13 |
| TruthfulQA (0-shot) | 71.08 |
| Winogrande (5-shot) | 84.53 |
| GSM8k (5-shot) | 65.81 |
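
To roughly reproduce one of these numbers locally, here is a sketch using the lm-evaluation-harness Python API (v0.4+). The few-shot count follows the leaderboard setting listed above; local scores may still differ slightly from the leaderboard, which pins a specific harness version and prompt configuration.

```python
# Rough reproduction sketch with EleutherAI's lm-evaluation-harness,
# shown here for ARC-Challenge at 25-shot.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=ResplendentAI/Flora_DPO_7B,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size="auto",
)
print(results["results"]["arc_challenge"])
```
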
Model size: 7.24B params (FP16, Safetensors)
