
A DPO fine-tune of [abacusai/MM-Orc-Vic-bagel-34b-c1000](https://huggingface.co/abacusai/MM-Orc-Vic-bagel-34b-c1000) on the Bagel DPO dataset.
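The card does not include the training code. As a rough illustration of what a DPO fine-tune like this looks like, here is a minimal sketch using the `trl` library's `DPOTrainer`. It assumes an older `trl` release (around 0.7, where `beta` and `tokenizer` are passed directly to the trainer; newer releases move them into `DPOConfig`), the dataset path is a placeholder rather than the actual Bagel DPO repository, and all hyperparameters are assumptions, not the values used for this model.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "abacusai/MM-Orc-Vic-bagel-34b-c1000"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# DPO expects preference pairs with "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("your-org/bagel-dpo", split="train")  # placeholder dataset id

args = TrainingArguments(
    output_dir="mm-ov-bagel-dpo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
    logging_steps=10,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,        # with None, trl creates a frozen reference copy internally
    args=args,
    beta=0.1,              # strength of the KL penalty toward the reference model
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```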

Evaluation Results

Benchmarks reported: Average, ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, GSM8K.
Model size: 34.4B params (Safetensors) · Tensor type: FP16
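At roughly 34B parameters in FP16, the weights take about 69 GB, so loading generally requires multiple GPUs or CPU offloading. A minimal loading sketch with `transformers` (requires `accelerate` for `device_map="auto"`; the prompt format is not documented on this card, so the plain-text prompt below is an assumption):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abacusai/MM-OV-bagel-DPO-34b-c1000-250"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the FP16 safetensors weights
    device_map="auto",          # shard across available GPUs / offload as needed
)

prompt = "Explain direct preference optimization in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```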

Dataset used to train abacusai/MM-OV-bagel-DPO-34b-c1000-250: the Bagel DPO dataset.