

A DPO fine-tune of mhm-7b-v1.3 on Intel/orca_dpo_pairs.
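For reference, DPO trains the policy to widen the log-probability gap between a preferred and a rejected response relative to a frozen reference model. A minimal sketch of the per-pair loss (the summed token log-probabilities and the `beta=0.1` default are illustrative assumptions, not values taken from this model's training run):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the summed token log-probability of the chosen or
    rejected response under the policy or the frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen response, measured relative to the reference model.
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    # -log(sigmoid(logits)), computed in a numerically stable form.
    if logits >= 0:
        return math.log1p(math.exp(-logits))
    return -logits + math.log1p(math.exp(logits))
```

A zero margin gives a loss of `log 2`; the loss shrinks toward zero as the policy's preference for the chosen response grows beyond the reference model's.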

Based on Mistral. Created with dare_ties merging, using models from the Open LLM Leaderboard; this is the result of three merges involving seven different models.

Just an experiment.

Downloads last month: 3,295
Model size: 7.24B params (Safetensors)
Tensor type: FP16