A DPO fine-tune of mhm-7b-v1.3 on Intel/orca_dpo_pairs.
Based on Mistral. The base model was created with the dare_ties merge method using models from the Open LLM Leaderboard; it is the result of three merges involving seven different models.
Just an experiment.
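For reference, a dare_ties merge like the one described above is typically expressed as a mergekit YAML config. This is only a hedged sketch: the actual source models and weights were not disclosed, so the model names, weights, and densities below are placeholders, not the real recipe.

```yaml
# Hypothetical mergekit config sketch for a dare_ties merge.
# All model names, weights, and densities are placeholders —
# the actual models used in this merge were not disclosed.
models:
  - model: some-org/model-a        # placeholder
    parameters:
      density: 0.5                 # fraction of delta weights kept (DARE drop rate)
      weight: 0.4                  # contribution to the merged model
  - model: some-org/model-b        # placeholder
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
dtype: bfloat16
```

A config like this would be run with `mergekit-yaml config.yml ./output-dir`; repeating the process with different model sets is how a multi-stage merge such as this one is usually built up.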