
A DPO fine-tune of mhm-7b-v1.3 on the Intel/orca_dpo_pairs dataset.
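A minimal sketch of what a DPO run on Intel/orca_dpo_pairs can look like with TRL's `DPOTrainer`. The base checkpoint name, prompt formatting, and hyperparameters below are illustrative assumptions, not the recipe actually used for this model (older TRL versions take `tokenizer=` instead of `processing_class=`):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "h2m/mhm-7b-v1.3"  # assumed base checkpoint name
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Intel/orca_dpo_pairs provides system/question/chosen/rejected columns;
# map them onto the prompt/chosen/rejected fields DPOTrainer expects.
dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.map(
    lambda row: {
        "prompt": f"{row['system']}\n{row['question']}",
        "chosen": row["chosen"],
        "rejected": row["rejected"],
    }
)

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="mhm-7b-v1.3-dpo", beta=0.1),  # beta is an assumption
    train_dataset=dataset,
    processing_class=tokenizer,
)
trainer.train()
```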

Based on Mistral. Created using the `dare_ties` merge method with models from the Open LLM Leaderboard; across three merges involving seven different models, this was the result. A hypothetical merge config is sketched below.
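A hypothetical sketch of a `dare_ties` merge via mergekit's Python entry points (the card does not list the seven models actually merged, so the model names, density/weight values, and the `run_merge` invocation are all assumptions; the equivalent YAML config passed to the `mergekit-yaml` CLI is the more common route):

```python
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Placeholder fine-tunes; the real merge used seven models from the
# Open LLM Leaderboard that are not named on this card.
config = MergeConfiguration.model_validate({
    "merge_method": "dare_ties",
    "base_model": "mistralai/Mistral-7B-v0.1",
    "models": [
        {
            "model": "example-org/mistral-7b-finetune-a",
            "parameters": {"density": 0.5, "weight": 0.5},
        },
        {
            "model": "example-org/mistral-7b-finetune-b",
            "parameters": {"density": 0.5, "weight": 0.5},
        },
    ],
    "dtype": "float16",
})
run_merge(config, out_path="./merged", options=MergeOptions(cuda=False))
```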

Just an experiment.
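A minimal sketch of loading this checkpoint for inference with transformers; the prompt and generation settings are illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "h2m/mhm-7b-v1.3-DPO-1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Explain what DPO fine-tuning does in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=200, do_sample=True, temperature=0.7
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```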

Model size: 7.24B parameters (FP16, Safetensors)
