# UltraMerge-7B
This model is an experimental DPO fine-tune of automerger/YamShadow-7B on the following datasets:
- mlabonne/truthy-dpo-v0.1
- mlabonne/distilabel-intel-orca-dpo-pairs
- mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
- mlabonne/ultrafeedback-binarized-preferences-cleaned
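For context, DPO trains on preference pairs (a chosen and a rejected response) from datasets like those above. A minimal sketch of the standard DPO loss for a single pair is below; this is an illustration of the general technique, not the exact training setup used for this model, and the `beta` value is an assumed placeholder:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair:
    -log sigmoid(beta * (chosen log-ratio - rejected log-ratio)),
    where each log-ratio compares the policy to the frozen reference model.
    beta=0.1 is an assumed example value, not the one used for this model."""
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    return -math.log(sigmoid(beta * (chosen_ratio - rejected_ratio)))

# With no margin between chosen and rejected, the loss is -log(0.5) = ln 2.
print(dpo_loss(0.0, 0.0, 0.0, 0.0))
```

The loss shrinks as the policy assigns relatively more probability to the chosen response than the rejected one, compared to the reference model.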
I am not sure which chat template works best for this model; it is probably Mistral-Instruct or ChatML.
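If you try ChatML, a minimal sketch of the format looks like this (assumed from the standard ChatML convention, not confirmed by this model card; in practice you would use the tokenizer's built-in `apply_chat_template` if one is configured):

```python
def format_chatml(messages):
    """Render a list of {"role", "content"} dicts as a ChatML prompt string.
    Each turn is wrapped in <|im_start|>role ... <|im_end|> markers."""
    prompt = ""
    for m in messages:
        prompt += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    # Leave the assistant turn open so the model generates the reply.
    prompt += "<|im_start|>assistant\n"
    return prompt

print(format_chatml([{"role": "user", "content": "Hello!"}]))
```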