---
library_name: transformers
license: cc-by-nc-4.0
---

# UltraMerge-7B

This model is an experimental DPO fine-tune of automerger/YamShadow-7B on the following datasets:

- mlabonne/truthy-dpo-v0.1
- mlabonne/distilabel-intel-orca-dpo-pairs
- mlabonne/chatml-OpenHermes2.5-dpo-binarized-alpha
- mlabonne/ultrafeedback-binarized-preferences-cleaned

I'm not sure which chat template works best for this model. Mistral-Instruct or ChatML are the most likely candidates.
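For reference, here is a minimal sketch of how the same single-turn prompt looks in each of the two candidate formats, built with plain string formatting (this assumes the standard Mistral-Instruct and ChatML conventions, not any template shipped with this particular checkpoint):

```python
def mistral_instruct_prompt(user_msg: str) -> str:
    # Mistral-Instruct wraps the user turn in [INST] ... [/INST]
    # after the beginning-of-sequence token.
    return f"<s>[INST] {user_msg} [/INST]"


def chatml_prompt(user_msg: str) -> str:
    # ChatML delimits each turn with <|im_start|>role ... <|im_end|>,
    # then opens the assistant turn for the model to complete.
    return (
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )


print(mistral_instruct_prompt("What is DPO?"))
print(chatml_prompt("What is DPO?"))
```

If the tokenizer ships a `chat_template`, `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` is the safer option, since it resolves the format question automatically.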