haiku Collection
🌸 This is a collection of synthetic datasets built to help open language models write better haiku through DPO (Direct Preference Optimization).
This is a very early model trained with Direct Preference Optimization (DPO) on the davanstrien/haiku_dpo dataset, starting from teknium/OpenHermes-2.5-Mistral-7B.
The eventual goal of this model is for it to write "technically correct" haiku.
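For English-language haiku, "technically correct" usually means following the 5-7-5 syllable pattern across three lines. As a rough illustration of how such outputs could be validated (for example, when building preference pairs for DPO), here is a minimal sketch using a simple vowel-group heuristic for syllable counting; the function names and the heuristic itself are illustrative assumptions, not part of the dataset's actual construction pipeline.

```python
import re


def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, discounting a silent final 'e'.

    This is an approximation; a real pipeline would likely use a
    pronunciation dictionary such as CMUdict for accuracy.
    """
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    # Treat a trailing 'e' as silent (e.g. "pond" vs "silence"),
    # except in words ending in 'le' like "little".
    if word.endswith("e") and not word.endswith("le") and count > 1:
        count -= 1
    return max(count, 1)


def is_575(poem: str) -> bool:
    """Check whether a three-line poem follows the 5-7-5 syllable pattern."""
    lines = [line for line in poem.strip().splitlines() if line.strip()]
    if len(lines) != 3:
        return False
    counts = [
        sum(count_syllables(w) for w in re.findall(r"[a-zA-Z']+", line))
        for line in lines
    ]
    return counts == [5, 7, 5]
```

A checker like this could score candidate generations, with conforming haiku used as "chosen" and non-conforming ones as "rejected" responses in DPO preference pairs.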