---
license: apache-2.0
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
library_name: transformers
pipeline_tag: text-generation
---

🏠 Socials

🤗 HF Repo • 🐦 Twitter

# Evangelion-7B

I was just curious to see if something special might happen if one uses:

$$
\text{high-quality DPO dataset} + \text{merge of a DPO-optimized and a non-DPO-optimized model}
$$

The underlying model I used was `Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp`.

# Dataset

Dataset: `argilla/distilabel-intel-orca-dpo-pairs`

The dataset ends up with roughly 3,000 samples, but they are high quality (according to the `chosen_score`). The following filters were applied to the original dataset:

```python
from datasets import load_dataset

dataset = load_dataset("argilla/distilabel-intel-orca-dpo-pairs", split="train")

# Keep only decisive, high-scoring pairs that are not part of the GSM8K train split
dataset = dataset.filter(
    lambda r: r["status"] != "tie"
    and r["chosen_score"] >= 8
    and not r["in_gsm8k_train"]
)
```

# Chat Template

I decided to go with ChatML, which is the template used for OpenHermes-2.5. By the way, I integrated the chat template into the model's tokenizer.

```
<|im_start|>system
{system}<|im_end|>
<|im_start|>user
{user}<|im_end|>
<|im_start|>assistant
{assistant}<|im_end|>
```
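Since the ChatML template is baked into the tokenizer, `tokenizer.apply_chat_template` can build prompts directly. Below is a minimal usage sketch; the repo id is a placeholder (substitute the actual Hugging Face repo for this model):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; replace with the actual repository for Evangelion-7B
model_id = "your-username/Evangelion-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain DPO in one sentence."},
]

# The tokenizer's built-in ChatML template renders the <|im_start|>/<|im_end|> prompt
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```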