This is another DPO fine-tune of TomGrc/FusionNet_34Bx2_MoE_v0.1, in which all linear parameters of the MoE model were fine-tuned.
It was trained on a single H100 GPU for one hour.
TRL supports the DPO Trainer for training language models from preference data, as described in the paper *Direct Preference Optimization: Your Language Model is Secretly a Reward Model* (Rafailov et al., 2023).
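
For reference, here is a minimal sketch of what such a training run might look like with TRL's DPOTrainer. The LoRA settings, dataset, and step count are illustrative assumptions (the card does not report them), and keyword names vary slightly across TRL versions:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_model = "TomGrc/FusionNet_34Bx2_MoE_v0.1"

model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# LoRA adapters on every linear layer ("all-linear-parameter" fine-tuning).
# With a peft_config, TRL treats the frozen base weights as the implicit
# DPO reference model, so no separate ref_model copy is needed.
peft_config = LoraConfig(
    r=16,                    # rank is an assumption, not reported in the card
    lora_alpha=32,
    target_modules="all-linear",
    task_type="CAUSAL_LM",
)

# Any preference dataset with prompt/chosen/rejected pairs works;
# this particular dataset is a placeholder assumption.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

training_args = DPOConfig(
    output_dir="fusionnet-34bx2-moe-dpo",
    beta=0.1,                        # DPO temperature from Rafailov et al., 2023
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    bf16=True,
    max_steps=200,                   # budgeted to roughly one H100-hour (assumption)
    logging_steps=10,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,      # older TRL versions call this `tokenizer`
    peft_config=peft_config,
)
trainer.train()
```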
Metrics have not been tested yet.