---
tags:
  - yi
  - moe
license: apache-2.0
---

This is another DPO fine-tuned MoE model, with all linear parameters fine-tuned, based on TomGrc/FusionNet_34Bx2_MoE_v0.1.

It was trained on an H100 for one hour.
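
The model can be loaded like any causal LM with transformers. The sketch below is illustrative only: the repo id is a hypothetical placeholder (replace it with this card's actual repository id), and the generation settings are arbitrary.

```python
# Minimal inference sketch; repo id and generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-namespace/this-model"  # hypothetical placeholder; use this card's repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # a 34Bx2 MoE needs multiple GPUs or offloading
)

prompt = "Explain what a mixture-of-experts model is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```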

DPO Trainer
TRL supports the DPO Trainer for training language models from preference data, as described in the paper "Direct Preference Optimization: Your Language Model is Secretly a Reward Model" (Rafailov et al., 2023).
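
The sketch below shows how a DPO run with TRL's DPOTrainer is typically set up, assuming an older-style TRL API that accepts `beta`, `train_dataset`, and `tokenizer` directly. The dataset name, batch size, learning rate, and `beta` are assumptions for illustration, not the exact recipe used for this checkpoint.

```python
# Minimal DPO training sketch with TRL; hyperparameters and dataset are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_model = "TomGrc/FusionNet_34Bx2_MoE_v0.1"
model = AutoModelForCausalLM.from_pretrained(base_model)
ref_model = AutoModelForCausalLM.from_pretrained(base_model)  # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Preference data must expose "prompt", "chosen", and "rejected" columns.
# "your-preference-dataset" is a hypothetical placeholder.
train_dataset = load_dataset("your-preference-dataset", split="train")

training_args = TrainingArguments(
    output_dir="dpo-output",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=training_args,
    beta=0.1,  # strength of the implicit KL penalty toward ref_model
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```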

Metrics have not been tested yet.