---
base_model: TomGrc/FusionNet_7Bx2_MoE_14B
model_creator: HanNayeoniee
model_name: LHK_DPO_v1
license: mit
language:
- en
---
# LHK_DPO_v1
- Original model: [HanNayeoniee/LHK_DPO_v1](https://huggingface.co/HanNayeoniee/LHK_DPO_v1)
LHK_DPO_v1 is trained via Direct Preference Optimization (DPO) from [TomGrc/FusionNet_7Bx2_MoE_14B](https://huggingface.co/TomGrc/FusionNet_7Bx2_MoE_14B).
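
For context, DPO fine-tunes a policy model directly on preference pairs, penalizing it when the reference model's preference margin is not improved. The sketch below is a minimal, illustrative per-pair DPO loss in plain Python; the function name, argument names, and `beta` default are assumptions for illustration, not taken from this model's training code.

```python
import math

def dpo_loss(policy_chosen_lp: float, policy_rejected_lp: float,
             ref_chosen_lp: float, ref_rejected_lp: float,
             beta: float = 0.1) -> float:
    """Illustrative DPO loss for a single preference pair.

    Each argument is a summed sequence log-probability: the policy's and
    the frozen reference model's scores for the chosen and rejected
    completions. `beta` scales the implicit reward margin (assumed value).
    """
    # Implicit rewards: how much each model's log-prob moved vs. the reference.
    chosen_margin = policy_chosen_lp - ref_chosen_lp
    rejected_margin = policy_rejected_lp - ref_rejected_lp
    logits = beta * (chosen_margin - rejected_margin)
    # Loss is -log(sigmoid(logits)); small when the policy widens the
    # gap in favor of the chosen completion, large when it shrinks it.
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

When the policy matches the reference exactly, the margin is zero and the loss is `log 2`; favoring the chosen completion more than the reference does drives the loss below that baseline.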
## Details
Coming soon.
## Evaluation Results
Coming soon.
## Contamination Results
Coming soon.