
# LHK_DPO_v1

LHK_DPO_v1 is trained via Direct Preference Optimization (DPO) from TomGrc/FusionNet_7Bx2_MoE_14B.
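
Until the usage details are published, here is a minimal loading sketch. It assumes the model exposes the standard `transformers` causal-LM interface under the repository id `HanNayeoniee/LHK_DPO_v1`; the prompt is only an illustration.

```python
# Minimal usage sketch (assumption: standard transformers causal-LM interface).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HanNayeoniee/LHK_DPO_v1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the checkpoint is stored in BF16
    device_map="auto",
)

prompt = "Explain Direct Preference Optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```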

## Details

Coming soon.

## Evaluation Results

Coming soon.

## Contamination Results

Coming soon.
