---
license: mit
language:
  - en
---

# LHK_DPO_v1

LHK_DPO_v1 is fine-tuned from TomGrc/FusionNet_7Bx2_MoE_14B via Direct Preference Optimization (DPO).
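
Until full details are published, the snippet below is a minimal inference sketch using the standard `transformers` causal-LM APIs. The repo id is a placeholder, and the dtype/device settings are assumptions; substitute the actual Hub path and adjust to your hardware.

```python
# Minimal inference sketch. Assumptions: the model is hosted on the
# Hugging Face Hub under a repo id like the placeholder below and loads
# with the standard transformers causal-LM classes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/LHK_DPO_v1"  # placeholder: replace with the actual Hub repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # assumption: half precision fits your device
    device_map="auto",          # requires the `accelerate` package
)

prompt = "Explain Direct Preference Optimization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```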

## Details

Coming soon.

## Evaluation Results

Coming soon.

## Contamination Results

Coming soon.