---
license: apache-2.0
language:
- en
---
# kimdeokgi/dpo_model_test1
# **Introduction**
This model is a test version of an alignment-tuned model.
We apply state-of-the-art instruction fine-tuning methods, including direct preference optimization (DPO).
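
The card does not include training code, so as a point of reference here is a minimal sketch of the DPO objective itself (Rafailov et al., 2023). All tensor names and the `beta` value are illustrative assumptions, not the actual configuration used for this model.

```python
import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Direct preference optimization loss.

    Each argument is a tensor of per-sequence log-probabilities
    (summed over response tokens) under either the trainable policy
    or the frozen reference model. `beta` controls how far the policy
    is allowed to drift from the reference.
    """
    # Implicit reward: log-ratio of policy vs. reference model.
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps

    # Maximize the margin between chosen and rejected implicit rewards.
    logits = beta * (chosen_rewards - rejected_rewards)
    return -F.logsigmoid(logits).mean()
```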
After DPO training, we linearly merged model checkpoints to boost performance, as sketched below.
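
The exact checkpoints and mixing weight are not specified in this card; the snippet below is only a hypothetical sketch of a linear (weighted-average) weight merge between two checkpoints, with placeholder names `checkpoint-a` and `checkpoint-b` and an assumed interpolation weight of 0.5.

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical checkpoint names; the actual merged models are not listed in this card.
model_a = AutoModelForCausalLM.from_pretrained("checkpoint-a", torch_dtype=torch.float32)
model_b = AutoModelForCausalLM.from_pretrained("checkpoint-b", torch_dtype=torch.float32)

alpha = 0.5  # interpolation weight (assumed)
state_b = model_b.state_dict()

# Linear merge: weighted average of parameters with matching names and shapes.
merged_state = {
    name: alpha * param_a + (1.0 - alpha) * state_b[name]
    for name, param_a in model_a.state_dict().items()
}

model_a.load_state_dict(merged_state)
model_a.save_pretrained("merged_model")
```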