ENERGY-DRINK-LOVE/komt_DPOv3
Our Team
- Youjin Chung
- Jingyeom Kim
Model
Base Model
Hardware and Software
- Hardware: 8× A100 GPUs for training our model
- Software: DeepSpeed library & Hugging Face TRL Trainer
Dataset
- DPO_dataset
  - Self-built DPO dataset (using AI-hub datasets)
  - Translations of English datasets such as OpenOrca DPO (ENERGY-DRINK-LOVE/translate_share_gpt_dedup_llama_SFT_1024, translated with our own model)
Training Method
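The model name and the dataset section indicate training with DPO (Direct Preference Optimization), which optimizes a policy model against a frozen reference model on chosen/rejected response pairs. As a minimal sketch of the objective only (not the authors' actual TRL training script), the per-pair DPO loss can be computed from summed log-probabilities of each response; `beta=0.1` is an assumed illustrative value, not one reported for this model:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the summed log-probability of the chosen or
    rejected response under the policy or the frozen reference model.
    """
    # Log-ratios of policy vs. reference for each response
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    # Implicit reward margin, scaled by beta
    logits = beta * (chosen_ratio - rejected_ratio)
    # Loss is -log(sigmoid(margin)); minimized when the policy
    # upweights the chosen response relative to the reference
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# If the policy favors the chosen response more than the reference does,
# the loss drops below -log(0.5) ~= 0.693
loss = dpo_loss(-10.0, -20.0, -12.0, -18.0, beta=0.1)
print(round(loss, 4))
```

In practice TRL's `DPOTrainer` computes this loss batch-wise over token-level log-probabilities; the scalar version above just makes the preference margin explicit.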
Benchmark
Ko LM Eval Harness
Ko-LLM-Leaderboard
- Ranked 4th as of 2024-03-16
| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
|---------|--------|--------------|---------|---------------|-----------------|
| 61.20   | 57.51  | 70.33        | 53.34   | 68.49         | 56.32           |