
This model was developed by KAIST ALIN Lab and OMNIOUS.AI (Hyunseok Lee, Taeyoung Kim).

Input: Models input text only.

Output: Models generate text only.

Model Architecture
ko-en-llama2-13b-aligned is an auto-regressive language model based on the LLaMA2 transformer architecture.
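For reference, a minimal inference sketch using the Hugging Face transformers library. The repository id, dtype, and generation settings below are illustrative assumptions, not part of this card; adjust the model id to the full namespace on the Hub.

```python
# Minimal inference sketch (assumptions: repo id, fp16 loading, greedy decoding).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ko-en-llama2-13b-aligned"  # replace with the full Hub repo id if different

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # checkpoint is stored in F32; fp16 reduces memory use
    device_map="auto",
)

prompt = "다음 문장을 영어로 번역하세요: 오늘 날씨가 좋습니다."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```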

Base Model
hyunseoki/ko-en-llama2-13b

Training Dataset
Open datasets: Wikipedia and AI Hub (English + Korean). The model was supervised fine-tuned on an instruction dataset and then aligned to a human preference dataset using DPO (Direct Preference Optimization).
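For illustration, a minimal sketch of the standard DPO objective used in such an alignment step. This is a generic implementation working from per-sequence log-probabilities, not the authors' actual training code; the function name and beta value are assumptions.

```python
# Generic DPO loss sketch (Rafailov et al., 2023); not the authors' training code.
# Inputs are summed per-sequence log-probabilities of the chosen / rejected
# responses under the policy being trained and under a frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log pi(y_w | x), shape (batch,)
    policy_rejected_logps: torch.Tensor,  # log pi(y_l | x), shape (batch,)
    ref_chosen_logps: torch.Tensor,       # log pi_ref(y_w | x), shape (batch,)
    ref_rejected_logps: torch.Tensor,     # log pi_ref(y_l | x), shape (batch,)
    beta: float = 0.1,                    # assumed regularization strength
) -> torch.Tensor:
    # Implicit reward margins of each response relative to the reference model.
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    # Maximize the log-probability that the chosen response outranks the rejected one.
    logits = beta * (chosen_rewards - rejected_rewards)
    return -F.logsigmoid(logits).mean()
```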

Model Size
13B parameters, stored as F32 tensors in the safetensors format.