---
language:
- ko
- en
library_name: transformers
pipeline_tag: text-generation
---
|
This model was developed by KAIST ALIN Lab and OMNIOUS.AI (Hyunseok Lee, Taeyoung Kim).
|
|
|
**Input**

The model accepts text input only.
|
|
|
**Output**

The model generates text output only.
|
|
|
**Model Architecture**

ko-en-llama2-13b-aligned is an auto-regressive language model based on the Llama 2 transformer architecture.
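Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, loading and prompting the model can be sketched as below. The repo id (the base checkpoint is substituted here), the instruction-style prompt template, and the generation settings are assumptions for illustration, not the documented interface of this checkpoint.

```python
# Minimal inference sketch with Hugging Face transformers.
# MODEL_ID points at the base model named in this card; swap in the
# aligned checkpoint's repo id once it is published.
MODEL_ID = "hyunseoki/ko-en-llama2-13b"


def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in a generic instruction-style prompt.

    The exact template used during fine-tuning is not documented in this
    card, so this format is an assumption.
    """
    return f"### Instruction:\n{instruction}\n\n### Response:\n"


def generate(instruction: str, max_new_tokens: int = 128) -> str:
    # Imported lazily so the prompt helper above stays usable without
    # the heavy model dependencies installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    # device_map="auto" requires the `accelerate` package; a 13B model
    # needs a GPU with sufficient memory (or quantization) in practice.
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")
    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    print(generate("한국어로 자기소개를 해 주세요."))
```

Greedy decoding (`do_sample=False`) is used here for reproducibility; sampling parameters can be added to `generate` as needed.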
|
|
|
**Base Model**

hyunseoki/ko-en-llama2-13b
|
|
|
**Training Dataset**

Open datasets: Wikipedia and AIHub (English and Korean).

The model was supervised fine-tuned on an instruction dataset, then aligned with a human preference dataset using DPO (Direct Preference Optimization).
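The DPO step mentioned above optimizes, for each preferred/rejected response pair, the loss `-log σ(β · [(log πθ(y_w|x) − log πref(y_w|x)) − (log πθ(y_l|x) − log πref(y_l|x))])` from Rafailov et al. (2023). The sketch below is an illustrative scalar version of that objective, not the training code used for this model; in practice a trainer batches sequence log-probabilities from the policy and a frozen reference model.

```python
# Illustrative per-example DPO loss. Inputs are summed token log-probs of
# the chosen (y_w) and rejected (y_l) responses under the trainable policy
# and the frozen reference model; beta scales the implicit KL penalty.
import math


def dpo_loss(policy_chosen_logp: float, policy_rejected_logp: float,
             ref_chosen_logp: float, ref_rejected_logp: float,
             beta: float = 0.1) -> float:
    """-log sigmoid(beta * (policy margin - reference margin))."""
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))
```

When the policy matches the reference exactly, the loss is `log 2`; it falls below that as soon as the policy prefers the chosen response more strongly than the reference does.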