---
language:
- ko
datasets:
- kyujinpy/KOR-OpenOrca-Platypus-v3
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---

**This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.**
**The license is `cc-by-nc-sa-4.0`.**

# **🐳KOR-Orca-Platypus-13B🐳**
![img](./Korean-OpenOrca.png)

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
KOR-Orca-Platypus-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.

**Repo Link**
Github Korean-OpenOrca: [🐳Korean-OpenOrca🐳](https://github.com/Marker-Inc-Korea/Korean-OpenOrca)

**Base Model**
[hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)

**Training Dataset**
I used [kyujinpy/KOR-OpenOrca-Platypus-v3](https://huggingface.co/datasets/kyujinpy/KOR-OpenOrca-Platypus-v3) (currently private; to be released).

I trained on a single A100 40GB GPU using Colab.

# **Model Benchmark**

## KO-LLM leaderboard
- Scores follow the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| **KOR-Orca-Platypus-13B🐳** | 46.59 | 42.06 | 53.95 | 42.28 | 43.55 | 51.12 |
| KOR-Orca-Platypus-13B🐳-v2 | 49.48 | 44.03 | 54.43 | 42.23 | 41.64 | 65.05 |

> Compared with the top 4 SOTA models (updated 10/09).

# Implementation Code
```python
### KOR-Orca-Platypus-13B
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/KOR-Orca-Platypus-13B"

# Load the model in half precision and place it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
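The snippet above stops after loading. Below is a minimal generation sketch, not taken from the original card: the Alpaca/Platypus-style `### Instruction:` / `### Response:` prompt template and the sampling parameters are assumptions and may need adjusting for this model.

```python
# Assumption: an Alpaca/Platypus-style instruction prompt, in line with the
# KOR-OpenOrca-Platypus training data; verify against the actual template.
prompt = "### Instruction:\n한국의 수도는 어디인가요?\n\n### Response:\n"  # "What is the capital of Korea?"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,   # cap the length of the generated answer
        do_sample=True,       # sample instead of greedy decoding
        temperature=0.7,
        top_p=0.9,
    )

# Decode only the newly generated tokens, skipping the echoed prompt.
answer = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)
```

---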