---
language:
  - ko
datasets:
  - kyujinpy/OpenOrca-KO
  - kyujinpy/KOpen-platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

# 🐳KoR-Orca-Platypus-13B🐳


## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
KoR-Orca-Platypus-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.

**Repo Link**
GitHub Korean-OpenOrca: 🐳KoR-Orca-Platypus-13B🐳

**Base Model** hyunseoki/ko-en-llama2-13b

**Training Dataset**
Version of combined dataset: kyujinpy/KOR-OpenOrca-Platypus

I combined OpenOrca-KO and kyujinpy/KOpen-platypus. Training was done on Colab with a single A100 40GB GPU.
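
For reference, the combined dataset can be pulled straight from the Hugging Face Hub with the 🤗 `datasets` library. This is a minimal sketch; the `train` split name is an assumption, so check the dataset card for the actual splits.

```python
from datasets import load_dataset

# Load the combined KOR-OpenOrca-Platypus dataset from the Hugging Face Hub.
# NOTE: the "train" split is assumed here, not confirmed by the model card.
dataset = load_dataset("kyujinpy/KOR-OpenOrca-Platypus", split="train")

print(dataset[0])  # inspect one example
```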

## Model Benchmark

### KO-LLM leaderboard

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| KoR-Orca-Platypus-13B🐳 (ours) | NaN | NaN | NaN | NaN | NaN | NaN |
| GenAI-llama2-ko-en-platypus | 49.81 | 45.22 | 55.25 | 41.84 | 44.78 | 61.97 |
| KoT-Platypus2-13B | 49.55 | 43.69 | 53.05 | 42.29 | 43.34 | 65.38 |
| KO-Platypus2-13B | 47.90 | 44.20 | 54.31 | 42.47 | 44.41 | 54.11 |
| Korean-OpenOrca-13B🐳 | 47.85 | 43.09 | 54.13 | 40.24 | 45.22 | 56.57 |

Comparison with the top 4 SOTA models. (Updated 10/14.)

## Implementation Code

```python
### KO-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/KoR-Orca-Platypus-13B"

# Load the model in fp16 and let accelerate place it across available devices.
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
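
A minimal generation sketch using the model and tokenizer loaded above. The prompt and sampling parameters are illustrative assumptions, not values from the model card.

```python
# Illustrative Korean prompt: "Where is the capital of Korea?"
prompt = "한국의 수도는 어디인가요?"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)

# Sampling parameters below are assumptions, not tuned values.
with torch.no_grad():
    output = OpenOrca.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
    )

print(OpenOrca_tokenizer.decode(output[0], skip_special_tokens=True))
```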