
This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.
The license is cc-by-nc-sa-4.0.

🐳Korean-OpenOrca-13B-v2🐳


Model Details

Model Developers: Kyujin Han (kyujinpy)

Model Architecture
Korean-OpenOrca-13B-v2 is an auto-regressive language model based on the LLaMA2 transformer architecture.

Repo Link
GitHub Korean-OpenOrca: 🐳Korean-OpenOrca🐳

Base Model: hyunseoki/ko-en-llama2-13b

Training Dataset
I used OpenOrca-ko-v3, which was created by translating the OpenOrca dataset into Korean with DeepL.
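
The dataset can be loaded from the Hugging Face Hub. Below is a minimal sketch of loading and inspecting it; the repo id kyujinpy/OpenOrca-ko-v3 is an assumption based on the dataset name above.

### Loading OpenOrca-ko-v3 (sketch; repo id assumed)
from datasets import load_dataset

# Assumption: the dataset is published as kyujinpy/OpenOrca-ko-v3 on the Hub.
orca_ko = load_dataset("kyujinpy/OpenOrca-ko-v3", split="train")
print(orca_ko[0])  # inspect one translated instruction/response example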

Training was done on Colab with a single A100 40GB GPU.

Model comparisons

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
|---|---|---|---|---|---|---|
| Korean-OpenOrca-13B🐳 | 48.79 | 43.09 | 54.13 | 40.24 | 45.22 | 61.28 |
| Korean-OpenOrca-13B-v2🐳 | 48.17 | 43.17 | 54.51 | 42.90 | 41.82 | 58.44 |
| Korean-OpenOrca-13B-v3🐳 | 48.86 | 43.77 | 54.30 | 41.79 | 43.85 | 60.57 |
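
The Average column is the arithmetic mean of the five benchmark scores. For example, for the v2 row:

### Checking the reported average (v2 row)
scores_v2 = [43.17, 54.51, 42.90, 41.82, 58.44]
print(sum(scores_v2) / len(scores_v2))  # 48.168, reported as 48.17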

Implementation Code

### Korean-OpenOrca
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/Korean-OpenOrca-13B-v2"
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,  # load the FP16 weights as shipped
    device_map='auto',          # place layers on available devices automatically
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
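
Once loaded, the model can be used for ordinary causal-LM generation. A minimal sketch continuing from the snippet above; the prompt and generation settings are illustrative, not prescribed by this card.

### Generation example (prompt and settings are illustrative)
prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = OpenOrca_tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
with torch.no_grad():
    output = OpenOrca.generate(**inputs, max_new_tokens=64)
print(OpenOrca_tokenizer.decode(output[0], skip_special_tokens=True))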

Model size: 13B params (Safetensors)
Tensor type: FP16
