---
language:
  - ko
datasets:
  - kyujinpy/KoCoT_2000
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

# KoT-platypus2

(model logo image)

**CoT + KO-platypus2 = KoT-platypus2**

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
KoT-platypus2-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.

**Repo Link**
GitHub KoT-platypus: KoT-platypus2

**Base Model**
kyujinpy/KO-Platypus2-13B
More detail repo (GitHub): CoT-llama2
More detail repo (GitHub): KO-Platypus2

## Training Dataset

I used KoCoT_2000, a Korean translation of the KAIST-CoT dataset produced with DeepL.

Training was done on Colab with a single A100 40GB GPU.

## Training Hyperparameters

| Hyperparameter   | Value                     |
|------------------|---------------------------|
| batch_size       | 64                        |
| micro_batch_size | 1                         |
| epochs           | 15                        |
| learning_rate    | 1e-5                      |
| cutoff_len       | 4096                      |
| lr_scheduler     | linear                    |
| base_model       | kyujinpy/KO-Platypus2-13B |
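An effective batch size of 64 with a micro batch size of 1 implies gradient accumulation during training. A minimal sketch of how the listed values relate (the dict below only mirrors the table; it is not the author's training script):

```python
# Hyperparameters copied from the table above.
hparams = {
    "batch_size": 64,          # effective (global) batch size
    "micro_batch_size": 1,     # examples processed per forward/backward pass
    "epochs": 15,
    "learning_rate": 1e-5,
    "cutoff_len": 4096,
    "lr_scheduler": "linear",
    "base_model": "kyujinpy/KO-Platypus2-13B",
}

# With only micro_batch_size examples per step, gradients must be
# accumulated until the effective batch size is reached before the
# optimizer updates the weights.
gradient_accumulation_steps = hparams["batch_size"] // hparams["micro_batch_size"]
print(gradient_accumulation_steps)  # 64
```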

## Model Benchmark

### KO-LLM leaderboard

(leaderboard screenshot)

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
|---|---|---|---|---|---|---|
| **KoT-Platypus2-13B (ours)** | NaN | NaN | NaN | NaN | NaN | NaN |
| hyunseoki/ko-en-llama2-13b | 46.68 | 42.15 | 54.23 | 38.90 | 40.74 | 57.39 |
| CoTy-platypus-ko-12.8b | 46.44 | 34.98 | 49.11 | 25.68 | 37.59 | 84.86 |
| momo/polyglot-ko-12.8b-Chat-QLoRA-Merge | 45.71 | 35.49 | 49.93 | 25.97 | 39.43 | 77.70 |
| KoT-platypus2-7B | 45.62 | 38.05 | 49.63 | 34.68 | 37.69 | 68.08 |

Compared with the top 4 SOTA models (updated 10/05).

## Implementation Code

```python
### KoT-platypus2
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/KoT-platypus2-13B"
# Load the model in half precision and spread it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
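For inference, a short usage sketch. The Alpaca-style instruction/response template below is an assumption carried over from the Platypus family, not something this card specifies; check the linked GitHub repo for the exact prompt format used in training.

```python
# Hedged sketch: an Alpaca-style prompt template as commonly used by
# Platypus-family models. The template itself is an assumption.
def build_prompt(instruction: str) -> str:
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_prompt("대한민국의 수도는 어디인가요?")

# With the model and tokenizer loaded as above:
# inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# outputs = model.generate(**inputs, max_new_tokens=256)
# print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```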

README format follows beomi/llama-2-ko-7b.