---
library_name: transformers
datasets:
- kyujinpy/OpenOrca-KO
pipeline_tag: text-generation
---
# Llama-3-Ko-OpenOrca

## Model Details

### Model Description
- **Original model:** beomi/Llama-3-Open-Ko-8B
- **Dataset:** kyujinpy/OpenOrca-KO
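A minimal usage sketch with the `transformers` text-generation pipeline. The hub repo id and the instruction prompt template below are assumptions (the card does not state either); replace them with the actual values before use.

```python
# Hedged usage sketch -- not the card author's code.

def build_prompt(instruction: str) -> str:
    """Simple instruction-style prompt; the exact template used during
    OpenOrca-KO fine-tuning is not stated in this card (assumption)."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

def generate(instruction: str,
             model_id: str = "Llama-3-Ko-OpenOrca",  # assumed repo name
             max_new_tokens: int = 64) -> str:
    # Lazy import so the prompt helper stays usable without transformers.
    from transformers import pipeline
    generator = pipeline("text-generation", model=model_id, torch_dtype="auto")
    out = generator(build_prompt(instruction), max_new_tokens=max_new_tokens)
    return out[0]["generated_text"]
```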
### Training details

Training: fine-tuned for 4 epochs with LoRA (8-bit) using Axolotl.
- sequence_len: 4096
- bf16
Training time: 6 hours on 2× A6000.
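The stated setup (4 epochs, sequence length 4096, bf16, 8-bit LoRA) can be sketched as a `peft`-style configuration. The LoRA rank, alpha, and target modules below are assumptions; the card does not list them, and the actual training used an Axolotl YAML config rather than this code.

```python
# Hedged sketch of the stated hyperparameters; rank/alpha are NOT from the card.

TRAIN_CFG = {
    "epochs": 4,            # stated in the card
    "sequence_len": 4096,   # stated in the card
    "bf16": True,           # stated in the card
    "load_in_8bit": True,   # "LoRA-8bit" per the card
    "lora_r": 16,           # assumed; not stated in the card
    "lora_alpha": 32,       # assumed; not stated in the card
}

def make_lora_config(cfg: dict):
    # Lazy import so the config dict is inspectable without peft installed.
    from peft import LoraConfig
    return LoraConfig(
        r=cfg["lora_r"],
        lora_alpha=cfg["lora_alpha"],
        task_type="CAUSAL_LM",
    )
```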
## Evaluation
| Tasks | n-shot | Metric | Value |  | Stderr |
|---|---|---|---|---|---|
| kobest_boolq | 5 | acc | 0.7123 | ± | 0.0121 |
| kobest_copa | 5 | acc | 0.7620 | ± | 0.0135 |
| kobest_hellaswag | 5 | acc | 0.4780 | ± | 0.0224 |
| kobest_sentineg | 5 | acc | 0.9446 | ± | 0.0115 |
| kobest_wic | 5 | acc | 0.6103 | ± | 0.0137 |
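The 5-shot KoBEST scores above look like lm-evaluation-harness output; they could in principle be reproduced with its Python API. A hedged sketch, assuming `lm_eval` (EleutherAI's harness) is installed and assuming the hub repo id; the card does not say which harness version produced the table.

```python
# Hedged reproduction sketch for the KoBEST table; repo id is an assumption.

KOBEST_TASKS = [
    "kobest_boolq",
    "kobest_copa",
    "kobest_hellaswag",
    "kobest_sentineg",
    "kobest_wic",
]

def run_kobest(model_id: str = "Llama-3-Ko-OpenOrca",  # assumed repo name
               num_fewshot: int = 5):
    # Lazy import: lm_eval is only needed when actually evaluating.
    import lm_eval
    return lm_eval.simple_evaluate(
        model="hf",
        model_args=f"pretrained={model_id}",
        tasks=KOBEST_TASKS,
        num_fewshot=num_fewshot,  # the table reports 5-shot accuracy
    )
```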