---
license: cc-by-4.0
---

# KoQuality-Polyglot-5.8b

KoQuality-Polyglot-5.8b is a fine-tuned version of EleutherAI/polyglot-ko-5.8b on the KoQuality dataset, which is curated with our proposed method (len_group=5, k=100, n=0.01, method=ppl_sampling).
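The model can be loaded with Hugging Face Transformers. Below is a minimal inference sketch; the repository ID, prompt, and generation settings are illustrative assumptions, not part of the original training or evaluation setup.

```python
# Minimal inference sketch with Hugging Face Transformers.
# The repository ID below is a placeholder (assumption) -- replace it with this model's actual repo path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nayohan/KoQuality-Polyglot-5.8b"  # placeholder repo ID (assumption)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 5.8B parameters; fp16 keeps memory usage manageable
    device_map="auto",
)

prompt = "한국어 언어 모델의 장점을 설명해 주세요."  # "Explain the advantages of Korean language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```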

## Overall average accuracy on the KoBEST datasets

We use the KoBEST benchmark datasets (BoolQ, COPA, HellaSwag, SentiNeg, WiC) to compare the accuracy of our best model with that of other models. Our model outperforms the other models in average accuracy across the KoBEST datasets.

| Model | 0-shot | 1-shot | 2-shot | 5-shot | 10-shot |
|---|---|---|---|---|---|
| polyglot-ko-5.8b | 0.5587 | 0.5977 | 0.6138 | 0.6431 | 0.6457 |
| koalpaca-polyglot-5.8b | 0.5085 | 0.5561 | 0.5768 | 0.6097 | 0.6059 |
| kullm-polyglot-5.8b | 0.5409 | 0.6072 | 0.5945 | 0.6345 | 0.6530 |
| koquality-polyglot-5.8b | 0.5472 | 0.5979 | 0.6260 | 0.6486 | 0.6535 |
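For reference, below is a minimal sketch of how KoBEST accuracies like these could be obtained with EleutherAI's lm-evaluation-harness. The task names, the harness version (a v0.3.x-style API is assumed), and the repository ID are assumptions and may differ from the setup actually used.

```python
# Sketch of a few-shot KoBEST evaluation run with lm-evaluation-harness (v0.3.x-style API).
# Assumes the KoBEST tasks (kobest_boolq, kobest_copa, ...) are registered in the installed harness.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",
    model_args="pretrained=nayohan/KoQuality-Polyglot-5.8b",  # placeholder repo ID (assumption)
    tasks=["kobest_boolq", "kobest_copa", "kobest_hellaswag", "kobest_sentineg", "kobest_wic"],
    num_fewshot=5,   # repeat with 0/1/2/5/10 to reproduce the few-shot columns
    batch_size=8,
)
print(results["results"])
```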

## Evaluation results

### COPA (F1)

| Model | 0-shot | 1-shot | 2-shot | 5-shot | 10-shot |
|---|---|---|---|---|---|
| polyglot-ko-5.8b | 0.5587 | 0.5977 | 0.6138 | 0.6431 | 0.6457 |
| koalpaca-polyglot-5.8b | 0.5085 | 0.5561 | 0.5768 | 0.6097 | 0.6059 |
| kullm-polyglot-5.8b | 0.5409 | 0.6072 | 0.5945 | 0.6345 | 0.6530 |
| koquality-polyglot-5.8b | 0.5472 | 0.5979 | 0.6260 | 0.6486 | 0.6535 |

### HellaSwag (F1)

### BoolQ (F1)

### SentiNeg (F1)

### WiC (F1)

## Training hyperparameters

- learning_rate: 5e-5
- train_batch_size: 4
- seed: 42
- distributed_type: multi-GPU (A100 80G)
- num_devices: 4
- gradient_accumulation_steps: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
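
As a rough illustration, the hyperparameters above could be expressed as Hugging Face `TrainingArguments` as sketched below; the output directory, mixed-precision flag, and DeepSpeed config path are assumptions, and the surrounding training script (dataset loading, `Trainer` call) is omitted.

```python
# Sketch: the listed hyperparameters expressed as Transformers TrainingArguments.
# Effective batch size = 4 (per device) x 4 (GPUs) x 16 (gradient accumulation) = 256.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="koquality-polyglot-5.8b",  # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=16,
    num_train_epochs=2.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    fp16=True,                     # assumption: mixed precision on A100 80G GPUs
    deepspeed="ds_config.json",    # assumption: DeepSpeed 0.9.5 config for multi-GPU training
)
```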

## Framework versions

- Transformers 4.30.2
- PyTorch 2.0.1+cu117
- Datasets 2.11.0
- DeepSpeed 0.9.5

## Citation

```bibtex
@misc{2023koquality,
  title   = {KoQuality: Curation of High-quality Instruction Data for Korean Language Models},
  author  = {Na, Yohan and Kim, Dahye and Chae, Dong-Kyu},
  journal = {Proceedings of the 35th Annual Conference on Human and Cognitive Language Technology (HCLT 2023)},
  pages   = {},
  year    = {2023},
}
```