---
license: apache-2.0
datasets:
- DILAB-HYU/KoQuality
language:
- ko
pipeline_tag: text-generation
tags:
- polyglot-ko
- gpt-neox
- KoQuality
base_model: EleutherAI/polyglot-ko-5.8b
---
This model is an instruction-tuned polyglot-ko-5.8b model, trained on the full [Kullm, OIG, KoAlpaca] instruction dataset.

Training file: `koquality_raw.json` (410 steps).
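## Usage
Below is a minimal inference sketch using the Transformers text-generation pipeline. The `MODEL_ID` placeholder is hypothetical and should be replaced with this repository's model id; half-precision and `device_map="auto"` are assumptions for GPU inference, not settings from this card.
```python
# Minimal usage sketch; assumes a GPU with enough memory for fp16 weights
# and the `accelerate` package installed for device_map="auto".
import torch
from transformers import pipeline

MODEL_ID = "MODEL_ID"  # hypothetical placeholder for this repository's id

generator = pipeline(
    "text-generation",
    model=MODEL_ID,
    torch_dtype=torch.float16,  # assumption: half-precision inference
    device_map="auto",
)

prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
output = generator(prompt, max_new_tokens=128, do_sample=True, top_p=0.9)
print(output[0]["generated_text"])
```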
## Training hyperparameters
- learning_rate: 5e-5
- train_batch_size: 2
- seed: 42
- distributed_type: multi-GPU (A30 24G) + CPU Offloading
- num_devices: 2
- gradient_accumulation_steps: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
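For reference, here is a minimal sketch of how the hyperparameters above might map onto Hugging Face `TrainingArguments`. This is a reconstruction, not the authors' actual training script; the output directory and DeepSpeed config path are hypothetical placeholders, and the DeepSpeed ZeRO CPU-offload config itself is not included in this card.
```python
# Hypothetical reconstruction of the training configuration listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./koquality-polyglot-5.8b",  # hypothetical output path
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=32,  # effective batch: 2 GPUs * 2 * 32 = 128
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                     # assumption: mixed precision on A30 GPUs
    deepspeed="ds_config.json",    # hypothetical ZeRO CPU-offload config
)
```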
## Framework versions
- Transformers 4.30.2
- PyTorch 2.0.1+cu117
- Datasets 2.11.0
- DeepSpeed 0.9.5