---
library_name: peft
base_model: EleutherAI/polyglot-ko-5.8b
---
# ChaeMs/KoRani-5.8b
<p align="center"><img width="150" alt="image" src="https://github.com/chaeminsoo/QnA_GPT/assets/79351899/b9a78bf5-0d80-435a-ba14-e288e8886f99"></p>
<!-- Provide a quick summary of what the model is/does. -->
A Korean language model fine-tuned from EleutherAI's polyglot-ko-5.8b using the QLoRA method.
It was trained on about 100k instruction examples.
The instruction data was built from Naver Knowledge iN (네이버 지식인) and ShareGPT data.
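Below is a minimal inference sketch for loading the adapter on top of the base model with Transformers and PEFT. The adapter id `ChaeMs/KoRani-5.8b` comes from this card; the 4-bit quantization settings and the prompt format are illustrative assumptions.

```python
# Minimal inference sketch: load the base model in 4-bit and attach the LoRA adapter.
# The prompt template below is an assumption, not the exact format used in training.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "EleutherAI/polyglot-ko-5.8b"
adapter_id = "ChaeMs/KoRani-5.8b"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "### 질문: 한국의 수도는 어디인가요?\n### 답변:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```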
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Chae Minsu](https://github.com/chaeminsoo)
- **Model type:** Text Generation
- **Language(s) (NLP):** Korean
- **Finetuned from model:** EleutherAI/polyglot-ko-5.8b
- **Training Data:** [Naver Knowledge iN (네이버 지식인) data by beomi](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a), [ShareGPT data by junelee](https://huggingface.co/datasets/junelee/sharegpt_deepl_ko)
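
As a rough illustration of the QLoRA setup described above, the sketch below shows a typical 4-bit LoRA configuration for polyglot-ko-5.8b with `peft` and `bitsandbytes`. The LoRA rank, alpha, dropout, and target modules are assumptions for illustration, not the exact values used to train KoRani-5.8b.

```python
# Illustrative QLoRA fine-tuning setup; hyperparameters are placeholders, not the model's actual config.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/polyglot-ko-5.8b",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # attention projection in the GPT-NeoX architecture used by polyglot-ko
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```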