# koalpaca-polyglot-PEFT-ex
This model is the result of a PEFT (parameter-efficient fine-tuning) study conducted to learn how to fine-tune an open-source LLM with QLoRA.
The model was not built for general-purpose performance, so it is not recommended for use in practice.
## Model Details
This model is trained to answer questions about a specific library.
It is fine-tuned from KoAlpaca using the QLoRA method, with only 0.099% of the parameters trained.
Even though the number of trained parameters is small, the model gives substantive answers about the data it was trained on.
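As a rough illustration of how such a small trainable fraction arises, below is a minimal QLoRA setup sketch using the Hugging Face `transformers`, `bitsandbytes`, and `peft` libraries. The LoRA rank, alpha, dropout, and target modules are illustrative assumptions, not the exact values used for this model; the base checkpoint is taken from the Model Description below.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit NF4 precision (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "beomi/polyglot-ko-12.8b-safetensors",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters; r and target_modules are assumed values.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["query_key_value"],  # GPT-NeoX-style attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Reports the trainable fraction, on the order of 0.1% of all parameters.
model.print_trainable_parameters()
```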
### Model Description
- Developed by: Chae Minsu
- Model type: Text Generation
- Language(s) (NLP): Korean
- Finetuned from model: beomi/polyglot-ko-12.8b-safetensors
- Training Data: A snippet of the Kyungpook National University Library FAQ
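For reference, here is a hedged inference sketch that loads the quantized base model and applies this adapter with `peft`. The adapter repository id is a placeholder, and the prompt template follows the common KoAlpaca question/answer format, which may differ from the exact format used during training.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "beomi/polyglot-ko-12.8b-safetensors"
adapter_id = "your-username/koalpaca-polyglot-PEFT-ex"  # placeholder repository id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)

# Ask a question in the style of the library FAQ training data
# ("What is the library loan period?").
prompt = "### 질문: 도서관 대출 기간은 어떻게 되나요?\n\n### 답변:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```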