---
license: cc-by-nc-sa-4.0
datasets:
- Cartinoe5930/KoRAE_filtered_12k
language:
- ko
library_name: transformers
---

## KoRAE

<p align="center"><img src="https://cdn-uploads.huggingface.co/production/uploads/63e087b6a98d931aa90c1b9c/XQ-pNzRDRccd7UFgYDOrx.png" width="300" height="300"></p>

We introduce **KoRAE**, a model finetuned on a filtered, high-quality Korean dataset.

**KoRAE** is the result of combining high-quality data, selected with a dedicated filtering method, with a Korean Llama-2 model whose vocabulary was extended with Korean tokens.
We applied the data filtering method introduced in [AlpaGasus](https://arxiv.org/abs/2307.08701) to select high-quality examples from a mixture of several Korean datasets (OpenOrca-KO, KOpen-Platypus, KoCoT_2000, databricks-dolly-15k-ko).
We then finetuned [Korean Llama-2](https://huggingface.co/beomi/llama-2-koen-13b), released by [@beomi](https://huggingface.co/beomi), on the filtered dataset.
Flash Attention 2 and LoRA were used for efficient finetuning.

The findings of KoRAE are as follows:

1. Finetuning over several epochs showed that high-quality filtered data has a positive effect on the model's performance. However, when finetuning for only a few epochs, the quantity of data matters more than its quality. This appears to be due to the limited capability of the Korean base model, so research on improving Korean base models should continue.
2. The model trained with DPO showed the best performance among the KoRAE variants, which indicates that DPO is also effective for Korean LLMs.
3. The model finetuned on the filtered, high-quality KoRAE dataset outperformed the one finetuned without filtering. Therefore, finetuning on high-quality data is a promising way to build better LLMs.

## Model Details

- **Developed by:** [Cartinoe5930](https://huggingface.co/Cartinoe5930)
- **Base model:** [beomi/llama-2-koen-13b](https://huggingface.co/beomi/llama-2-koen-13b)
- **Repository:** [gauss5930/KoRAE](https://github.com/gauss5930/KoRAE)

For more details, please check the GitHub repository!

## Training Details

- **Hardware:** We used an A100 80GB GPU for finetuning.
- **Training framework:** The [Transformers Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) and [Hugging Face PEFT](https://huggingface.co/docs/peft/index) were used for finetuning.
- **Training procedure:** 1 epoch of DPO training on the [ko_Ultrafeedback_binarized](https://huggingface.co/datasets/maywell/ko_Ultrafeedback_binarized) dataset, starting from the [KoRAE-13b](https://huggingface.co/Cartinoe5930/KoRAE-13b) model.

For more details, please check the GitHub repository!

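As a rough illustration of this setup, below is a minimal sketch of 1-epoch DPO training on top of a PEFT (LoRA) model using `trl`'s `DPOTrainer`. It is not the exact KoRAE recipe: the hyperparameters and LoRA settings are placeholders, the dataset is assumed to expose prompt/chosen/rejected columns, and the precise `DPOTrainer` arguments vary across `trl` versions, so please refer to the GitHub repository for the actual configuration.

```python
# Minimal DPO sketch with trl (illustrative only; not the exact KoRAE settings).
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_id = "Cartinoe5930/KoRAE-13b"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Assumed to provide prompt / chosen / rejected columns.
dataset = load_dataset("maywell/ko_Ultrafeedback_binarized", split="train")

peft_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")

training_args = TrainingArguments(
    output_dir="korae-13b-dpo",
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,   # with a PEFT config, trl derives the frozen reference model internally
    args=training_args,
    beta=0.1,         # DPO temperature (placeholder value)
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```
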
## Training Dataset

KoRAE was finetuned on the filtered, high-quality KoRAE dataset.
This dataset is a combination of several publicly available Korean datasets, with a filtering method applied to the combined data.
For more information, please refer to the [dataset card](https://huggingface.co/datasets/Cartinoe5930/KoRAE_filtered_12k) of KoRAE.

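The filtering step can be pictured roughly as below, in the spirit of AlpaGasus: a judge LLM rates every example of the combined mixture and only high-scoring examples are kept. This is only a sketch; the judge call, the 0-5 scale, and the 4.5 threshold are assumptions for illustration, so refer to the dataset card and GitHub repository for the actual procedure.

```python
# Rough sketch of AlpaGasus-style filtering: a judge LLM scores each example
# and only examples above a threshold are kept. The judge call, scale, and
# threshold below are illustrative assumptions, not the exact KoRAE pipeline.
from datasets import Dataset

THRESHOLD = 4.5  # assumed cutoff on a 0-5 quality scale

def rate_with_judge(example: dict) -> float:
    """Hypothetical helper: ask a judge LLM to score the example's quality."""
    raise NotImplementedError

def filter_high_quality(combined: Dataset) -> Dataset:
    # `combined` stands for the unfiltered mixture of Korean datasets
    # (OpenOrca-KO, KOpen-Platypus, KoCoT_2000, databricks-dolly-15k-ko).
    return combined.filter(lambda ex: rate_with_judge(ex) >= THRESHOLD)
```
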
## Open Ko-LLM Leaderboard

|Model|Average|Ko-ARC|Ko-HellaSwag|Ko-MMLU|Ko-TruthfulQA|Ko-CommonGen V2|
|---|---|---|---|---|---|---|
|KoRAE-13b-DPO|48.71|46.5|57.54|42.87|41.28|55.37|

## Prompt Template

```
### System:
{system_prompt}

### User:
{instruction + input}

### Assistant:
{output}
```

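If you prefer to build the prompt string by hand instead of using the tokenizer's chat template (see the usage example below), the template above can be filled in as follows; the helper name and arguments are illustrative.

```python
# Fill the KoRAE prompt template manually; names below are placeholders.
def build_prompt(system_prompt: str, instruction: str, user_input: str = "") -> str:
    user_turn = f"{instruction} {user_input}".strip()
    return (
        f"### System:\n{system_prompt}\n\n"
        f"### User:\n{user_turn}\n\n"
        f"### Assistant:\n"
    )
```
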
## Usage example

```python
# Use a pipeline as a high-level helper
from transformers import pipeline
import torch

pipe = pipeline("text-generation", model="Cartinoe5930/KoRAE-13b", torch_dtype=torch.bfloat16, device_map="auto")
messages = [
    {
        "role": "system",
        "content": "당신은 유용한 인공지능 비서입니다. 사용자가 몇 가지 지시가 포함된 작업을 제공합니다. 요청을 적절히 완료하는 응답을 작성하세요.",  # "You are a helpful AI assistant. The user provides a task containing some instructions. Write a response that appropriately completes the request."
    },
    {"role": "user", "content": "스트레스를 해소하는 5가지 방법에 대해서 설명해줘."},  # "Explain five ways to relieve stress."
]

prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

## Citation

- [KO-Platypus](https://github.com/Marker-Inc-Korea/KO-Platypus)
- [Korean-OpenOrca](https://github.com/Marker-Inc-Korea/Korean-OpenOrca)

```
@inproceedings{lee2023kullm,
  title={KULLM: Learning to Construct Korean Instruction-following Large Language Models},
  author={Lee, SeungJun and Lee, Taemin and Lee, Jeongwoo and Jang, Yoona and Lim, Heuiseok},
  booktitle={Annual Conference on Human and Language Technology},
  pages={196--202},
  year={2023},
  organization={Human and Language Technology}
}
```

```
@misc{chen2023alpagasus,
  title={AlpaGasus: Training A Better Alpaca with Fewer Data},
  author={Lichang Chen and Shiyang Li and Jun Yan and Hai Wang and Kalpa Gunaratna and Vikas Yadav and Zheng Tang and Vijay Srinivasan and Tianyi Zhou and Heng Huang and Hongxia Jin},
  year={2023},
  eprint={2307.08701},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

```
@misc{l._junbum_2023,
  author = {L. Junbum and Taekyoon Choi},
  title = {llama-2-koen-13b},
  year = 2023,
  url = {https://huggingface.co/beomi/llama-2-koen-13b},
  doi = {10.57967/hf/1280},
  publisher = {Hugging Face}
}
```