---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
**The license is `cc-by-nc-sa-4.0`.**

**This model was developed through an LLM research consortium between (주)미디어그룹사람과숲 and (주)마커.**
# **🐻‍❄️COKAL_merged_test-v1-13B🐻‍❄️**

![img](https://drive.google.com/uc?export=view&id=1Uwj17SlMfaE3fqiVFrnTOdnEWoZqYJmr)
## Model Details

**Model Developers** Seungyoo Lee (DopeorNope)

**Input** The model accepts text input only.

**Output** The model generates text only.

**Model Architecture**

COKAL_merged_test-v1-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.

---
## **Base Model** |
|
|
|
[HumanF-MarkrAI/COKAL-DPO-13b-v2](https://huggingface.co/HumanF-MarkrAI/COKAL-DPO-13b-v2) |
|
|
|
[MarkrAI/DopeorNope-maestro-v2-DPO-13b](https://huggingface.co/MarkrAI/DopeorNope-maestro-v2-DPO-13b) |
|
|
|
|
|
## **Implemented Method**

I used `slerp` merging (spherical linear interpolation) to smoothly blend the weights of the two base models into this model.

Merging involves an element of luck, but with an accurate understanding of each base model's performance, I can deliberately pick models that excel in different aspects and combine them into a well-balanced model.

Thanks to [maywell](https://huggingface.co/maywell) for sharing useful tips related to the merge method.
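The exact merge configuration is not published in this card; the sketch below only illustrates the core SLERP operation on two weight tensors of identical shape. The tensor names and the interpolation ratio `t` are hypothetical, not the settings actually used.

```python
import torch

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two same-shape weight tensors."""
    a, b = w_a.flatten().float(), w_b.flatten().float()
    # Angle between the two (normalized) weight vectors
    cos_omega = torch.dot(a / (a.norm() + eps), b / (b.norm() + eps))
    omega = torch.acos(torch.clamp(cos_omega, -1.0, 1.0))
    if omega.abs() < eps:
        merged = (1.0 - t) * a + t * b  # nearly parallel vectors: plain linear interpolation
    else:
        merged = (torch.sin((1.0 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)
    return merged.reshape(w_a.shape).to(w_a.dtype)

# Hypothetical usage over two same-architecture checkpoints:
# merged_state = {k: slerp(state_a[k], state_b[k], t=0.5) for k in state_a}
```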

---
# **Model Benchmark**

## KO-LLM leaderboard

- Results follow the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| COKAL_merged_test-v1-13B🐻‍❄️ | 52.72 | 51.45 | 60.55 | 44.8 | 49.05 | 57.73 |
| [COKAL-DPO-13b-v2🐻‍❄️](https://huggingface.co/HumanF-MarkrAI/COKAL-DPO-13b-v2) | 52.69 | 54.95 | 63.02 | 43.98 | 51.67 | 49.82 |
| [COKAL-DPO_test-v2-13b🐻‍❄️](https://huggingface.co/DopeorNope/COKAL-DPO_test-v2-13b) | 52.67 | 55.63 | 63.5 | 43.49 | 51.5 | 49.23 |
| [hyeogi/Yi-6b-dpo-v0.2](https://huggingface.co/hyeogi/Yi-6b-dpo-v0.2) | 52.63 | 41.72 | 52.96 | 46.69 | 52.38 | 69.42 |
| [DopeorNope-maestro-v2-DPO-13b🐻‍❄️](https://huggingface.co/MarkrAI/DopeorNope-maestro-v2-DPO-13b) | 49.42 | 45.14 | 56.69 | 41.37 | 42.26 | 61.63 |

---
# Implementation Code
## Load model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "DopeorNope/COKAL_merged_test-v1-13B"

# Load the merged model in fp16, sharded automatically across available devices
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
## Prompt (Alpaca format)

```python
# Template used when the task has both an instruction and an additional input
prompt = f"아래는 문제를 설명하는 지시사항과, 구체적인 답변을 방식을 요구하는 입력이 함께 있는 문장입니다. 이 요청에 대해 적절하게 답변해주세요.\n\n### 지시사항:\n{instruction}\n\n### 입력:\n{input}\n\n### 답변:\n"

# Template used when the task has an instruction only
prompt_no_input = f"아래는 문제를 설명하는 지시사항입니다. 이 요청에 대해 적절하게 답변해주세요.\n\n### 지시사항:\n{instruction}\n\n### 답변:\n"
```
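For completeness, here is a minimal, illustrative generation sketch that ties the loader and the instruction-only template together. The example instruction and the sampling settings (`max_new_tokens`, `temperature`, `top_p`) are placeholders, not recommended values.

```python
# Because the templates above are f-strings, `instruction` must be defined first.
instruction = "한국의 수도는 어디인가요?"  # illustrative example instruction
prompt_no_input = f"아래는 문제를 설명하는 지시사항입니다. 이 요청에 대해 적절하게 답변해주세요.\n\n### 지시사항:\n{instruction}\n\n### 답변:\n"

# Tokenize the prompt, generate, and decode the completion
inputs = OpenOrca_tokenizer(prompt_no_input, return_tensors="pt").to(OpenOrca.device)
outputs = OpenOrca.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(OpenOrca_tokenizer.decode(outputs[0], skip_special_tokens=True))
```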
# Acknowledgement

- This model is the result of research supported by the Artificial Intelligence Industrial Convergence Cluster Development Project, jointly funded by the Ministry of Science and ICT (MSIT, Korea) and Gwangju Metropolitan City.

---