---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
tags:
- merge
---
**The license is `cc-by-nc-sa-4.0`.**
**This model was developed through the LLM research consortium of MediaGroup Saramgwasoop Co., Ltd. and Markr Inc.**
# **🐻‍❄️COKAL_merged_test-v1-13B🐻‍❄️**
![img](https://drive.google.com/uc?export=view&id=1Uwj17SlMfaE3fqiVFrnTOdnEWoZqYJmr)
## Model Details
**Model Developers** Seungyoo Lee (DopeorNope)
**Input** The model accepts text input only.
**Output** The model generates text output only.
**Model Architecture**
COKAL_merged_test-v1-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.
---
## **Base Model**
[HumanF-MarkrAI/COKAL-DPO-13b-v2](https://huggingface.co/HumanF-MarkrAI/COKAL-DPO-13b-v2)
[MarkrAI/DopeorNope-maestro-v2-DPO-13b](https://huggingface.co/MarkrAI/DopeorNope-maestro-v2-DPO-13b)
## **Implemented Method**
I used a SLERP (spherical linear interpolation) merge to smoothly blend the weights of the two base models.
Merging involves an element of luck, but with an accurate understanding of each model's performance I can carefully select models that excel in different aspects and combine them into a well-balanced result.
Thanks to [maywell](https://huggingface.co/maywell) for sharing useful tips related to the merge method.
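
For illustration, the snippet below sketches the core idea of a SLERP weight merge in plain PyTorch. It is a hypothetical, minimal example rather than the exact pipeline used to build this model; in practice dedicated tooling (e.g. mergekit) is commonly used for such merges, and the output path and interpolation factor below are placeholders.

```python
# Minimal SLERP weight-merge sketch (illustrative only, not the exact pipeline
# used for this release). Assumes both base models share the same architecture,
# so their state dicts have identical keys and shapes.
import torch
from transformers import AutoModelForCausalLM

def slerp(w_a: torch.Tensor, w_b: torch.Tensor, t: float = 0.5, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    a, b = w_a.flatten().float(), w_b.flatten().float()
    a_unit, b_unit = a / (a.norm() + eps), b / (b.norm() + eps)
    omega = torch.acos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    if omega.abs() < eps:
        # Near-parallel weights: fall back to plain linear interpolation
        merged = (1 - t) * a + t * b
    else:
        so = torch.sin(omega)
        merged = (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return merged.reshape(w_a.shape).to(w_a.dtype)

model_a = AutoModelForCausalLM.from_pretrained(
    "HumanF-MarkrAI/COKAL-DPO-13b-v2", torch_dtype=torch.float16)
model_b = AutoModelForCausalLM.from_pretrained(
    "MarkrAI/DopeorNope-maestro-v2-DPO-13b", torch_dtype=torch.float16)

# Interpolate every parameter tensor between the two checkpoints
state_b = model_b.state_dict()
merged_state = {name: slerp(param, state_b[name], t=0.5)
                for name, param in model_a.state_dict().items()}

model_a.load_state_dict(merged_state)
model_a.save_pretrained("./merged-model")  # hypothetical output path
```

In practice, per-layer interpolation factors are often used instead of a single global `t`; the value of 0.5 above is only a placeholder.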
---
# **Model Benchmark**
## KO-LLM leaderboard
- Results follow the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).
| Model | Average |Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| COKAL_merged_test-v1-13B🐻‍❄️ | 52.72 | 51.45 | 60.55 | 44.8 | 49.05 | 57.73 |
| [COKAL-DPO-13b-v2🐻‍❄️](https://huggingface.co/HumanF-MarkrAI/COKAL-DPO-13b-v2) | 52.69 | 54.95 | 63.02 | 43.98 | 51.67 | 49.82 |
| [COKAL-DPO_test-v2-13b🐻‍❄️](https://huggingface.co/DopeorNope/COKAL-DPO_test-v2-13b) | 52.67 | 55.63 | 63.5 | 43.49 | 51.5 | 49.23 |
| [hyeogi/Yi-6b-dpo-v0.2](https://huggingface.co/hyeogi/Yi-6b-dpo-v0.2) | 52.63 | 41.72 | 52.96 | 46.69 | 52.38 | 69.42 |
| [DopeorNope-maestro-v2-DPO-13b🐻‍❄️](https://huggingface.co/MarkrAI/DopeorNope-maestro-v2-DPO-13b) | 49.42 | 45.14 | 56.69 | 41.37 | 42.26 | 61.63 |
---
# Implementation Code
## Load model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "DopeorNope/COKAL_merged_test-v1-13B"

# Load the merged model in fp16 and spread it across available devices
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
## Prompt (Alpaca format)
```python
prompt= f"์๋๋ ๋ฌธ์ ๋ฅผ ์ค๋ช
ํ๋ ์ง์์ฌํญ๊ณผ, ๊ตฌ์ฒด์ ์ธ ๋ต๋ณ์ ๋ฐฉ์์ ์๊ตฌํ๋ ์
๋ ฅ์ด ํจ๊ป ์๋ ๋ฌธ์ฅ์
๋๋ค. ์ด ์์ฒญ์ ๋ํด ์ ์ ํ๊ฒ ๋ต๋ณํด์ฃผ์ธ์.\n\n### ์ง์์ฌํญ:\n{instruction}\n\n### ์
๋ ฅ:\n{input}\n\n### ๋ต๋ณ:\n"
prompt_no_input = f"์๋๋ ๋ฌธ์ ๋ฅผ ์ค๋ช
ํ๋ ์ง์์ฌํญ์
๋๋ค. ์ด ์์ฒญ์ ๋ํด ์ ์ ํ๊ฒ ๋ต๋ณํด์ฃผ์ธ์.\n\n### ์ง์์ฌํญ:\n{instruction}\n\n### ๋ต๋ณ:\n"
```
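
The hedged example below shows one way to run generation with the loaded model, tokenizer, and the prompt above; the sampling parameters are illustrative placeholders, not settings recommended for this model.

```python
# Illustrative generation call; sampling settings are arbitrary examples.
inputs = tokenizer(prompt_no_input, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

# Strip the prompt tokens and decode only the newly generated answer
answer = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(answer)
```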
# Acknowledgement
- This model is a research outcome supported by the "Artificial Intelligence Industrial Convergence Cluster Development Project", jointly funded by the Ministry of Science and ICT (MSIT, Korea) and Gwangju Metropolitan City.
---