|
---
language:
- ko
datasets:
- DopeorNope/DPO-Ko-Dataset
- DopeorNope/Orca_Near_Dedup-v2
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
|
|
|
**This model was developed through an LLM research consortium between Media Group Saram gwa Soop Co., Ltd. and Markr Inc. (MarkrAI).**
|
|
|
|
|
**The license is `cc-by-nc-sa-4.0`.** |
|
|
|
# **🐻‍❄️COKAL-DPO_13b-v2🐻‍❄️**
|
|
|
![img](https://drive.google.com/uc?export=view&id=1YGBxz-UhQGHZ2K6cTXmTnB13fRgaQilX) |
|
|
|
## Model Details |
|
|
|
**Model Developers** Seungyoo Lee (DopeorNope) |
|
|
|
|
|
|
|
|
|
**Input** The model accepts text input only.

**Output** The model generates text only.
|
|
|
**Model Architecture** |
|
COKAL-DPO_13b-v2 is an auto-regressive 13B language model based on the LLaMA2 transformer architecture. |
|
|
|
**Base Model** [DopeorNope/COKAL_pre_DPO_Test_v2-13b](https://huggingface.co/DopeorNope/COKAL_pre_DPO_Test_v2-13b) |
|
|
|
DopeorNope/COKAL_pre_DPO_Test_v2-13b is the SFT model that serves as the starting point for DPO training.
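The exact DPO training script and hyper-parameters are not published in this card. As a rough, non-authoritative sketch of how an SFT checkpoint can be preference-tuned with DPO, assuming the Hugging Face `trl` library (≈v0.7, where `beta` and `tokenizer` are still constructor arguments) and a paired prompt/chosen/rejected dataset like the private one described below:

```python
# Minimal DPO fine-tuning sketch. The example records, column names, and all
# hyper-parameter values are placeholders, not the settings used for COKAL-DPO_13b-v2.
import torch
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "DopeorNope/COKAL_pre_DPO_Test_v2-13b"  # SFT starting checkpoint
policy = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")
reference = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.float16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# Hypothetical example of the paired-record format that DPOTrainer expects.
pairs = Dataset.from_list([
    {"prompt": "대한민국의 수도는 어디인가요?",
     "chosen": "대한민국의 수도는 서울입니다.",
     "rejected": "수도는 부산입니다."},
])

trainer = DPOTrainer(
    model=policy,
    ref_model=reference,        # frozen reference model for the KL-style penalty
    beta=0.1,                   # placeholder preference-strength coefficient
    args=TrainingArguments(
        output_dir="cokal-dpo-v2",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=5e-7,
        num_train_epochs=1,
    ),
    train_dataset=pairs,
    tokenizer=tokenizer,
)
trainer.train()
```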
|
|
|
**Training Dataset** |
|
- DPO training dataset: DopeorNope/DPO-Ko-Dataset (private)
|
|
|
This dataset was constructed by DopeorNope, who directly collected and reorganized the data into a paired dataset, drawing insight from ["lvwerra/stack-exchange-paired"](https://huggingface.co/datasets/lvwerra/stack-exchange-paired). (The stack-exchange-paired data itself was not used; it only served as a reference for the pairing approach.)
|
|
|
- SFT training dataset: DopeorNope/Orca_Near_Dedup-v2 (private)
|
|
|
This dataset is based on ["kyujinpy/OpenOrca-KO"](https://huggingface.co/datasets/kyujinpy/OpenOrca-KO) and was processed with the Near Dedup algorithm to remove items with a Jaccard similarity of 0.8 or higher. In addition, inconsistent inputs were cleaned and corrected.
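As an illustration of the deduplication criterion described above, here is a naive pairwise sketch of Jaccard-similarity filtering. The real Near Dedup pipeline (Lee et al., 2022) relies on MinHash/LSH for scalability, and the shingle size used here is an assumption, not the card's actual setting.

```python
# Naive near-deduplication sketch: drop a sample if its character-shingle
# Jaccard similarity with any already-kept sample is >= 0.8.
def shingles(text: str, n: int = 5) -> set:
    """Character n-gram shingles of a string (n is an illustrative choice)."""
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def near_dedup(samples: list[str], threshold: float = 0.8) -> list[str]:
    kept, kept_shingles = [], []
    for sample in samples:
        sh = shingles(sample)
        if all(jaccard(sh, other) < threshold for other in kept_shingles):
            kept.append(sample)
            kept_shingles.append(sh)
    return kept
```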
|
|
|
**Training** |
|
This model differs from "DopeorNope/COKAL-DPO_test-v2" only in the hyper-parameters used for the final version.
|
|
|
I developed the model in an environment with four RTX 3090 GPUs running Ubuntu 18.04. |
|
|
|
When the model is uploaded to the repository directly from a Linux server, the reported parameter count can appear larger than expected; however, this model is based on a 13B architecture.
|
|
|
|
|
**Reference papers** |
|
|
|
- Data Strategy: |
|
- [LIMA (Zhou et al., 2023)](https://arxiv.org/abs/2305.11206)

- [Near Dedup algorithm (Lee et al., 2022)](https://arxiv.org/abs/2107.06499)
|
|
|
- Model Architecture: |
|
- [Llama 2 (Touvron et al., 2023)](https://arxiv.org/abs/2307.09288)
|
|
|
|
|
# Implementation Code |
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "HumanF-MarkrAI/COKAL-DPO-13b-v2"

# Load the DPO-tuned model in fp16 and place it automatically on the available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Tokenizer shipped with the same repository.
model_tokenizer = AutoTokenizer.from_pretrained(repo)
```
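
Continuing from the loading code above, the following is a minimal generation sketch; the prompt wording and the sampling settings are assumptions, since the card does not specify a prompt template or decoding parameters.

```python
# Minimal generation example (prompt format and decoding settings are illustrative only).
prompt = "### 질문: 대한민국의 수도는 어디인가요?\n\n### 답변:"
inputs = model_tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

print(model_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```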
|
|
|
|
|
# Acknowledgement |
|
|
|
- 이 모델은 과학기술정보통신부·광주광역시가 공동 지원한 '인공지능 중심 산업융합 집적단지 조성사업'으로 지원을 받아 수행된 연구 결과입니다.
|
|
|
- This model was supported by the Artificial Intelligence Industrial Convergence Cluster Development Project funded by the Ministry of Science and ICT (MSIT, Korea) and Gwangju Metropolitan City.
|
|
|
|
|
|
|
--- |