---
language:
  - ko
datasets:
  - DopeorNope/DPO-Ko-Dataset
  - DopeorNope/Orca_Near_Dedup-v2
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---

This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.
The license is cc-by-nc-sa-4.0.

# 🐻‍❄️COKAL-DPO_13b-v2🐻‍❄️


## Model Details

**Model Developers** Seungyoo Lee (DopeorNope)

**Input** Models input text only.

**Output** Models generate text only.

## Model Architecture

COKAL-DPO_13b-v2 is an auto-regressive 13B language model based on the LLaMA2 transformer architecture.

**Base Model** DopeorNope/COKAL_pre_DPO_Test_v1-13b

DopeorNope/COKAL_pre_DPO_Test_v2-13b is the SFT model that was further trained with the DPO methodology.

## Training Dataset

This dataset was constructed by DopeorNope, who directly collected and reorganized the data, drawing on insights from "lvwerra/stack-exchange-paired" to build a paired preference dataset. (The stack-exchange-paired data itself was not used; it only provided the idea.)
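
For clarity, a "paired" preference record contains a prompt together with a preferred and a dispreferred response. The field names below are purely illustrative; the card does not document the actual schema of DopeorNope/DPO-Ko-Dataset.

```python
# Hypothetical shape of one preference pair; the actual field names and
# contents of DopeorNope/DPO-Ko-Dataset are not documented in this card.
example_pair = {
    "prompt": "질문 또는 지시문",        # instruction / question
    "chosen": "선호되는 (더 나은) 답변",  # preferred response
    "rejected": "선호되지 않는 답변",     # dispreferred response
}
```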

This dataset is based on "kyujinpy/OpenOrca-KO" and was processed with a near-deduplication algorithm that removes items whose Jaccard similarity is 0.8 or higher. In addition, inconsistent inputs were cleaned and modified.
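
The snippet below is a minimal, illustrative sketch of Jaccard-based near-deduplication over character n-grams. The actual pipeline used for Orca_Near_Dedup-v2 is not published and likely differs (for example, a MinHash-based implementation would scale better); only the 0.8 threshold comes from this card.

```python
def jaccard(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity between the character n-gram sets of two strings."""
    grams_a = {a[i:i + n] for i in range(max(len(a) - n + 1, 1))}
    grams_b = {b[i:i + n] for i in range(max(len(b) - n + 1, 1))}
    union = grams_a | grams_b
    return len(grams_a & grams_b) / len(union) if union else 0.0


def near_dedup(texts, threshold=0.8):
    """Keep a text only if it stays below the similarity threshold against every kept text."""
    kept = []
    for text in texts:
        if all(jaccard(text, other) < threshold for other in kept):
            kept.append(text)
    return kept
```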

## Training

The difference between "DopeorNope/COKAL-DPO_test-v2" and this model is that this final version was trained with a different set of hyperparameters.

I developed the model in an environment with four RTX 3090 GPUs running Ubuntu 18.04.
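
Below is a minimal sketch of how DPO fine-tuning of the SFT model could be reproduced with the trl library's DPOTrainer. The script, hyperparameter values, and dataset split are assumptions rather than the published training setup, and the DPOTrainer API differs between trl versions.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

sft_repo = "DopeorNope/COKAL_pre_DPO_Test_v2-13b"

model = AutoModelForCausalLM.from_pretrained(sft_repo)      # policy model to optimize
ref_model = AutoModelForCausalLM.from_pretrained(sft_repo)  # frozen reference copy
tokenizer = AutoTokenizer.from_pretrained(sft_repo)

# Assumed: the dataset exposes "prompt"/"chosen"/"rejected" columns and a "train" split.
train_dataset = load_dataset("DopeorNope/DPO-Ko-Dataset", split="train")

# Placeholder hyperparameters; the values actually used for this model are not published.
training_args = TrainingArguments(
    output_dir="./cokal-dpo-v2",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
    fp16=True,
)

trainer = DPOTrainer(
    model,
    ref_model,
    args=training_args,
    beta=0.1,  # strength of the implicit KL penalty toward the reference model
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```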

When the model is uploaded directly to a repository from a Linux server, it may appear to have more parameters than it actually does; however, this model is based on a 13B architecture.

## Reference papers

## Implementation Code


```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "HumanF-MarkrAI/COKAL-DPO-13b-v2"

# Load the model in half precision and place it automatically on the available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model_tokenizer = AutoTokenizer.from_pretrained(repo)
```
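
A short generation example follows; the prompt and sampling settings are illustrative and not part of the original card, which does not specify a prompt template.

```python
# Example prompt and sampling settings (illustrative; adjust to your use case).
prompt = "한국의 수도는 어디인가요?"
inputs = model_tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )

print(model_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```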