---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---
**The license is `cc-by-nc-sa-4.0`.**

# **πŸ»β€β„οΈCOKAL_merged_test-v1-13BπŸ»β€β„οΈ**

![img](https://drive.google.com/uc?export=view&id=1Uwj17SlMfaE3fqiVFrnTOdnEWoZqYJmr)

## Model Details

**Model Developers** Seungyoo Lee (DopeorNope)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** COKAL_merged_test-v1-13B is an auto-regressive language model based on the LLaMA2 transformer architecture.

---

## **Base Model**

[HumanF-MarkrAI/COKAL-DPO-13b-v2](https://huggingface.co/HumanF-MarkrAI/COKAL-DPO-13b-v2)

[MarkrAI/DopeorNope-maestro-v2-DPO-13b](https://huggingface.co/MarkrAI/DopeorNope-maestro-v2-DPO-13b)

## **Implemented Method**

I used a `slerp merge` to smoothly blend the weights of the two base models (an illustrative sketch of the interpolation appears at the end of this card). Merging involves some luck, but with an accurate understanding of each model's performance, I can deliberately select models that excel in different aspects and combine them into a well-balanced model.

Thanks to [maywell](https://huggingface.co/maywell) for sharing useful tips on the merge method.

---

# **Model Benchmark**

## KO-LLM leaderboard
- Scores are from the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard).

| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
| --- | --- | --- | --- | --- | --- | --- |
| COKAL_merged_test-v1-13BπŸ»β€β„οΈ | 52.72 | 51.45 | 60.55 | 44.8 | 49.05 | 57.73 |
| [COKAL-DPO-13b-v2πŸ»β€β„οΈ](https://huggingface.co/HumanF-MarkrAI/COKAL-DPO-13b-v2) | 52.69 | 54.95 | 63.02 | 43.98 | 51.67 | 49.82 |
| [COKAL-DPO_test-v2-13bπŸ»β€β„οΈ](https://huggingface.co/DopeorNope/COKAL-DPO_test-v2-13b) | 52.67 | 55.63 | 63.5 | 43.49 | 51.5 | 49.23 |
| [hyeogi/Yi-6b-dpo-v0.2](https://huggingface.co/hyeogi/Yi-6b-dpo-v0.2) | 52.63 | 41.72 | 52.96 | 46.69 | 52.38 | 69.42 |
| [DopeorNope-maestro-v2-DPO-13bπŸ»β€β„οΈ](https://huggingface.co/MarkrAI/DopeorNope-maestro-v2-DPO-13b) | 49.42 | 45.14 | 56.69 | 41.37 | 42.26 | 61.63 |

---

# Implementation Code

## Load model

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "DopeorNope/COKAL_merged_test-v1-13B"

# Load the merged model in fp16 and spread it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```

## Prompt (Alpaca format)

```python
# Korean Alpaca-style templates. Roughly: "Below is an instruction that
# describes a task, paired with an input that requests a specific manner
# of answering. Write a response that appropriately completes the request."
# Fill {instruction} (and {input}) with str.format before generation.
prompt = "μ•„λž˜λŠ” 문제λ₯Ό μ„€λͺ…ν•˜λŠ” μ§€μ‹œμ‚¬ν•­κ³Ό, ꡬ체적인 닡변을 방식을 μš”κ΅¬ν•˜λŠ” μž…λ ₯이 ν•¨κ»˜ μžˆλŠ” λ¬Έμž₯μž…λ‹ˆλ‹€. 이 μš”μ²­μ— λŒ€ν•΄ μ μ ˆν•˜κ²Œ λ‹΅λ³€ν•΄μ£Όμ„Έμš”.\n\n### μ§€μ‹œμ‚¬ν•­:\n{instruction}\n\n### μž…λ ₯:\n{input}\n\n### λ‹΅λ³€:\n"

prompt_no_input = "μ•„λž˜λŠ” 문제λ₯Ό μ„€λͺ…ν•˜λŠ” μ§€μ‹œμ‚¬ν•­μž…λ‹ˆλ‹€. 이 μš”μ²­μ— λŒ€ν•΄ μ μ ˆν•˜κ²Œ λ‹΅λ³€ν•΄μ£Όμ„Έμš”.\n\n### μ§€μ‹œμ‚¬ν•­:\n{instruction}\n\n### λ‹΅λ³€:\n"
```
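## Generation example

A minimal end-to-end sketch using the model, tokenizer, and templates above. The Korean instruction and the sampling parameters here are illustrative placeholders, not values from the original card.

```python
# Fill the no-input template with a sample instruction ("What is the capital
# of South Korea?") and generate a response.
instruction = "λŒ€ν•œλ―Όκ΅­μ˜ μˆ˜λ„λŠ” μ–΄λ””μΈκ°€μš”?"
text = prompt_no_input.format(instruction=instruction)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```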
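---

## Slerp merge sketch

For reference, spherical linear interpolation (slerp) between two weight tensors can be sketched as below. This is a minimal illustration of the technique named above, not the exact merge recipe used for this model; the function name, the per-tensor application, and the interpolation factor `t = 0.5` are assumptions.

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    Falls back to plain linear interpolation when the tensors are nearly
    colinear, where the slerp formula is ill-conditioned.
    """
    v0 = w0.flatten().float()
    v1 = w1.flatten().float()
    # Angle between the two flattened weight vectors.
    cos_omega = (torch.dot(v0, v1) / (v0.norm() * v1.norm() + eps)).clamp(-1.0, 1.0)
    omega = torch.arccos(cos_omega)
    if omega.abs() < 1e-4:
        merged = (1.0 - t) * v0 + t * v1  # nearly parallel -> lerp
    else:
        sin_omega = torch.sin(omega)
        merged = (torch.sin((1.0 - t) * omega) / sin_omega) * v0 \
               + (torch.sin(t * omega) / sin_omega) * v1
    return merged.reshape(w0.shape).to(w0.dtype)

# Applied independently to each parameter tensor of the two base models:
# merged_sd = {k: slerp(0.5, sd_a[k], sd_b[k]) for k in sd_a}
```

Unlike plain averaging, slerp interpolates along the arc between the two weight vectors, which preserves their norm more faithfully; this is the "smooth blend" referred to in the method section.

---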