---
license: other
base_model:
- beomi/Llama-3-Open-Ko-8B-Instruct-preview
- beomi/Llama-3-Open-Ko-8B
library_name: transformers
tags:
- mergekit
- merge
- llama.cpp
---

# 👑 Llama-3-Open-Ko-Linear-8B-GGUF

Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp)

## 🏝️ Merge Details

"I thought about it yesterday: merging the solid foundation of beomi/Llama-3-Open-Ko-8B with the specialized precision of beomi/Llama-3-Open-Ko-8B-Instruct-preview using task arithmetic is like composing a Korean song that seamlessly blends timeless rhythms with contemporary solos, creating a harmonious masterpiece tailored to today's needs."

### 🇰🇷 Merge Method

This model was merged with the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, using [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) as the base.

### 🇰🇷 Models Merged

The following models were included in the merge:

* [beomi/Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)

### 📓 Ollama

```
ollama create Llama-3-Open-Ko-Linear-8B -f ./Modelfile_Q5_K_M
```

Adjust the Modelfile to suit your taste.

[Modelfile_Q5_K_M]
```
FROM llama-3-open-ko-linear-8b-Q5_K_M.gguf
TEMPLATE """
{{- if .System }}
system
{{ .System }}
{{- end }}
user
Human: {{ .Prompt }}
assistant
Assistant:
"""
SYSTEM """친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.
""" PARAMETER temperature 0.7 PARAMETER num_predict 3000 PARAMETER num_ctx 4096 PARAMETER stop "" PARAMETER stop "" PARAMETER top_k 50 PARAMETER top_p 0.95 ``` ### πŸ’Ύ Configuration The following YAML configuration was used to produce this model: ```yaml models: - layer_range: [0, 31] model: beomi/Llama-3-Open-Ko-8B parameters: weight: 0.2 - layer_range: [0, 31] model: beomi/Llama-3-Open-Ko-8B-Instruct-preview parameters: weight: 0.8 merge_method: task_arithmetic base_model: beomi/Llama-3-Open-Ko-8B dtype: bfloat16 random_seed: 0 ```