---
license: other
base_model:
- beomi/Llama-3-Open-Ko-8B-Instruct-preview
- beomi/Llama-3-Open-Ko-8B
library_name: transformers
tags:
- mergekit
- merge
- llama.cpp
---
|
|
|
# Llama-3-Open-Ko-Linear-8B-GGUF

Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp).
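
If you use llama.cpp directly instead of Ollama, a run can look like the sketch below. This is a hedged example: the GGUF filename is taken from the Modelfile further down, the prompt simply asks (in Korean) for the capital of South Korea, and recent llama.cpp builds ship the binary as `llama-cli` (older releases call it `main`). Adjust paths and sampling flags to your setup.

```
# Run the Q5_K_M quantization with llama.cpp's CLI
./llama-cli -m llama-3-open-ko-linear-8b-Q5_K_M.gguf \
  -p "대한민국의 수도는 어디인가요?" \
  -n 256 --temp 0.7
```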
|
|
|
## Merge Details

"I thought about it yesterday: merging the solid foundation of beomi/Llama-3-Open-Ko-8B with the specialized precision of beomi/Llama-3-Open-Ko-8B-Instruct-preview, using task arithmetic, is like composing a Korean song that seamlessly blends timeless rhythms with contemporary solos, creating a harmonious masterpiece tailored to today's needs."
|
|
|
### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) as the base model.
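
For intuition, task arithmetic operates on task vectors, i.e. the difference between each fine-tuned model's weights and the base weights. Assuming the standard formulation from the linked paper, and using the weights from the configuration below (0.2 for the base model, 0.8 for the Instruct preview), the base model's own task vector is zero, so the merge in effect reduces to:

$$
\theta_{\text{merged}} = \theta_{\text{base}} + \sum_i w_i \left( \theta_i - \theta_{\text{base}} \right) = \theta_{\text{base}} + 0.8 \left( \theta_{\text{instruct}} - \theta_{\text{base}} \right)
$$

where $\theta_{\text{instruct}}$ denotes the weights of beomi/Llama-3-Open-Ko-8B-Instruct-preview.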
|
|
|
### Models Merged

The following model was included in the merge:

* [beomi/Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)
|
|
|
### Ollama

```
ollama create Llama-3-Open-Ko-Linear-8B -f ./Modelfile_Q5_K_M
```
|
Adjust it to suit your needs. The SYSTEM prompt below tells the model to answer requests as kindly and in as much detail as possible, and to reply in Korean.

[Modelfile_Q5_K_M]
|
```
FROM llama-3-open-ko-linear-8b-Q5_K_M.gguf

TEMPLATE """
{{- if .System }}
system
<s>{{ .System }}</s>
{{- end }}
user
<s>Human:
{{ .Prompt }}</s>
assistant
<s>Assistant:
"""

SYSTEM """
친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.
"""

PARAMETER temperature 0.7
PARAMETER num_predict 3000
PARAMETER num_ctx 4096
PARAMETER stop "<s>"
PARAMETER stop "</s>"
PARAMETER top_k 50
PARAMETER top_p 0.95
```
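
Once the model has been registered with `ollama create`, it can be queried from the command line; the prompt below simply asks for a short introduction to Seoul in Korean:

```
ollama run Llama-3-Open-Ko-Linear-8B "서울에 대해 간단히 소개해줘."
```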
|
|
|
### Configuration

The following YAML configuration was used to produce this model:
|
|
|
```yaml
models:
- layer_range: [0, 31]
  model: beomi/Llama-3-Open-Ko-8B
  parameters:
    weight: 0.2
- layer_range: [0, 31]
  model: beomi/Llama-3-Open-Ko-8B-Instruct-preview
  parameters:
    weight: 0.8
merge_method: task_arithmetic
base_model: beomi/Llama-3-Open-Ko-8B
dtype: bfloat16
random_seed: 0
```
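
For reference, a merge like this is typically reproduced and then quantized roughly as sketched below. This is an assumption-laden outline, not the author's exact pipeline: the YAML above is assumed to be saved as `config.yaml`, the output directory name is arbitrary, and the conversion/quantization script names are those of recent llama.cpp releases (older releases use `convert.py` and `quantize`).

```
# Run the merge with mergekit (assumes the config above is saved as config.yaml)
pip install mergekit
mergekit-yaml config.yaml ./Llama-3-Open-Ko-Linear-8B --cuda

# Convert the merged model to GGUF and quantize to Q5_K_M with llama.cpp
python convert_hf_to_gguf.py ./Llama-3-Open-Ko-Linear-8B \
  --outfile llama-3-open-ko-linear-8b-f16.gguf --outtype f16
./llama-quantize llama-3-open-ko-linear-8b-f16.gguf \
  llama-3-open-ko-linear-8b-Q5_K_M.gguf Q5_K_M
```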