---
license: other
base_model:
- beomi/Llama-3-Open-Ko-8B-Instruct-preview
- beomi/Llama-3-Open-Ko-8B
library_name: transformers
tags:
- mergekit
- merge
- llama.cpp
---

# 👑 Llama-3-Open-Ko-Linear-8B-GGUF

Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp).

## ๐Ÿ๏ธ Merge Details

"I thought about it yesterday: merging the solid foundation of beomi/Llama-3-Open-Ko-8B with the specialized precision of beomi/Llama-3-Open-Ko-8B-Instruct-preview, using task arithmetic, is like composing a Korean song that seamlessly blends timeless rhythms with contemporary solos, creating a harmonious masterpiece tailored to today's needs."

### 🇰🇷 Merge Method

This model was merged with the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, using [beomi/Llama-3-Open-Ko-8B](https://huggingface.co/beomi/Llama-3-Open-Ko-8B) as the base model.
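Concretely, task arithmetic builds a "task vector" for each model (its weights minus the base's weights), scales each vector by its configured weight, and adds the sum back onto the base. A minimal numeric sketch, with plain Python dicts standing in for weight tensors (the values are illustrative, not real model weights):

```python
def task_arithmetic(base, models):
    """Merge per parameter: merged = base + sum_i weight_i * (model_i - base)."""
    merged = {}
    for name, base_val in base.items():
        # Each model contributes its scaled task vector for this parameter.
        delta = sum(w * (m[name] - base_val) for m, w in models)
        merged[name] = base_val + delta
    return merged

# Toy two-parameter "models"; real merges apply this element-wise per tensor.
base = {"w0": 1.0, "w1": -2.0}
instruct = {"w0": 1.5, "w1": -1.0}

merged = task_arithmetic(base, [(instruct, 0.8)])
print(merged)  # {'w0': 1.4, 'w1': -1.2}
```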

### 🇰🇷 Models Merged

The following models were included in the merge:
* [beomi/Llama-3-Open-Ko-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-Open-Ko-8B-Instruct-preview)

### 📓 Ollama

```shell
ollama create Llama-3-Open-Ko-Linear-8B -f ./Modelfile_Q5_K_M
ollama run Llama-3-Open-Ko-Linear-8B
```
Adjust the Modelfile below to suit your needs.

[Modelfile_Q5_K_M]
```
FROM llama-3-open-ko-linear-8b-Q5_K_M.gguf
TEMPLATE """
{{- if .System }}
system
<s>{{ .System }}</s>
{{- end }}
user
<s>Human:
{{ .Prompt }}</s>
assistant
<s>Assistant:
"""

SYSTEM """
친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.
"""

PARAMETER temperature 0.7
PARAMETER num_predict 3000
PARAMETER num_ctx 4096
PARAMETER stop "<s>"
PARAMETER stop "</s>"
PARAMETER top_k 50
PARAMETER top_p 0.95
```
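Once the model has been created with `ollama create`, it can also be queried over Ollama's REST API (`POST /api/generate`, port 11434 by default). A minimal Python sketch, assuming a local Ollama server is running; the endpoint and payload fields are standard Ollama defaults, not anything specific to this model:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(prompt: str, model: str = "Llama-3-Open-Ko-Linear-8B") -> dict:
    # stream=False asks Ollama for a single JSON response instead of a stream.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str) -> str:
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = request.Request(OLLAMA_URL, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running Ollama server:
# print(generate("한국의 수도는 어디인가요?"))
```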

### 💾 Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - layer_range: [0, 31]
    model: beomi/Llama-3-Open-Ko-8B
    parameters:
      weight: 0.2
  - layer_range: [0, 31]
    model: beomi/Llama-3-Open-Ko-8B-Instruct-preview
    parameters:
      weight: 0.8
merge_method: task_arithmetic
base_model: beomi/Llama-3-Open-Ko-8B
dtype: bfloat16
random_seed: 0
```
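Note that listing the base model itself with weight 0.2 is a no-op under task arithmetic, since its task vector relative to the base is zero. This configuration therefore reduces to a plain linear interpolation, merged = base + 0.8·(instruct − base) = 0.2·base + 0.8·instruct, which is presumably where the "Linear" in the model name comes from. A toy numeric check with illustrative scalars (not real weights):

```python
import math

base, instruct = 1.0, 3.0  # stand-ins for a single weight element of each model

# Task arithmetic with the YAML's weights: the base's own task vector is zero.
merged = base + 0.2 * (base - base) + 0.8 * (instruct - base)

# Plain linear interpolation with the same weights.
linear = 0.2 * base + 0.8 * instruct

assert math.isclose(merged, linear)
print(merged)  # 2.6
```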