---
base_model:
- beomi/Llama-3-KoEn-8B-Instruct-preview
- asiansoul/Joah-Llama-3-MAAL-MLP-KoEn-8B-Reborn
- NousResearch/Hermes-2-Pro-Llama-3-8B
- saltlux/Ko-Llama3-Luxia-8B
- defog/llama-3-sqlcoder-8b
- Locutusque/llama-3-neural-chat-v2.2-8B
- rombodawg/Llama-3-8B-Instruct-Coder
- NousResearch/Meta-Llama-3-8B-Instruct
- aaditya/Llama3-OpenBioLLM-8B
- rombodawg/Llama-3-8B-Base-Coder-v3.5-10k
- cognitivecomputations/dolphin-2.9.1-llama-3-8b
- abacusai/Llama-3-Smaug-8B
- NousResearch/Meta-Llama-3-8B
library_name: transformers
tags:
- mergekit
- merge

---
# Joah-Remix-Llama-3-KoEn-8B-Reborn
<a href="https://ibb.co/nQndBFJ"><img src="https://i.ibb.co/WBP80LJ/remix.png" alt="remix" border="0"></a>

## Merge Details

"πŸ‘†" when i pose, You always does the "πŸ‘†"

### Merge Method

This model was merged with the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, using [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as the base model.

### Models Merged

The following models were included in the merge:
* [beomi/Llama-3-KoEn-8B-Instruct-preview](https://huggingface.co/beomi/Llama-3-KoEn-8B-Instruct-preview)
* [asiansoul/Joah-Llama-3-MAAL-MLP-KoEn-8B-Reborn](https://huggingface.co/asiansoul/Joah-Llama-3-MAAL-MLP-KoEn-8B-Reborn)
* [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)
* [saltlux/Ko-Llama3-Luxia-8B](https://huggingface.co/saltlux/Ko-Llama3-Luxia-8B)
* [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b)
* [Locutusque/llama-3-neural-chat-v2.2-8B](https://huggingface.co/Locutusque/llama-3-neural-chat-v2.2-8B)
* [rombodawg/Llama-3-8B-Instruct-Coder](https://huggingface.co/rombodawg/Llama-3-8B-Instruct-Coder)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [aaditya/Llama3-OpenBioLLM-8B](https://huggingface.co/aaditya/Llama3-OpenBioLLM-8B)
* [rombodawg/Llama-3-8B-Base-Coder-v3.5-10k](https://huggingface.co/rombodawg/Llama-3-8B-Base-Coder-v3.5-10k)
* [cognitivecomputations/dolphin-2.9.1-llama-3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9.1-llama-3-8b)
* [abacusai/Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B)

### Ollama
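The create step below assumes a `joah-remix-llama-3-koen-8b-reborn-Q5_K_M.gguf` file already exists. A minimal sketch of producing that quantized GGUF with llama.cpp is shown here; the paths are placeholders and the exact script and binary names depend on your llama.cpp version.

```
# Sketch only: placeholder paths; script/binary names vary between llama.cpp releases.
# 1) Convert the merged HF checkpoint to an FP16 GGUF.
python convert_hf_to_gguf.py ./joah-remix-llama-3-koen-8b-reborn \
  --outfile joah-remix-llama-3-koen-8b-reborn-f16.gguf --outtype f16

# 2) Quantize to Q5_K_M.
./llama-quantize joah-remix-llama-3-koen-8b-reborn-f16.gguf \
  joah-remix-llama-3-koen-8b-reborn-Q5_K_M.gguf Q5_K_M
```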
Create the Ollama model from the quantized GGUF and the Modelfile:
```
jaylee@lees-MacBook-Pro-2  % ./ollama create joah_remix -f ./Modelfile_Q5_K_M 
transferring model data 
creating model layer 
creating template layer 
creating system layer 
creating parameters layer 
creating config layer 
using already created layer sha256:4eadb53f0c70683aeab133c60d76b8ffc9f41ca5d49524d4b803c19e5ce7e3a5 
using already created layer sha256:8ab4849b038cf0abc5b1c9b8ee1443dca6b93a045c2272180d985126eb40bf6f 
writing layer sha256:ae2974c64ea5d6f488eeb1b10717a270f48fb3452432589db6f5e60472ae96ac 
writing layer sha256:74ef6315972b317734fe01e7e1ad5b49fce1fa8ed3978cb66501ecb8c3a2e984 
writing layer sha256:83882a5e957b8ce0d454f26bcedb2819413b49d6b967b28d60edb8ac61edfa58 
writing manifest 
success 
```

Modelfile
```
FROM joah-remix-llama-3-koen-8b-reborn-Q5_K_M.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""


SYSTEM """
친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.
"""

PARAMETER num_keep 24
PARAMETER temperature 0.7
PARAMETER num_predict 3000
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
```
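The `SYSTEM` prompt above instructs the model to act as a friendly chatbot, answer requests as kindly and in as much detail as possible, and reply in Korean. Once `ollama create` succeeds, the model can be queried locally; the prompt below is only an illustrative example and simply asks, in Korean, for three must-visit places in Seoul.

```
ollama run joah_remix "서울에서 꼭 가봐야 할 곳 세 군데를 추천해줘."
```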

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # Base model providing a general foundation without specific parameters
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.65  
      weight: 0.25  
  
  - model: asiansoul/Joah-Llama-3-MAAL-MLP-KoEn-8B-Reborn
    parameters:
      density: 0.6  
      weight: 0.2 
  - model: beomi/Llama-3-KoEn-8B-Instruct-preview
    parameters:
      density: 0.55  
      weight: 0.125 
  
  - model: saltlux/Ko-Llama3-Luxia-8B
    parameters:
      density: 0.55  
      weight: 0.125  
  - model: cognitivecomputations/dolphin-2.9.1-llama-3-8b
    parameters:
      density: 0.55  
      weight: 0.05
  - model: aaditya/Llama3-OpenBioLLM-8B
    parameters:
      density: 0.55  
      weight: 0.05
      
  - model: rombodawg/Llama-3-8B-Instruct-Coder
    parameters:
      density: 0.55  
      weight: 0.05  
  - model: rombodawg/Llama-3-8B-Base-Coder-v3.5-10k
    parameters:
      density: 0.55  
      weight: 0.05      
  - model: defog/llama-3-sqlcoder-8b
    parameters:
      density: 0.55  
      weight: 0.05  
  - model: Locutusque/llama-3-neural-chat-v2.2-8B
    parameters:
      density: 0.55  
      weight: 0.05 
  - model: NousResearch/Hermes-2-Pro-Llama-3-8B
    parameters:
      density: 0.55  
      weight: 0.05 
  
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      density: 0.55  
      weight: 0.05 
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
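
For reference, a merge with this configuration would typically be executed via mergekit's CLI. The config filename and output directory below are placeholders rather than files shipped with this repository.

```
pip install mergekit
# Save the YAML above as, e.g., joah-remix.yaml, then run:
mergekit-yaml joah-remix.yaml ./joah-remix-llama-3-koen-8b-reborn --cuda
```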