---
license: other
license_name: other
license_link: LICENSE
---
<a href="https://ibb.co/ThHYWwy"><img src="https://i.ibb.co/Jkzm3cZ/Screenshot-2024-05-20-at-4-21-39-PM.png" alt="Screenshot-2024-05-20-at-4-21-39-PM" border="0"></a>
Model merged with the [Solo Merge Method](https://medium.com/@puffanddmx82/enhancing-language-models-with-dynamic-attention-version-2-84ef8adc3646)

Keep in mind that accuracy on your particular questions may vary for this merge. Regardless of whether the new merge method turns out to be good or bad, I believe the actual result of the idea is significant in itself.

Once again, there is no single right answer for a famous LLM. The correct answer is the one you choose based on evidence from many real, random human tests. It is fine to rely on evaluation scores, but with an LLM the most important thing is what you actually observe in your own random fact-checking tests.

The gap is bigger than I thought...

If you start out with the wrong first button, you can end up in a black hole you can never escape... By the time you realize it, it's already too late...

When evaluating an LLM, don't just trust others; verify the facts yourself.
### Models Merged
The following models were included in the merge:
* [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [maum-ai/Llama-3-MAAL-8B-Instruct-v0.1](https://huggingface.co/maum-ai/Llama-3-MAAL-8B-Instruct-v0.1)
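The author's Solo Merge Method is described only in the linked article, so the exact recipe is not reproduced here. Purely for illustration, a conventional [mergekit](https://github.com/arcee-ai/mergekit) linear merge of the same three models could be configured like this (the weights and `linear` method are assumptions, not the author's actual settings):

```
# Hypothetical mergekit config -- NOT the Solo Merge Method,
# just a standard linear merge of the same three models for illustration.
models:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      weight: 0.4   # assumed weight
  - model: MLP-KTLim/llama-3-Korean-Bllossom-8B
    parameters:
      weight: 0.3   # assumed weight
  - model: maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
    parameters:
      weight: 0.3   # assumed weight
merge_method: linear
dtype: bfloat16
```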
## Ollama Create
```
(.venv) jaylee@lees-MacBook-Pro-2 youtube % ./ollama create solo -f ./Modelfile_Q5_K_M
transferring model data
creating model layer
creating template layer
creating system layer
creating parameters layer
creating config layer
using already created layer sha256:1acd536b4123837aee2f43ffde8a697f842be5ab4d789ab6787a7887291c4bb3
using already created layer sha256:8ab4849b038cf0abc5b1c9b8ee1443dca6b93a045c2272180d985126eb40bf6f
using already created layer sha256:ae2974c64ea5d6f488eeb1b10717a270f48fb3452432589db6f5e60472ae96ac
using already created layer sha256:74ef6315972b317734fe01e7e1ad5b49fce1fa8ed3978cb66501ecb8c3a2e984
writing layer sha256:88698c3b47bc90bf85949d927c7555efe424e666ef9bd94550bcbde9c4f94489
writing manifest
success
```
## Ollama Modelfile
Adjust the template, system prompt, and parameters below to suit your preferences.
```
FROM solo-llama-3-maal-mlp-koen-8b-Q5_K_M.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
SYSTEM """
# In English: "As a friendly chatbot, answer the other person's requests as
# thoroughly and kindly as possible. Answer everything in Korean."
친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.
"""
PARAMETER num_keep 24
PARAMETER temperature 0.7
PARAMETER num_predict 3000
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
```
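With the model created under the name `solo` as in the `ollama create` step above, it can be queried locally (this assumes a working Ollama installation; the prompt below is just a sample question, "What is the capital of South Korea?"):

```
ollama run solo "대한민국의 수도는 어디인가요?"
```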