
Model Mixed with the Solo Merge Method

Keep in mind that answer accuracy on the questions you care about may vary with this merge.

Regardless of whether the idea behind this new merge method turns out to be good or bad, I believe the actual result of putting it into practice is significant.

Once again, there is no single right answer for a well-known LLM. The right answer is whatever you choose based on your own evidence from many random tests with real humans.

It is fine to rely on evaluation scores, but with an LLM the most important thing is what you actually feel after running your own random, fact-based tests.

The gap is bigger than I thought...

If you keep going after fastening the wrong first button, you can end up in a black hole you can never escape from...

By the time you realize it, it's already too late...

When looking at an LLM, don't trust others; trust yourself, backed by real fact checks.

Models Merged

The following models were included in the merge:

Ollama Create

(.venv) jaylee@lees-MacBook-Pro-2 youtube % ./ollama create solo -f ./Modelfile_Q5_K_M 
transferring model data 
creating model layer 
creating template layer 
creating system layer 
creating parameters layer 
creating config layer 
using already created layer sha256:1acd536b4123837aee2f43ffde8a697f842be5ab4d789ab6787a7887291c4bb3 
using already created layer sha256:8ab4849b038cf0abc5b1c9b8ee1443dca6b93a045c2272180d985126eb40bf6f 
using already created layer sha256:ae2974c64ea5d6f488eeb1b10717a270f48fb3452432589db6f5e60472ae96ac 
using already created layer sha256:74ef6315972b317734fe01e7e1ad5b49fce1fa8ed3978cb66501ecb8c3a2e984 
writing layer sha256:88698c3b47bc90bf85949d927c7555efe424e666ef9bd94550bcbde9c4f94489 
writing manifest 
success 
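
Once the create step reports success, the model is registered locally and can be run directly. A minimal usage sketch, assuming Ollama is running and the model was created under the name solo as above (the example prompt is just an illustration):

ollama list                              # confirm that "solo" is listed
ollama run solo "Please introduce yourself."   # replies in Korean, per the SYSTEM prompt below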

Ollama Modelfile

Change the settings below based on your preference.

FROM solo-llama-3-maal-mlp-koen-8b-Q5_K_M.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""


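# The SYSTEM prompt below is in Korean; roughly: "As a friendly chatbot,
# answer the user's requests as thoroughly and kindly as possible.
# Reply to everything in Korean."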
SYSTEM """
์นœ์ ˆํ•œ ์ฑ—๋ด‡์œผ๋กœ์„œ ์ƒ๋Œ€๋ฐฉ์˜ ์š”์ฒญ์— ์ตœ๋Œ€ํ•œ ์ž์„ธํ•˜๊ณ  ์นœ์ ˆํ•˜๊ฒŒ ๋‹ตํ•˜์ž. ๋ชจ๋“  ๋Œ€๋‹ต์€ ํ•œ๊ตญ์–ด(Korean)์œผ๋กœ ๋Œ€๋‹ตํ•ด์ค˜.
"""

PARAMETER num_keep 24
PARAMETER temperature 0.7
PARAMETER num_predict 3000
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
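
The PARAMETER lines set num_keep (initial context tokens to retain when the context fills up), the sampling temperature, num_predict (maximum tokens to generate), and the Llama 3 stop tokens. Besides the CLI, the created model can also be queried through Ollama's local REST API; a minimal sketch, assuming the default endpoint on localhost:11434 and the model name solo used above:

curl http://localhost:11434/api/generate -d '{
  "model": "solo",
  "prompt": "Please introduce yourself.",
  "stream": false
}'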
Model details
Format: GGUF
Model size: 8.17B params
Architecture: llama