Update README.md
I have found that most of the merged models out there so far do not actually have 64k …

If you support me, I will try it on a computer with maximum specifications. I would also like to run thorough tests for you by building a network with high-capacity traffic and high-speed 10G links.

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099)-[TIES](https://arxiv.org/abs/2306.01708) merge method, with [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as the base model.

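At a high level, DARE sparsifies each fine-tuned model's parameter delta (fine-tuned weights minus base weights) and rescales the survivors, while TIES elects a per-parameter sign and averages only the agreeing contributions. A minimal NumPy sketch of the idea (an illustration only, not mergekit's implementation; the function names are mine):

```python
import numpy as np

def dare_sparsify(delta, drop_rate, rng):
    """DARE: randomly drop a fraction of the delta parameters, then
    rescale the survivors by 1/(1 - drop_rate) so the expected update
    magnitude is preserved."""
    mask = rng.random(delta.shape) >= drop_rate
    return delta * mask / (1.0 - drop_rate)

def ties_sign_merge(deltas):
    """TIES sign election: sum all deltas to elect a per-parameter sign,
    then average only the contributions whose sign agrees with it."""
    stacked = np.stack(deltas)
    elected = np.sign(stacked.sum(axis=0))
    agree = np.sign(stacked) == elected
    counts = np.maximum(agree.sum(axis=0), 1)  # avoid division by zero
    return (stacked * agree).sum(axis=0) / counts

# Toy run: three sparsified task deltas merged over a shared base.
rng = np.random.default_rng(0)
task_deltas = [dare_sparsify(rng.normal(size=(4, 4)), 0.5, rng) for _ in range(3)]
merged_delta = ties_sign_merge(task_deltas)
```

The merged delta is then added back onto the base model's weights.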
### Models Merged

The following models were included in the merge:

* [maum-ai/Llama-3-MAAL-8B-Instruct-v0.1](https://huggingface.co/maum-ai/Llama-3-MAAL-8B-Instruct-v0.1)
* …
* [Locutusque/llama-3-neural-chat-v2.2-8B](https://huggingface.co/Locutusque/llama-3-neural-chat-v2.2-8B)
* [asiansoul/Joah-Llama-3-KoEn-8B-Coder-v1](https://huggingface.co/asiansoul/Joah-Llama-3-KoEn-8B-Coder-v1)

### Ollama

Modelfile_Q5_K_M:

```
FROM joah-llama-3-koen-8b-coder-v2-Q5_K_M.gguf
TEMPLATE """
{{- if .System }}
system
<s>{{ .System }}</s>
{{- end }}
user
<s>Human:
{{ .Prompt }}</s>
assistant
<s>Assistant:
"""

SYSTEM """
친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.
"""

PARAMETER temperature 0.7
PARAMETER num_predict 3000
PARAMETER num_ctx 4096
PARAMETER stop "<s>"
PARAMETER stop "</s>"
```

The `SYSTEM` prompt (in Korean) says: "As a friendly chatbot, answer the other person's requests as thoroughly and kindly as possible. Give every answer in Korean."
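For reference, here is roughly how the `TEMPLATE` above expands for a single turn, sketched in Python rather than Go template syntax (Ollama's actual rendering may differ in whitespace):

```python
def render_prompt(system, prompt):
    """Approximate the Modelfile TEMPLATE: an optional system block,
    then the Human turn and the opening of the Assistant turn."""
    parts = []
    if system:  # mirrors {{- if .System }}
        parts.append("system\n<s>%s</s>" % system)
    parts.append("user\n<s>Human:\n%s</s>" % prompt)
    parts.append("assistant\n<s>Assistant:")
    return "\n".join(parts)

print(render_prompt("You are a helpful assistant.", "Hello"))
```

The trailing `<s>Assistant:` is left open so the model completes it, and the two `PARAMETER stop` lines cut generation at the `<s>`/`</s>` markers.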

Create the Ollama model from the Modelfile:

```
ollama create joah -f ./Modelfile_Q5_K_M
```

|
102 |
+
|
103 |
+
Modelfile_Q5_K_M default, i hope you to test many upload file for my repo to change that and create ollama
|
104 |
+
|
105 |
+
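For example, to build from a different quantization, only the `FROM` line needs to change (the Q4_K_M filename below is hypothetical; use whichever GGUF file is actually in the repo):

```
# Modelfile_Q4_K_M: identical to Modelfile_Q5_K_M except for the source GGUF
FROM joah-llama-3-koen-8b-coder-v2-Q4_K_M.gguf
```

then run `ollama create joah-q4 -f ./Modelfile_Q4_K_M`.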
### Configuration

The following YAML configuration was used to produce this model: