---
license: other
license_name: other
license_link: LICENSE
---
# Joah-Llama-3-KoEn-64k-PoSE-8B-Coder-Reborn-v2
This model was merged using PoSE to extend Llama 3's context length to 64k.

The [Reborn Merge Method](https://medium.com/@puffanddmx82/reborn-elevating-model-adaptation-with-merging-for-superior-nlp-performance-f604e8e307b2) was created and proposed by JayLee, aka "asiansoul".

Extending the context window while merging with mergekit is difficult. There may be a method I am not aware of, but I confirmed that the indicator values shown in the image above reflect a working extension.

Be careful with any claimed context extension that has not been confirmed against values like those in the image above: inspect your target Hugging Face repo closely before trusting it.

Since merging up to 256k pushes the limits of my machine, I will try that later when I have more capable hardware. If you have the resources, give it a try.

See the article linked above for details on Reborn.
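One concrete indicator of a real context extension is the `max_position_embeddings` field in the merged repo's `config.json`. A minimal sketch of that check (the inline JSON string is a stand-in for a config downloaded from the target repo, and 65536 is an assumed value, not a measured one):

```python
import json

# Stand-in for a downloaded config.json; in practice, fetch the real file
# from the target Hugging Face repo and load it the same way.
config_json = '{"max_position_embeddings": 65536, "rope_theta": 500000.0}'

config = json.loads(config_json)
# A genuine 64k merge should report at least 64k positions here,
# not the Llama 3 default of 8192.
assert config["max_position_embeddings"] >= 64 * 1024
print(config["max_position_embeddings"])  # 65536
```

If this field still shows the base model's default, the merge did not actually carry over the extended context, regardless of what the model card claims.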
**Notice**
```
The Ollama Modelfile used in my other merged models has an error. Please modify those Modelfiles as shown below.
asiansoul/Joah-Llama-3-KoEn-8B-Coder-v2-GGUF
asiansoul/Joah-Llama-3-KoEn-8B-Coder-v1-GGUF
asiansoul/YACHT-Llama-3-KoEn-8B-GGUF
....
--> All of them should be updated to follow the "Joah Modelfile_Q5_K_M" below; those other models currently default to num_ctx 4096.
After the change, you will get better responses than before. Test it!
```
## Merge Details
In my testing so far, this merged model's performance does not seem bad, though it needs more evaluation (just my opinion).

What matters is that the context window has been extended.

Most importantly, this merge demonstrates whether the merge method I created actually works.
### Merge Method
The Reborn Merge Method was created by JayLee, aka "asiansoul".

This model was merged using the [Reborn Merge Method](https://medium.com/@puffanddmx82/reborn-elevating-model-adaptation-with-merging-for-superior-nlp-performance-f604e8e307b2).
### Models Merged
The following models were included in the merge:
* [winglian/Llama-3-8b-64k-PoSE](https://huggingface.co/winglian/Llama-3-8b-64k-PoSE)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [asiansoul/Joah-Llama-3-KoEn-8B-Coder-v2](https://huggingface.co/asiansoul/Joah-Llama-3-KoEn-8B-Coder-v2)
### Ollama Setup
How to extract a Modelfile when you do not know a model's existing Ollama Modelfile (Llama 3 example):
```
(.venv) jaylee@lees-MacBook-Pro-2 youtube % ./ollama show --modelfile llama3
# Modelfile generated by "ollama show"
# To build a new Modelfile based on this one, replace the FROM line with:
# FROM llama3:latest
FROM /Users/jaylee/.ollama/models/blobs/sha256-00e1317cbf74d901080d7100f57580ba8dd8de57203072dc6f668324ba545f29
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
PARAMETER num_keep 24
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
```
Joah Modelfile_Q5_K_M: this Modelfile works very well. If your base is a Llama 3 series model, update your other Modelfiles to match it.
```
FROM joah-llama-3-koen-64k-pose-8b-coder-reborn-v2-Q5_K_M.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>
{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>
{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>
{{ .Response }}<|eot_id|>"""
# System prompt (Korean): "As a friendly chatbot, answer the user's requests as kindly and in as much detail as possible. Answer everything in Korean."
SYSTEM """
μΉμ ν μ±λ΄μΌλ‘μ μλλ°©μ μμ²μ μ΅λν μμΈνκ³ μΉμ νκ² λ΅νμ. λͺ¨λ λλ΅μ νκ΅μ΄(Korean)μΌλ‘ λλ΅ν΄μ€.
"""
PARAMETER num_keep 24
PARAMETER temperature 0.7
PARAMETER num_predict 3000
PARAMETER num_ctx 64000
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
```
Create the Ollama model from the Modelfile:
```
ollama create joah -f ./Modelfile_Q5_K_M
```
### Configuration
[Reborn Merge Method](https://medium.com/@puffanddmx82/reborn-elevating-model-adaptation-with-merging-for-superior-nlp-performance-f604e8e307b2)
```
reference_model_name = "winglian/Llama-3-8b-64k-PoSE"          # 64k-context reference model
base_model_name = "NousResearch/Meta-Llama-3-8B-Instruct"      # base model
target_model_name = "asiansoul/Joah-Llama-3-KoEn-8B-Coder-v2"  # target model
```
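The full Reborn algorithm is described in the Medium article above; as a loudly hypothetical illustration of the three roles (reference, base, target) only, here is a toy interpolation over stand-in state dicts. The function name `reborn_style_merge` and the `alpha` knob are my own inventions for this sketch, not the actual method:

```python
# Hypothetical sketch only: the real Reborn method is defined in the linked
# article. Plain Python dicts of float lists stand in for model state dicts.
def reborn_style_merge(reference, base, target, alpha=0.5):
    """Blend target weights toward the long-context reference,
    iterating over the base model's parameter names."""
    merged = {}
    for name, base_w in base.items():
        ref_w = reference.get(name, base_w)
        tgt_w = target.get(name, base_w)
        # Assumed rule for illustration: linear interpolation per weight.
        merged[name] = [t + alpha * (r - t) for r, t in zip(ref_w, tgt_w)]
    return merged

reference = {"layer.0.weight": [1.0, 2.0]}  # e.g. winglian/Llama-3-8b-64k-PoSE
base      = {"layer.0.weight": [0.0, 0.0]}  # e.g. NousResearch/Meta-Llama-3-8B-Instruct
target    = {"layer.0.weight": [0.0, 1.0]}  # e.g. asiansoul/Joah-Llama-3-KoEn-8B-Coder-v2
print(reborn_style_merge(reference, base, target))
# {'layer.0.weight': [0.5, 1.5]}
```

In a real merge, each of these names would map to tensors loaded from the repos listed above, and the blending rule would follow the article rather than this simple interpolation.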