
Joah-Llama-3-KoEn-8B-Coder-v2

[Screenshots: verification values confirming the extended context length]

This model was merged using PoSE to extend Llama's context length to 64k.
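For background, PoSE (Positional Skip-wise Training) lets a model learn long-range positions without ever seeing long inputs: chunks of a short training window are assigned position ids sampled from the full target range. The sketch below is a toy illustration of that idea only; `pose_position_ids` is a hypothetical helper, not code from PoSE or from this merge.

```python
import random

def pose_position_ids(seq_len: int, target_len: int, n_chunks: int = 2) -> list:
    """Split a seq_len window into chunks and place each chunk at a
    random offset within [0, target_len), so short inputs cover the
    full extended position range."""
    chunk = seq_len // n_chunks
    positions = []
    start = 0
    for i in range(n_chunks):
        # A random skip pushes this chunk deeper into the target range,
        # while leaving room for the chunks that follow.
        max_start = target_len - (n_chunks - i) * chunk
        start = random.randint(start, max_start)
        positions.extend(range(start, start + chunk))
        start += chunk
    return positions

# An 8-token window spread over a 64-position target range:
print(pose_position_ids(seq_len=8, target_len=64, n_chunks=2))
```

Each call yields 8 strictly increasing position ids scattered inside [0, 64), which is the trick the 64k PoSE checkpoint relies on at a much larger scale.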

The Reborn Merge Method was created and proposed by JayLee, aka "asiansoul".

Expanding the context length while merging with mergekit is difficult. There may be a method I don't know about, but I confirmed that the indicator values shown in the image above worked.

You must carefully verify any claimed context expansion: if the values have not been confirmed like those in the image above, the expansion is not real. Check your Hugging Face target repo very carefully.
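One concrete way to run that check is to read `max_position_embeddings` out of the target repo's `config.json` and confirm it reports the extended length (64k = 65536). A minimal sketch, assuming the config file has been downloaded locally; the helper name and the sample values are my own, not from the repo:

```python
def context_length_ok(config: dict, expected: int = 65536) -> bool:
    """True if the model config reports at least the expected context length."""
    return config.get("max_position_embeddings", 0) >= expected

# config.json fragments (values assumed for illustration):
merged_cfg = {"max_position_embeddings": 65536, "rope_theta": 500000.0}
unexpanded_cfg = {"max_position_embeddings": 8192}

print(context_length_ok(merged_cfg))      # True
print(context_length_ok(unexpanded_cfg))  # False
```

If the merged repo still shows the stock Llama-3 value of 8192, the context was not actually expanded, regardless of what the model card claims.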

Merging up to 256k stretches the limits of my computer, so I will try it later when I have a more capable machine. If you have the hardware, give it a try.

See the article below about Reborn.

🎑 Merge Details

The performance of this merged model doesn't seem bad, though it needs more testing. Just my opinion ^^ 🏟️

What is important is that the context has been expanded.

The most important thing is verifying whether the merge method I created works correctly.

Merge Method

Reborn Merge Method: created by JayLee, aka "asiansoul".

This model was merged using the Reborn Merge Method.
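The Reborn method itself is not spelled out in this card, so the sketch below is explicitly NOT it. It only illustrates the basic primitive most mergekit-style methods build on: linear interpolation of two checkpoints' parameters, key by key (plain Python lists stand in for tensors here).

```python
def lerp_state_dicts(a: dict, b: dict, t: float = 0.5) -> dict:
    """Blend two 'state dicts' key by key: (1 - t) * a + t * b.

    a, b map parameter names to lists of floats (stand-ins for tensors);
    t is the interpolation weight toward b.
    """
    return {k: [(1 - t) * x + t * y for x, y in zip(a[k], b[k])] for k in a}

sd_a = {"w": [0.0, 2.0]}
sd_b = {"w": [4.0, 6.0]}
print(lerp_state_dicts(sd_a, sd_b, t=0.5)["w"])  # [2.0, 4.0]
```

Real merge methods differ in how they choose per-parameter weights and handle mismatched layers, which is exactly where a custom method like Reborn would depart from this naive blend.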

Models Merged

The following models were included in the merge:

Configuration

Reborn Merge Method

reference_model_name = "winglian/Llama-3-8b-64k-PoSE"
base_model_name = "NousResearch/Meta-Llama-3-8B-Instruct"
target_model_name = "asiansoul/Joah-Llama-3-KoEn-8B-Coder-v2"  # target model.
Safetensors · Model size: 8.03B params · Tensor type: BF16