
👑 Llama-3-Open-Ko-Linear-8B

🏝️ Merge Details

"I thought about it yesterdayβ€”merging the solid foundation of beomi/Llama-3-Open-Ko-8B with the specialized precision of beomi/Llama-3-Open-Ko-8B-Instruct-preview, using task arithmetic, is like composing a korean song that seamlessly blends timeless rhythms with contemporary solos, creating a harmonious masterpiece tailored to today's needs."

🇰🇷 Merge Method

This model was merged using the task arithmetic merge method, with beomi/Llama-3-Open-Ko-8B as the base model.
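
Task arithmetic builds the merged model by adding weighted "task vectors" (the parameter deltas between each fine-tuned model and the base) back onto the base weights. Below is a minimal sketch of that idea on plain PyTorch state dicts; it is illustrative only, not the mergekit implementation, and the function and variable names are hypothetical.

```python
import torch

def task_arithmetic_merge(base_sd, tuned_sds, weights):
    """Sketch: merged = base + sum_i weight_i * (tuned_i - base)."""
    merged = {}
    for name, base_param in base_sd.items():
        delta = torch.zeros_like(base_param)
        for sd, w in zip(tuned_sds, weights):
            delta += w * (sd[name] - base_param)  # weighted task vector
        merged[name] = base_param + delta
    return merged
```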

🇰🇷 Models Merged

The following model was included in the merge, on top of the base model:

beomi/Llama-3-Open-Ko-8B-Instruct-preview

💾 Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - layer_range: [0, 31]
    model: beomi/Llama-3-Open-Ko-8B
    parameters:
      weight: 0.2
  - layer_range: [0, 31]
    model: beomi/Llama-3-Open-Ko-8B-Instruct-preview
    parameters:
      weight: 0.8
merge_method: task_arithmetic
base_model: beomi/Llama-3-Open-Ko-8B
dtype: bfloat16
random_seed: 0
```
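
In practice, a configuration like the one above is consumed by mergekit (for example via its `mergekit-yaml` entry point), and the resulting checkpoint loads like any other Llama 3 model. The following is a minimal usage sketch; `MODEL_ID` is a placeholder for this repository's id, not a confirmed path.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Llama-3-Open-Ko-Linear-8B"  # placeholder: replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config
    device_map="auto",
)

prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```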