πŸ‘‘ Llama-3-Open-Ko-Linear-8B-GGUF

Quantized with llama.cpp

🏝️ Merge Details

"I thought about it yesterday: merging the solid foundation of beomi/Llama-3-Open-Ko-8B with the specialized precision of beomi/Llama-3-Open-Ko-8B-Instruct-preview using task arithmetic is like composing a Korean song that seamlessly blends timeless rhythms with contemporary solos, creating a harmonious masterpiece tailored to today's needs."

πŸ‡°πŸ‡· Merge Method

This model was merged with the task arithmetic merge method, using beomi/Llama-3-Open-Ko-8B as the base.
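As a rough sketch of what task arithmetic does (an illustration only, not the actual mergekit implementation): each fine-tuned model contributes a "task vector", its parameter delta from the base model, and the weighted deltas are added back onto the base weights.

```python
import numpy as np

# Toy illustration of task-arithmetic merging: each fine-tuned model's
# "task vector" (its delta from the base) is scaled by a weight, and the
# scaled deltas are summed onto the base weights.

def task_arithmetic(base, finetuned, weights):
    merged = base.copy()
    for ft, w in zip(finetuned, weights):
        merged += w * (ft - base)   # add the scaled task vector
    return merged

base = np.array([1.0, 2.0, 3.0])
ft_a = np.array([1.5, 2.0, 3.0])   # stands in for Llama-3-Open-Ko-8B
ft_b = np.array([1.0, 2.5, 3.5])   # stands in for the Instruct-preview model

merged = task_arithmetic(base, [ft_a, ft_b], weights=[0.2, 0.8])
print(merged)  # base + 0.2*(ft_a - base) + 0.8*(ft_b - base)
```

With weight 1.0 on a single model this recovers that model exactly; with smaller weights it dampens how much of each fine-tune's behavior is carried over.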

πŸ‡°πŸ‡· Models Merged

The following models were included in the merge:

- beomi/Llama-3-Open-Ko-8B
- beomi/Llama-3-Open-Ko-8B-Instruct-preview

πŸ““ Ollama

ollama create Llama-3-Open-Ko-Linear-8B -f ./Modelfile_Q5_K_M

Adjust the Modelfile below to suit your taste.
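Once the model has been registered with the `ollama create` command above, it can be queried over Ollama's REST API. A minimal sketch, assuming `ollama serve` is running locally on the default port 11434:

```python
import json
import urllib.request

# Minimal sketch of querying the created model through Ollama's REST API.
# Assumes `ollama serve` is running on the default port and the model was
# registered with the `ollama create` command above.

def build_payload(prompt, model="Llama-3-Open-Ko-Linear-8B"):
    # stream=False asks Ollama to return the full reply as one JSON object
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt, host="http://localhost:11434"):
    data = json.dumps(build_payload(prompt)).encode()
    req = urllib.request.Request(
        f"{host}/api/generate", data=data,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `generate("μ•ˆλ…•ν•˜μ„Έμš”, 자기소개λ₯Ό ν•΄μ£Όμ„Έμš”.")` then returns the model's Korean reply, since the system prompt in the Modelfile instructs it to answer in Korean.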

[Modelfile_Q5_K_M]

FROM llama-3-open-ko-linear-8b-Q5_K_M.gguf
TEMPLATE """
{{- if .System }}
system
<s>{{ .System }}</s>
{{- end }}
user
<s>Human:
{{ .Prompt }}</s>
assistant
<s>Assistant:
"""

# The Korean system prompt below translates to: "As a friendly chatbot,
# answer the other person's requests as thoroughly and kindly as possible.
# Reply to everything in Korean."
SYSTEM """
μΉœμ ˆν•œ μ±—λ΄‡μœΌλ‘œμ„œ μƒλŒ€λ°©μ˜ μš”μ²­μ— μ΅œλŒ€ν•œ μžμ„Έν•˜κ³  μΉœμ ˆν•˜κ²Œ λ‹΅ν•˜μž. λͺ¨λ“  λŒ€λ‹΅μ€ ν•œκ΅­μ–΄(Korean)으둜 λŒ€λ‹΅ν•΄μ€˜.
"""

PARAMETER temperature 0.7
PARAMETER num_predict 3000
PARAMETER num_ctx 4096
PARAMETER stop "<s>"
PARAMETER stop "</s>"
PARAMETER top_k 50
PARAMETER top_p 0.95
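The `top_k` and `top_p` parameters above prune the token distribution before sampling: only the 50 most probable tokens are considered, and within those only the smallest set whose cumulative probability reaches 0.95. A toy sketch of that filtering (illustrative only, not Ollama's actual implementation):

```python
import numpy as np

# Toy sketch of how the top_k / top_p parameters above prune the token
# distribution before sampling (not Ollama's actual implementation).

def filter_probs(probs, top_k=50, top_p=0.95):
    order = np.argsort(probs)[::-1]      # token ids, most probable first
    keep = order[:top_k]                 # top_k cutoff
    cum = np.cumsum(probs[keep])
    # keep the smallest prefix whose cumulative probability reaches top_p
    cutoff = np.searchsorted(cum, top_p) + 1
    keep = keep[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()     # renormalize over surviving tokens

probs = np.array([0.5, 0.3, 0.15, 0.05])
print(filter_probs(probs, top_k=3, top_p=0.9))
```

Lower `temperature` sharpens the distribution before this pruning; tighter `top_p` or `top_k` values make generations more conservative, looser ones more varied.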

πŸ’Ύ Configuration

The following YAML configuration was used to produce this model:

models:
  - layer_range: [0, 31]
    model: beomi/Llama-3-Open-Ko-8B
    parameters:
      weight: 0.2
  - layer_range: [0, 31]
    model: beomi/Llama-3-Open-Ko-8B-Instruct-preview
    parameters:
      weight: 0.8
merge_method: task_arithmetic
base_model: beomi/Llama-3-Open-Ko-8B
dtype: bfloat16
random_seed: 0
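Note that the base model itself appears in the config as a merge source with weight 0.2; its task vector (model minus base) is zero, so under task arithmetic these particular weights reduce to moving 80% of the way from the base toward the Instruct-preview model. A toy numeric check:

```python
import numpy as np

# The config lists the base model as a source with weight 0.2, but its
# task vector (model - base) is zero, so only the Instruct-preview delta
# contributes: the merge equals plain linear interpolation at 0.8.
base = np.array([1.0, 2.0])
instruct = np.array([3.0, 6.0])   # stands in for the Instruct-preview weights

merged = base + 0.2 * (base - base) + 0.8 * (instruct - base)
lerp = (1 - 0.8) * base + 0.8 * instruct
print(merged, lerp)
```

This is consistent with the "Linear" in the model's name: with one non-base source, task arithmetic and a weighted linear blend coincide.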
πŸ’Ώ Model Info

Model size: 8.03B params
Architecture: llama
Available quantizations: 5-bit, 8-bit