---
base_model:
  - beomi/Llama-3-KoEn-8B-Instruct-preview
  - asiansoul/Joah-Llama-3-MAAL-MLP-KoEn-8B-Reborn
  - NousResearch/Hermes-2-Pro-Llama-3-8B
  - saltlux/Ko-Llama3-Luxia-8B
  - defog/llama-3-sqlcoder-8b
  - Locutusque/llama-3-neural-chat-v2.2-8B
  - rombodawg/Llama-3-8B-Instruct-Coder
  - NousResearch/Meta-Llama-3-8B-Instruct
  - aaditya/Llama3-OpenBioLLM-8B
  - rombodawg/Llama-3-8B-Base-Coder-v3.5-10k
  - cognitivecomputations/dolphin-2.9.1-llama-3-8b
  - abacusai/Llama-3-Smaug-8B
  - NousResearch/Meta-Llama-3-8B
library_name: transformers
tags:
  - mergekit
  - merge
---

# Joah-Remix-Llama-3-KoEn-8B-Reborn


## Merge Details

"πŸ‘†" when i pose, You always does the "πŸ‘†"

### Merge Method

This model was merged with the DARE TIES merge method, using NousResearch/Meta-Llama-3-8B as the base.
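DARE keeps a random fraction of each fine-tuned model's delta (its weights minus the base weights) and rescales the survivors by 1/density so the expected value is preserved, before TIES-style sign-consensus merging. A toy sketch of that drop-and-rescale step, for illustration only (this is not mergekit's implementation, and the function name is made up):

```python
import random

def dare_drop_and_rescale(delta, density, seed=0):
    """Keep each delta entry with probability `density` and rescale kept
    entries by 1/density, zeroing the rest (the DARE sparsification step)."""
    rng = random.Random(seed)
    return [d / density if rng.random() < density else 0.0 for d in delta]

# delta = fine-tuned weights minus base weights (toy 1-D example)
delta = [0.4, -0.2, 0.1, 0.3]
sparse = dare_drop_and_rescale(delta, density=0.55)
# each surviving entry equals the original value divided by 0.55;
# the others are zeroed out
```

The `density` values in the configuration below play exactly this role: a higher density keeps more of a model's delta.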

### Models Merged

The following models were included in the merge:

- beomi/Llama-3-KoEn-8B-Instruct-preview
- asiansoul/Joah-Llama-3-MAAL-MLP-KoEn-8B-Reborn
- NousResearch/Hermes-2-Pro-Llama-3-8B
- saltlux/Ko-Llama3-Luxia-8B
- defog/llama-3-sqlcoder-8b
- Locutusque/llama-3-neural-chat-v2.2-8B
- rombodawg/Llama-3-8B-Instruct-Coder
- NousResearch/Meta-Llama-3-8B-Instruct
- aaditya/Llama3-OpenBioLLM-8B
- rombodawg/Llama-3-8B-Base-Coder-v3.5-10k
- cognitivecomputations/dolphin-2.9.1-llama-3-8b
- abacusai/Llama-3-Smaug-8B

## Ollama

### Ollama Create

```shell
jaylee@lees-MacBook-Pro-2 % ./ollama create joah_remix -f ./Modelfile_Q5_K_M
transferring model data
creating model layer
creating template layer
creating system layer
creating parameters layer
creating config layer
using already created layer sha256:4eadb53f0c70683aeab133c60d76b8ffc9f41ca5d49524d4b803c19e5ce7e3a5
using already created layer sha256:8ab4849b038cf0abc5b1c9b8ee1443dca6b93a045c2272180d985126eb40bf6f
writing layer sha256:ae2974c64ea5d6f488eeb1b10717a270f48fb3452432589db6f5e60472ae96ac
writing layer sha256:74ef6315972b317734fe01e7e1ad5b49fce1fa8ed3978cb66501ecb8c3a2e984
writing layer sha256:83882a5e957b8ce0d454f26bcedb2819413b49d6b967b28d60edb8ac61edfa58
writing manifest
success
```

### Modelfile

```
FROM joah-remix-llama-3-koen-8b-reborn-Q5_K_M.gguf
TEMPLATE """{{ if .System }}<|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|>{{ end }}{{ if .Prompt }}<|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|>{{ end }}<|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""

SYSTEM """
친절한 챗봇으로서 상대방의 요청에 최대한 자세하고 친절하게 답하자. 모든 대답은 한국어(Korean)으로 대답해줘.
"""

PARAMETER num_keep 24
PARAMETER temperature 0.7
PARAMETER num_predict 3000
PARAMETER stop "<|start_header_id|>"
PARAMETER stop "<|end_header_id|>"
PARAMETER stop "<|eot_id|>"
```

The system prompt (in Korean) instructs the model: "As a friendly chatbot, answer the other person's requests as thoroughly and kindly as possible. Answer everything in Korean."

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # Base model providing a general foundation without specific parameters
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.65
      weight: 0.25
  - model: asiansoul/Joah-Llama-3-MAAL-MLP-KoEn-8B-Reborn
    parameters:
      density: 0.6
      weight: 0.2
  - model: beomi/Llama-3-KoEn-8B-Instruct-preview
    parameters:
      density: 0.55
      weight: 0.125
  - model: saltlux/Ko-Llama3-Luxia-8B
    parameters:
      density: 0.55
      weight: 0.125
  - model: cognitivecomputations/dolphin-2.9.1-llama-3-8b
    parameters:
      density: 0.55
      weight: 0.05
  - model: aaditya/Llama3-OpenBioLLM-8B
    parameters:
      density: 0.55
      weight: 0.05
  - model: rombodawg/Llama-3-8B-Instruct-Coder
    parameters:
      density: 0.55
      weight: 0.05
  - model: rombodawg/Llama-3-8B-Base-Coder-v3.5-10k
    parameters:
      density: 0.55
      weight: 0.05
  - model: defog/llama-3-sqlcoder-8b
    parameters:
      density: 0.55
      weight: 0.05
  - model: Locutusque/llama-3-neural-chat-v2.2-8B
    parameters:
      density: 0.55
      weight: 0.05
  - model: NousResearch/Hermes-2-Pro-Llama-3-8B
    parameters:
      density: 0.55
      weight: 0.05
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      density: 0.55
      weight: 0.05
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
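The explicit weights in the configuration sum to 1.10 rather than 1.0. To my understanding, mergekit's TIES-based methods normalize weights across models by default, so the effective mixing fractions can be sketched as follows (model names abbreviated; this is an illustration of the arithmetic, not mergekit code):

```python
# Explicit weights from the YAML config above (the base model supplies the
# underlying parameters and carries no weight of its own).
weights = {
    "Meta-Llama-3-8B-Instruct": 0.25,
    "Joah-Llama-3-MAAL-MLP-KoEn-8B-Reborn": 0.2,
    "Llama-3-KoEn-8B-Instruct-preview": 0.125,
    "Ko-Llama3-Luxia-8B": 0.125,
}
# The remaining eight models each contribute 0.05.
for name in ["dolphin-2.9.1-llama-3-8b", "Llama3-OpenBioLLM-8B",
             "Llama-3-8B-Instruct-Coder", "Llama-3-8B-Base-Coder-v3.5-10k",
             "llama-3-sqlcoder-8b", "llama-3-neural-chat-v2.2-8B",
             "Hermes-2-Pro-Llama-3-8B", "Llama-3-Smaug-8B"]:
    weights[name] = 0.05

total = sum(weights.values())                    # 1.10 in this config
normalized = {k: v / total for k, v in weights.items()}
```

Under that assumption, Meta-Llama-3-8B-Instruct effectively contributes about 22.7% of the merged deltas, and each 0.05-weight model about 4.5%.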