
Llama-3-Magenta-Instruct-4x8B-MoE

You should also check out the updated Llama-3-Peach-Instruct-4x8B-MoE!

GGUF files are available here: Llama-3-Magenta-Instruct-4x8B-MoE-GGUF.
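If you want to run the GGUF quantizations locally, below is a minimal sketch using llama-cpp-python (one option among several; the quant filename and sampling settings are illustrative, not taken from the GGUF repo):

```python
from llama_cpp import Llama

# Assumes a Q4_K_M quant downloaded from the GGUF repo; the filename is hypothetical.
llm = Llama(
    model_path="Llama-3-Magenta-Instruct-4x8B-MoE.Q4_K_M.gguf",
    n_ctx=8192,       # Llama-3 context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about mixture-of-experts."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```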

This is an experimental Mixture-of-Experts (MoE) model built with Mergekit from four Llama-3-8B-based models: Meta-Llama-3-8B-Instruct, ChatQA-1.5-8B, SFR-Iterative-DPO-LLaMA-3-8B-R, and Llama3-8B-OpenHermes-DPO.

Mergekit YAML file:

base_model: Meta-Llama-3-8B-Instruct
experts:
  - source_model: Meta-Llama-3-8B-Instruct
    positive_prompts:
    - "explain"
    - "chat"
    - "assistant"
    - "think"
    - "roleplay"
    - "versatile"
    - "helpful"
    - "factual"
    - "integrated"
    - "adaptive"
    - "comprehensive"
    - "balanced"
    negative_prompts:
    - "specialized"
    - "narrow"
    - "focused"
    - "limited"
    - "specific"
  - source_model: ChatQA-1.5-8B
    positive_prompts:
    - "python"
    - "math"
    - "solve"
    - "code"
    - "programming"
    negative_prompts:
    - "sorry"
    - "cannot"
    - "factual"
    - "concise"
    - "straightforward"
    - "objective"
    - "dry"
  - source_model: SFR-Iterative-DPO-LLaMA-3-8B-R
    positive_prompts:
    - "chat"
    - "assistant"
    - "AI"
    - "instructive"
    - "clear"
    - "directive"
    - "helpful"
    - "informative"
  - source_model: Llama3-8B-OpenHermes-DPO
    positive_prompts:
    - "analytical"
    - "accurate"
    - "logical"
    - "knowledgeable"
    - "precise"
    - "calculate"
    - "compute"
    - "solve"
    - "work"
    - "python"
    - "code"
    - "javascript"
    - "programming"
    - "algorithm"
    - "tell me"
    - "assistant"
    negative_prompts:
    - "creative"
    - "abstract"
    - "imaginative"
    - "artistic"
    - "emotional"
    - "mistake"
    - "inaccurate"
gate_mode: hidden
dtype: float16
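A note on the last two lines: per the Mergekit documentation, gate_mode: hidden initializes each expert's router weights from hidden-state representations of its positive and negative prompts, so the prompt lists above steer which expert tokens are routed to, while dtype: float16 sets the output precision. Assuming Mergekit is installed, a merge like this is produced with its `mergekit-moe` entry point, e.g. `mergekit-moe config.yaml ./output-directory`.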

Some inspiration for the Mergekit YAML file came from LoneStriker/Umbra-MoE-4x10.7-2.4bpw-h6-exl2.
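To load the merged model itself, here is a minimal sketch assuming the standard transformers API (mergekit-moe exports in a Mixtral-style MoE format, so AutoModelForCausalLM should load it directly; the prompt is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RDson/Llama-3-Magenta-Instruct-4x8B-MoE"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the dtype used in the merge
    device_map="auto",
)

# Llama-3-Instruct models expect the chat template, not raw strings.
messages = [{"role": "user", "content": "Explain mixture-of-experts routing in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```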

Model size: 24.9B params (Safetensors, FP16)
