
Quantization made by Richard Erkhov.

- GitHub
- Discord
- Request more models

Boundary-Solar-Chat-2x10.7B-MoE - GGUF

Original model description:

```yaml
license: apache-2.0
tags:
  - moe
  - merge
  - mergekit
  - NousResearch/Nous-Hermes-2-SOLAR-10.7B
  - upstage/SOLAR-10.7B-Instruct-v1.0
  - llama
  - Llama
base_model:
  - NousResearch/Nous-Hermes-2-SOLAR-10.7B
  - upstage/SOLAR-10.7B-Instruct-v1.0
```

Boundary-Solar-Chat-2x10.7B-MoE

Boundary-Solar-Chat-2x10.7B-MoE is a Mixture of Experts (MoE) model built from the following two models:

- NousResearch/Nous-Hermes-2-SOLAR-10.7B
- upstage/SOLAR-10.7B-Instruct-v1.0

🧩 Configuration

```yaml
base_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
dtype: float16
gate_mode: cheap_embed
experts:
  - source_model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
    positive_prompts: ["You are a helpful general assistant."]
  - source_model: upstage/SOLAR-10.7B-Instruct-v1.0
    positive_prompts: ["You are assistant for question and answering."]
```

πŸ’» Usage

```python
!pip install -qU transformers bitsandbytes accelerate
```

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "NotAiLOL/Boundary-Solar-Chat-2x10.7B-MoE"

# Load the tokenizer and build a text-generation pipeline that loads the
# weights in 4-bit (requires the bitsandbytes package installed above).
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the conversation with the model's chat template, then sample a reply.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
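The load_in_4bit entry in model_kwargs relies on the transformers/bitsandbytes integration; recent transformers releases express the same setup with an explicit BitsAndBytesConfig. A minimal sketch of that variant (the quantization parameters shown are common choices, not settings taken from this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "NotAiLOL/Boundary-Solar-Chat-2x10.7B-MoE"

# Explicit 4-bit quantization config instead of the load_in_4bit shortcut.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # assumed compute dtype, matching the card's float16 usage
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available devices
)
```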
Model details:

- Format: GGUF
- Model size: 19.2B params
- Architecture: llama
- Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
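Because this repo ships GGUF files, the quantized model can also be run without transformers, for example through llama.cpp's Python bindings. A minimal sketch using llama-cpp-python; the file name below is illustrative, so substitute whichever quantization you downloaded:

```python
from llama_cpp import Llama

# Path to a downloaded GGUF quant; the exact file name depends on the
# quantization level fetched from this repo (Q4_K_M here is illustrative).
llm = Llama(
    model_path="./Boundary-Solar-Chat-2x10.7B-MoE.Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU when one is available
)

output = llm(
    "Explain what a Mixture of Experts is in less than 100 words.",
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```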
