
mistral_2x7b_v0.1

mistral_2x7b_v0.1 is a Mixture of Experts (MoE) model built from the following models using mergekit-moe:

🧩 Configuration

gate_mode: hidden # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16 # output dtype (float32, float16, or bfloat16)
experts:
  - source_model: mistralai/Mistral-7B-Instruct-v0.2
    positive_prompts:
      - "What are some fun activities to do in Seattle?"
      - "What are the potential long-term economic impacts of raising the minimum wage?"
  - source_model: nvidia/OpenMath-Mistral-7B-v0.1-hf
    positive_prompts:
     - "What is 27 * 49? Show your step-by-step work."
     - "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"

💻 Usage

!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "HachiML/mistral_2x7b_v0.1"

# Load the tokenizer and build a text-generation pipeline.
# load_in_4bit quantizes the merged weights with bitsandbytes to reduce GPU memory use.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the request with the model's chat template, then sample a completion.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
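
If you prefer to manage loading explicitly rather than through the pipeline helper, the sketch below shows a roughly equivalent 4-bit setup with BitsAndBytesConfig; the specific quantization arguments and the example prompt are illustrative choices, not taken from this card.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "HachiML/mistral_2x7b_v0.1"

# 4-bit quantization config (requires bitsandbytes and a CUDA GPU).
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Build the chat prompt and generate directly with model.generate.
messages = [{"role": "user", "content": "What is 27 * 49? Show your step-by-step work."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))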