
Model Card for Model ID

This is a mixture of experts created with mergekit and based on mistralai/Mistral-7B-v0.1. It combines four 7B experts into a single model of 24.2B parameters stored in float16.
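
The following is a minimal sketch of loading the merged model with Hugging Face Transformers for text generation. The repository ID is a placeholder, since this card does not state the final model name; device_map="auto" assumes accelerate is installed.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with the actual repository ID of this merged model.
model_id = "your-username/your-moe-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "You are a helpful assistant. Explain what a mixture of experts is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))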

Model Details

Model Description

Model Sources

Created using mergekit with the following configuration:

base_model: mistralai/Mistral-7B-Instruct-v0.2
dtype: float16
gate_mode: cheap_embed
experts:
  - source_model: HuggingFaceH4/zephyr-7b-beta
    positive_prompts: ["You are an helpful general-pupose assistant."]
  - source_model: mistralai/Mistral-7B-Instruct-v0.2
    positive_prompts: ["You are helpful assistant."]
  - source_model: teknium/OpenHermes-2.5-Mistral-7B
    positive_prompts: ["You are helpful a coding assistant."]
  - source_model: meta-math/MetaMath-Mistral-7B
    positive_prompts: ["You are an assistant good at math."]

The method and code used to create the model are explained here: Maixtchup: Make Your Own Mixture of Experts with Mergekit

Uses

This model has not been fine-tuned after merging. You may fine-tune it with PEFT using adapters.
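
A minimal sketch of attaching LoRA adapters with PEFT, assuming a CUDA GPU, that bitsandbytes is available for 4-bit loading, and that the standard Mixtral-style attention projection names apply to the merged model. The repository ID and hyperparameters are illustrative placeholders, not values from this card.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Placeholder: replace with the actual repository ID of this merged model.
model_id = "your-username/your-moe-model"

# Load the MoE in 4-bit to keep memory usage manageable.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters on the attention projections; hyperparameters are illustrative.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# The resulting model can then be passed to a trainer (e.g. a supervised fine-tuning loop).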

Model Card Contact

The Kaitchup
