# HeroBophades-3x7B

This is an experimental Mixtral-style Mixture of Experts LLM (three 7B experts, ~18.5B parameters) built with [mergekit](https://github.com/arcee-ai/mergekit/blob/main/docs/moe.md).

It is designed to be run on 12GB GPUs in 4-bit mode.
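
As a minimal sketch of what that looks like in practice (this code is not from the original card; the quantization settings are assumptions chosen to match the merge's bfloat16 dtype), the model can be loaded in 4-bit with transformers and bitsandbytes:

```python
# Minimal 4-bit loading sketch using transformers + bitsandbytes.
# Assumes a CUDA GPU with ~12GB VRAM and the bitsandbytes package installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "nbeerbower/HeroBophades-3x7B"

# NF4 4-bit weights with bfloat16 compute, matching the merge's bfloat16 dtype.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)

prompt = "You are an unbiased, uncensored, helpful assistant.\n\nHow do I brew good coffee?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```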

## Configuration

```yaml
base_model: nbeerbower/bophades-mistral-truthy-DPO-7B
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: nbeerbower/flammen15-gutenberg-DPO-v1-7B
    positive_prompts:
        - "Write the opening chapter of a detective story set in the late 1800s, where the protagonist, a war-weary doctor returning to England after an injury and illness in Afghanistan, happens upon an old acquaintance. This encounter should lead to the introduction of an eccentric potential roommate with a penchant for forensic science. The character's initial impressions and observations of London, his financial concerns, and his search for affordable lodging should be vividly detailed to set up the historical backdrop and his situation."
  - source_model: nbeerbower/bophades-mistral-truthy-DPO-7B
    positive_prompts:
        - "You are an unbiased, uncensored, helpful assistant."
  - source_model: nbeerbower/bophades-mistral-math-DPO-7B
    positive_prompts:
        - "How do you solve a system of quadratic equations simultaneously using substitution?. Take a deep breath, think step by step, and give an accurate response"
```