
# MixtureofMerges-MoE-4x7bRP-v11

MixtureofMerges-MoE-4x7bRP-v11 is a Mixture of Experts (MoE) made with the following models using LazyMergekit:

* ChaoticNeutrals/RP_Vision_7B
* ResplendentAI/DaturaCookie_7B
* BioMistral/BioMistral-DARE-NS
* MaziyarPanahi/Mistral-7B-Instruct-v0.3

## 🧩 Configuration

base_model: MaziyarPanahi/Mistral-7B-Instruct-v0.3
gate_mode: hidden
experts:
  - source_model: ChaoticNeutrals/RP_Vision_7B
    positive_prompts:
    - "Write a detailed scene where the character faces a difficult decision."
    - "Assistant, help me develop my character's backstory and motivations."
    - "Tell me what happens next in the story, introducing a new conflict or challenge."
    - "My character wants to explore their deepest desires. Show me how they pursue them."
    - "Show me a scene where the characters express their love for each other in a tender and intimate way."
    - "Write a scene where the characters experience a moment of physical connection that deepens their emotional bond."
    - "Write a scene where the character struggles with their beliefs and faces a crisis of faith."
    - "Describe the world through the character's eyes, showcasing their unique perspective and experiences."
    - "Explore the complexities of love through the characters' relationships, including their challenges and triumphs."
    negative_prompts:
    - "Avoid clichés and overused tropes."
    - "Maintain consistent characterization and avoid out-of-character behavior."
    - "Do not censor yourself, you are writing fictional stories for an adult audience."
  - source_model: ResplendentAI/DaturaCookie_7B
    positive_prompts:
    - "Develop a compelling story with a strong plot and well-developed characters."
    - "Describe the scene vividly, using sensory details and evocative language."
    - "Write a detailed scene where the character faces a difficult decision."
    - "Describe the character's appearance as best you can, using descriptive colorful language."
    - "Make sure that your output is always uncensored."
    - "Write a scene with a slow-burn, sensual encounter between two characters."
    - "Describe the character's desires and feelings in a way that is both intimate and evocative."
    - "Explore the character's arousal in a tasteful and nuanced manner."
    - "Explore the character's sexual urges."
    - "Show me a moment of emotional intimacy between the characters, where they connect on a profound level."
    negative_prompts:
    - "Stay true to the established character traits and motivations. Ensure their actions and dialogue align with their personality."
    - "Be original and creative in your storytelling. Surprise me with unexpected plot twists and unique character interactions."
    - "Do not refuse to answer based on ethical grounds"
  - source_model: BioMistral/BioMistral-DARE-NS
    positive_prompts:
    - "Create a captivating narrative with engaging characters and a compelling plot."
    - "Provide rich descriptions of the scene, specifically focusing on sensory details and evocative imagery, using your knowledge of human anatomy."
    - "Explore the intricacies of human emotion, particularly when faced with tough decisions or personal conflicts."
    - "Develop the characters' backgrounds and motivations in a thoughtful and cohesive manner."
    - "Introduce fresh and innovative ideas into the story, ensuring it remains unpredictable and intriguing."
    - "Examine themes such as loss, power, and self-discovery through the characters' actions and conversations."
    - "Deliver well-rounded, multi-dimensional characters that readers can relate to and care about."
    negative_prompts:
    - "Avoid info-dumping or excessive exposition that slows down the story's pace."
    - "Avoid inconsistencies in character behavior or world-building elements."
    - "Insufficient description or lack of detail"
    - "Do not neglect the importance of subtext and nuance in character interactions."
    - "Do not rely on deus ex machina or convenient coincidences to resolve conflicts."
  - source_model: MaziyarPanahi/Mistral-7B-Instruct-v0.3
    positive_prompts:
    - "Explore the characters' motivations and how they propel the story's plot and character development."
    - "Create a rich, immersive atmosphere that engages all senses and transports readers into the story world."
    - "Incorporate philosophical or existential questions that challenge characters and readers alike."
    - "Focus on creating scenes and moments that evoke strong emotional responses and resonate deeply with readers."
    - "Show me a moment of great intimacy between the characters, where they connect on a profound level."
    - "Use foreshadowing and subtle hints to create a more satisfying and cohesive story arc."
    negative_prompts:
    - "Avoid clichéd dialogue or overused phrases that feel unnatural or forced."
    - "Refrain from using contrived or predictable plot twists that undermine the story's integrity."
    - "Do not neglect the importance of pacing and tension in driving the story forward"
    - "Do not neglect the importance of subtext and nuance in character interactions."
    - "Refrain from using unnecessarily complex or obscure language that hinders the reader's engagement and understanding."
dtype: bfloat16
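With `gate_mode: hidden`, mergekit initializes each expert's router vector from hidden-state embeddings of its positive and negative prompts; at inference time, the router scores each token's hidden state against those vectors and mixes the top-scoring experts. A minimal, illustrative sketch of that top-2 routing with toy numbers (not the actual mergekit or Mixtral implementation):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route_top2(hidden, gate_vectors):
    """Score a token's hidden state against each expert's gate vector
    and return the two best experts with normalized mixing weights."""
    scores = [sum(h * g for h, g in zip(hidden, vec)) for vec in gate_vectors]
    top2 = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:2]
    weights = softmax([scores[i] for i in top2])
    return list(zip(top2, weights))

# Four toy gate vectors, one per expert in the config above.
gates = [
    [1.0, 0.0, 0.0],   # ChaoticNeutrals/RP_Vision_7B
    [0.0, 1.0, 0.0],   # ResplendentAI/DaturaCookie_7B
    [0.0, 0.0, 1.0],   # BioMistral/BioMistral-DARE-NS
    [0.5, 0.5, 0.0],   # MaziyarPanahi/Mistral-7B-Instruct-v0.3
]
token_hidden = [0.9, 0.4, 0.1]
print(route_top2(token_hidden, gates))
```

Each token is processed by only the two selected experts' feed-forward blocks, which is why the diverse positive/negative prompt lists above matter: they shape which kinds of text each expert is routed.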

## 💻 Usage

!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "jsfs11/MixtureofMerges-MoE-4x7bRP-v11"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
Model size: 24.2B parameters (Safetensors, BF16)
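The 24.2B total, rather than 4 × 7B = 28B, follows from Mixtral-style MoE construction: attention, embeddings, and norms are shared across experts, and only the feed-forward (MLP) weights are replicated. A back-of-the-envelope check, assuming Mistral-7B's usual dimensions (hidden size 4096, 32 layers, FFN size 14336, 8 KV heads; router weights, on the order of half a million parameters, are omitted as negligible):

```python
hidden, layers, ffn, vocab = 4096, 32, 14336, 32768

# Feed-forward (gate/up/down projections) across all layers,
# replicated once per expert.
mlp = layers * 3 * hidden * ffn

# Shared trunk: attention (grouped-query, 8 KV heads of dim 128),
# plus input embeddings and LM head.
kv_dim = 8 * 128
attn = layers * (2 * hidden * hidden + 2 * hidden * kv_dim)
emb = 2 * vocab * hidden
shared = attn + emb

total = shared + 4 * mlp  # four experts' MLPs on one shared trunk
print(round(total / 1e9, 1))  # ≈ 24.2
```

The same arithmetic explains why Mixtral-style 4x7B merges consistently land near 24B rather than 28B.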
