
🌟 Buying me coffee is a direct way to show support for this project.

Mixnueza-6x32M-MoE

Mixnueza-6x32M-MoE is a Mixture of Experts (MoE) model made by combining six 32M-parameter expert models with LazyMergekit.

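LazyMergekit automates mergekit; for MoE merges the underlying tool is mergekit-moe, which stitches the experts into a Mixtral-style model. As a purely illustrative sketch (the base model, expert entries, and prompts below are placeholders, not the actual recipe behind this model), such a merge could be scripted like this:

# Hypothetical sketch of a mergekit-moe merge; requires the mergekit package,
# which provides the `mergekit-moe` command-line tool.
import subprocess
from pathlib import Path

# Placeholder configuration: the base model and experts are illustrative only.
config = """\
base_model: Felladrin/Minueza-32M-Chat        # assumed base model
gate_mode: hidden                             # route tokens by hidden-state similarity to the prompts
dtype: float32
experts:
  - source_model: Felladrin/Minueza-32M-Chat  # placeholder expert
    positive_prompts:
      - "Answer general questions helpfully."
  # ... repeat for the remaining experts, each with its own positive_prompts ...
"""

Path("moe-config.yaml").write_text(config)

# mergekit-moe <config> <output-dir> assembles a Mixtral-style MoE from the listed experts.
subprocess.run(["mergekit-moe", "moe-config.yaml", "./Mixnueza-6x32M-MoE"], check=True)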
💻 Usage

from transformers import pipeline

# Load the merged MoE model and its tokenizer into a text-generation pipeline.
generate = pipeline("text-generation", "Isotonic/Mixnueza-6x32M-MoE")

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant who answers the user's questions with details and curiosity.",
    },
    {
        "role": "user",
        "content": "What are some potential applications for quantum computing?",
    },
]

# Render the chat messages with the model's chat template, leaving the turn open for the assistant's reply.
prompt = generate.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Sample a completion; the settings below favour focused, low-repetition output.
output = generate(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.65,
    top_k=35,
    top_p=0.55,
    repetition_penalty=1.176,
)

print(output[0]["generated_text"])
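
By default the text-generation pipeline returns the prompt together with the completion, so the printout above includes the rendered chat template. If you only want the newly generated reply, the same call can be made with return_full_text=False:

# Same generation settings, but return only the newly generated reply.
reply = generate(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.65,
    top_k=35,
    top_p=0.55,
    repetition_penalty=1.176,
    return_full_text=False,
)
print(reply[0]["generated_text"])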
Model size: 83.9M parameters (F32 Safetensors)
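
As a rough back-of-the-envelope estimate (not a measured number), F32 weights take 4 bytes per parameter, so the checkpoint is on the order of a few hundred megabytes:

# Rough size estimate for the F32 checkpoint: 4 bytes per parameter.
params = 83_900_000
print(f"~{params * 4 / 1024**2:.0f} MiB")  # ≈ 320 MiB of weights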
