
🌟 Buying me coffee is a direct way to show support for this project.

Mixnueza-6x32M-MoE

Mixnueza-6x32M-MoE is a Mixture of Experts (MoE) model built with LazyMergekit from six 32M-parameter expert models.
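In a sparse MoE, a small router scores the experts for each token and only the top-k experts are evaluated, with their outputs mixed by the softmaxed router scores. A minimal, self-contained sketch of top-k softmax gating (illustrative only; not the exact routing code of this model):

```python
import numpy as np

def topk_gate(x, w_router, k=2):
    """Route one token vector to its top-k experts.

    x: (d,) token hidden state; w_router: (n_experts, d) router weights.
    Returns (expert_indices, mixing_weights) with weights summing to 1.
    """
    logits = w_router @ x                       # one score per expert
    topk = np.argsort(logits)[-k:]              # indices of the k best experts
    scores = np.exp(logits[topk] - logits[topk].max())
    return topk, scores / scores.sum()          # softmax over the selected k

# Toy example: 6 experts over an 8-dim hidden state, as in a 6-expert MoE
rng = np.random.default_rng(0)
idx, w = topk_gate(rng.normal(size=8), rng.normal(size=(6, 8)), k=2)
print(idx, w)  # two expert ids and their mixing weights
```

Only the selected experts run a forward pass, which is why a 6x32M merge stays cheap at inference despite holding six experts' worth of weights.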

💻 Usage

from transformers import pipeline

# Load the model as a text-generation pipeline
generate = pipeline("text-generation", "Isotonic/Mixnueza-6x32M-MoE")

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant who answers the user's questions with details and curiosity.",
    },
    {
        "role": "user",
        "content": "What are some potential applications for quantum computing?",
    },
]

# Render the chat into the model's prompt format
prompt = generate.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

output = generate(
    prompt,
    max_new_tokens=256,       # cap on generated tokens
    do_sample=True,           # sample instead of greedy decoding
    temperature=0.65,
    top_k=35,
    top_p=0.55,
    repetition_penalty=1.176,
)

print(output[0]["generated_text"])
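The sampling knobs above reshape the next-token distribution before a token is drawn: temperature flattens or sharpens the logits, top_k keeps only the k highest-scoring tokens, and top_p keeps the smallest set whose cumulative probability reaches p. A minimal sketch of how they combine (assumed helper name; not the transformers internals):

```python
import numpy as np

def filter_logits(logits, temperature=0.65, top_k=35, top_p=0.55):
    """Apply temperature, then top-k, then nucleus (top-p) filtering."""
    logits = np.asarray(logits, dtype=float) / temperature
    # top-k: mask everything outside the k highest logits
    if top_k and top_k < logits.size:
        cutoff = np.sort(logits)[-top_k]
        logits = np.where(logits < cutoff, -np.inf, logits)
    probs = np.exp(logits - logits.max())       # exp(-inf) -> 0 drops masked tokens
    probs /= probs.sum()
    # top-p: keep the smallest prefix of tokens whose mass reaches top_p
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cum, top_p) + 1]
    mask = np.zeros_like(probs)
    mask[keep] = probs[keep]
    return mask / mask.sum()                    # renormalized distribution

# Toy vocabulary of 4 tokens: the least likely token is filtered out
p = filter_logits([2.0, 1.0, 0.5, -1.0], temperature=1.0, top_k=3, top_p=0.9)
print(p)
```

Lower temperature and tighter top_p values, like the 0.65/0.55 used above, bias generation toward the model's most confident continuations; repetition_penalty additionally down-weights tokens that already appear in the context.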
Downloads last month: 95
Model size: 83.9M params (Safetensors, tensor type F32)
