Minueza-32Mx2-Chat

Recommended Prompt Format

<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{user_message}<|im_end|>
<|im_start|>assistant
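
For reference, a single-turn conversation rendered in this format looks like the following (the system and user messages here are illustrative, not from the card):

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
What are some potential applications for quantum computing?<|im_end|>
<|im_start|>assistant

The model then generates the assistant's reply and ends it with <|im_end|>.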

Recommended Inference Parameters

do_sample: true
temperature: 0.65
top_p: 0.55
top_k: 35
repetition_penalty: 1.176

Usage Example

from transformers import pipeline

# Load the model and its tokenizer through a text-generation pipeline.
generate = pipeline("text-generation", "Felladrin/Minueza-32Mx2-Chat")

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant who answers the user's questions with details and curiosity.",
    },
    {
        "role": "user",
        "content": "What are some potential applications for quantum computing?",
    },
]

# Render the messages with the model's chat template (the format shown
# above), leaving the assistant turn open for generation.
prompt = generate.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate with the recommended inference parameters.
output = generate(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.65,
    top_k=35,
    top_p=0.55,
    repetition_penalty=1.176,
)

print(output[0]["generated_text"])
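
By default, generated_text echoes the prompt followed by the completion. To print only the newly generated reply, the pipeline's standard return_full_text option can be used (not shown in the original card):

output = generate(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.65,
    top_k=35,
    top_p=0.55,
    repetition_penalty=1.176,
    return_full_text=False,
)

print(output[0]["generated_text"])  # assistant reply only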

How it was trained

This model was trained over several sessions with TRL's SFT Trainer and DPO Trainer, using the following settings:

For Supervised Fine-Tuning:

| Hyperparameter | Value |
| --- | --- |
| Learning rate | 2e-6 |
| Total train batch size | 16 |
| Max. sequence length | 2048 |
| Weight decay | 0.01 |
| Warmup ratio | 0.1 |
| Optimizer | Adam with betas=(0.9, 0.999) and epsilon=1e-08 |
| Scheduler | cosine |
| Seed | 42 |
| NEFTune noise alpha | 5 |
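
As a rough sketch of what one SFT session with these settings might look like (using TRL's SFTTrainer API as of early 2024; the base checkpoint and dataset paths below are placeholders, not taken from this card):

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_model = "path/to/base-checkpoint"  # placeholder; see the base model on the hub page
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Placeholder dataset whose "text" column holds conversations already
# rendered in the chat format shown above.
dataset = load_dataset("json", data_files="chat_data.jsonl", split="train")

args = TrainingArguments(
    output_dir="sft-output",
    learning_rate=2e-6,
    per_device_train_batch_size=16,  # total train batch size 16, single device assumed
    weight_decay=0.01,
    warmup_ratio=0.1,
    lr_scheduler_type="cosine",
    seed=42,
    neftune_noise_alpha=5,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are already the defaults.
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    dataset_text_field="text",
    max_seq_length=2048,
)
trainer.train()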

For Direct Preference Optimization:

| Hyperparameter | Value |
| --- | --- |
| Learning rate | 5e-7 |
| Total train batch size | 16 |
| Max. length | 1024 |
| Max. prompt length | 512 |
| Max. steps | 200 |
| Weight decay | 0 |
| Warmup ratio | 0.1 |
| Beta | 0.1 |
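
And a matching DPO session, again as a sketch under the same assumptions (the preference dataset is a placeholder; in the TRL API of that period, beta, max_length, and max_prompt_length were passed directly to DPOTrainer):

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

sft_model = "path/to/sft-checkpoint"  # placeholder; the model produced by the SFT sessions
tokenizer = AutoTokenizer.from_pretrained(sft_model)
model = AutoModelForCausalLM.from_pretrained(sft_model)

# Placeholder preference dataset with "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("json", data_files="preferences.jsonl", split="train")

args = TrainingArguments(
    output_dir="dpo-output",
    learning_rate=5e-7,
    per_device_train_batch_size=16,  # total train batch size 16, single device assumed
    max_steps=200,
    weight_decay=0.0,
    warmup_ratio=0.1,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,  # TRL builds the frozen reference model internally
    args=args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    beta=0.1,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()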