
# microchar_moe

microchar_moe is a Mixture of Experts (MoE) merge made with the following models using LazyMergekit:

* [Corianas/Microllama_Char_88k_step](https://huggingface.co/Corianas/Microllama_Char_88k_step) (used as both experts)

## 🧩 Configuration

```yaml
base_model: Corianas/Microllama_Char_88k_step
gate_mode: random # one of "hidden", "cheap_embed", or "random"
dtype: bfloat16 # output dtype (float32, float16, or bfloat16)
## (optional)
# experts_per_token: 2
experts:
  - source_model: Corianas/Microllama_Char_88k_step
    positive_prompts:
      - ""
    ## (optional)
    # negative_prompts:
    #   - "This is a prompt expert_model_1 should not be used for"
  - source_model: Corianas/Microllama_Char_88k_step
    positive_prompts:
      - ""
```

πŸ’» Usage

```python
!pip install -qU transformers bitsandbytes accelerate
```

```python
from transformers import AutoTokenizer
import transformers
import torch

model = "Corianas/microchar_moe"

# Load the tokenizer and build a text-generation pipeline,
# loading the model in 4-bit via bitsandbytes to save memory.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the conversation with the model's chat template, then sample a response.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
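
mergekit-moe merges are typically exported in the Mixtral architecture, so the routing hyperparameters can be read back from the model config. A short sketch, assuming a Mixtral-style config (field names are that architecture's, not confirmed by this card):

```python
# Sketch: inspect the merged model's MoE routing settings.
# Assumes a Mixtral-style config with num_local_experts (total experts)
# and num_experts_per_tok (experts routed per token).
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Corianas/microchar_moe")
print("experts:", cfg.num_local_experts)
print("experts per token:", cfg.num_experts_per_tok)
```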