Quantization made by Richard Erkhov.
TinyQwex-4x620M-MoE - GGUF
- Model creator: https://huggingface.co/Isotonic/
- Original model: https://huggingface.co/Isotonic/TinyQwex-4x620M-MoE/
| Name | Quant method | Size |
|---|---|---|
| TinyQwex-4x620M-MoE.Q2_K.gguf | Q2_K | 0.49GB |
| TinyQwex-4x620M-MoE.IQ3_XS.gguf | IQ3_XS | 0.54GB |
| TinyQwex-4x620M-MoE.IQ3_S.gguf | IQ3_S | 0.56GB |
| TinyQwex-4x620M-MoE.Q3_K_S.gguf | Q3_K_S | 0.56GB |
| TinyQwex-4x620M-MoE.IQ3_M.gguf | IQ3_M | 0.57GB |
| TinyQwex-4x620M-MoE.Q3_K.gguf | Q3_K | 0.6GB |
| TinyQwex-4x620M-MoE.Q3_K_M.gguf | Q3_K_M | 0.6GB |
| TinyQwex-4x620M-MoE.Q3_K_L.gguf | Q3_K_L | 0.64GB |
| TinyQwex-4x620M-MoE.IQ4_XS.gguf | IQ4_XS | 0.67GB |
| TinyQwex-4x620M-MoE.Q4_0.gguf | Q4_0 | 0.69GB |
| TinyQwex-4x620M-MoE.IQ4_NL.gguf | IQ4_NL | 0.7GB |
| TinyQwex-4x620M-MoE.Q4_K_S.gguf | Q4_K_S | 0.7GB |
| TinyQwex-4x620M-MoE.Q4_K.gguf | Q4_K | 0.73GB |
| TinyQwex-4x620M-MoE.Q4_K_M.gguf | Q4_K_M | 0.73GB |
| TinyQwex-4x620M-MoE.Q4_1.gguf | Q4_1 | 0.76GB |
| TinyQwex-4x620M-MoE.Q5_0.gguf | Q5_0 | 0.82GB |
| TinyQwex-4x620M-MoE.Q5_K_S.gguf | Q5_K_S | 0.82GB |
| TinyQwex-4x620M-MoE.Q5_K.gguf | Q5_K | 0.84GB |
| TinyQwex-4x620M-MoE.Q5_K_M.gguf | Q5_K_M | 0.84GB |
| TinyQwex-4x620M-MoE.Q5_1.gguf | Q5_1 | 0.88GB |
| TinyQwex-4x620M-MoE.Q6_K.gguf | Q6_K | 0.96GB |
| TinyQwex-4x620M-MoE.Q8_0.gguf | Q8_0 | 1.24GB |
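These GGUF files run on any llama.cpp-based runtime. Below is a minimal sketch (not from the original card) using huggingface_hub and llama-cpp-python; the repo_id is a placeholder, so substitute the repository that actually hosts these quantization files.

from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from the table above. The repo_id is a placeholder;
# point it at the repository hosting these GGUF files.
gguf_path = hf_hub_download(
    repo_id="<quantizer>/TinyQwex-4x620M-MoE-gguf",  # placeholder repo id
    filename="TinyQwex-4x620M-MoE.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)
out = llm("Q: What is a Mixture of Experts? A:", max_tokens=128)
print(out["choices"][0]["text"])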
Original model description:
License: apache-2.0
Tags: moe, merge, mergekit, lazymergekit, Qwen/Qwen1.5-0.5B
TinyQwex-4x620M-MoE
TinyQwex-4x620M-MoE is a Mixture of Experts (MoE) built from four copies of Qwen/Qwen1.5-0.5B using LazyMergekit (see the 🧩 Configuration section below).
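For intuition, here is an illustrative sketch of the MoE idea itself, not the exact TinyQwex internals: an MoE layer keeps several expert networks plus a learned gate that routes each input to its top-scoring experts and combines their outputs by the gate weights.

import torch
import torch.nn as nn

class ToyMoE(nn.Module):
    """Toy top-2 MoE layer; sizes and names are made up for this demo."""
    def __init__(self, dim=64, n_experts=4, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
        self.gate = nn.Linear(dim, n_experts)  # learned router
        self.top_k = top_k

    def forward(self, x):  # x: (batch, dim)
        scores = self.gate(x).softmax(dim=-1)           # (batch, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # pick top-k experts per input
        out = torch.zeros_like(x)
        for b in range(x.size(0)):
            for w, i in zip(weights[b], idx[b]):
                out[b] += w * self.experts[int(i)](x[b])  # weighted sum of expert outputs
        return out

moe = ToyMoE()
print(moe(torch.randn(2, 64)).shape)  # torch.Size([2, 64])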
🌟 Buying me coffee is a direct way to show support for this project.
💻 Usage
!pip install -qU transformers bitsandbytes accelerate einops

from transformers import AutoTokenizer
import transformers
import torch

model = "Isotonic/TinyQwex-4x620M-MoE"
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-0.5B")  # tokenizer of the Qwen1.5-0.5B base

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,  # reuse the tokenizer loaded above
    model_kwargs={"torch_dtype": torch.bfloat16, "load_in_4bit": True},
)

messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
🧩 Configuration
experts:
  - source_model: Qwen/Qwen1.5-0.5B
    positive_prompts:
      - "reasoning"
  - source_model: Qwen/Qwen1.5-0.5B
    positive_prompts:
      - "program"
  - source_model: Qwen/Qwen1.5-0.5B
    positive_prompts:
      - "storytelling"
  - source_model: Qwen/Qwen1.5-0.5B
    positive_prompts:
      - "Instruction following assistant"