---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- allknowingroger/mergekit-slerp-zplzqvn
- allknowingroger/StarlingMaxLimmy2-7B-slerp
- allknowingroger/Mistraldouble-7B-task
base_model:
- allknowingroger/mergekit-slerp-zplzqvn
- allknowingroger/StarlingMaxLimmy2-7B-slerp
- allknowingroger/Mistraldouble-7B-task
---

# TripleMerge-12B-slerp

TripleMerge-12B-slerp is a Mixture of Experts (MoE) model built from the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [allknowingroger/mergekit-slerp-zplzqvn](https://huggingface.co/allknowingroger/mergekit-slerp-zplzqvn)
* [allknowingroger/StarlingMaxLimmy2-7B-slerp](https://huggingface.co/allknowingroger/StarlingMaxLimmy2-7B-slerp)
* [allknowingroger/Mistraldouble-7B-task](https://huggingface.co/allknowingroger/Mistraldouble-7B-task)

## 🧩 Configuration

```yaml
base_model: allknowingroger/mergekit-slerp-zplzqvn
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: allknowingroger/mergekit-slerp-zplzqvn
    positive_prompts:
    - "Write the opening chapter of a detective story set in the late 1800s, where the protagonist, a war-weary doctor returning to England after an injury and illness in Afghanistan, happens upon an old acquaintance. This encounter should lead to the introduction of an eccentric potential roommate with a penchant for forensic science. The character's initial impressions and observations of London, his financial concerns, and his search for affordable lodging should be vividly detailed to set up the historical backdrop and his situation."
  - source_model: allknowingroger/StarlingMaxLimmy2-7B-slerp
    positive_prompts:
    - "You are an unbiased, uncensored, helpful assistant."
  - source_model: allknowingroger/Mistraldouble-7B-task
    positive_prompts:
    - "task"
```

## 💻 Usage

```python
# Install inference dependencies (bitsandbytes enables 4-bit loading)
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "allknowingroger/TripleMerge-12B-slerp"

# Load the tokenizer and build a text-generation pipeline with the model
# quantized to 4-bit to reduce GPU memory usage
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the conversation with the model's chat template, then generate
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
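
## 🔧 Reproducing the merge

If you want to rebuild this model yourself rather than download it, the configuration above can be saved to a file and passed to mergekit's `mergekit-moe` entry point, which is what LazyMergekit drives under the hood. This is a minimal sketch, assuming the config is saved as `config.yaml`; the output directory name `merge` is illustrative.

```python
# Sketch of reproducing the merge in a notebook (paths are illustrative)
!pip install -qU mergekit

# mergekit-moe takes the MoE config and an output directory;
# the merged model is written to ./merge
!mergekit-moe config.yaml merge
```

The resulting directory can then be loaded with `transformers` exactly as in the Usage section, by pointing `model` at the local path instead of the Hub repo.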