---
license: apache-2.0
language:
- en
---

## Mixtral Experts with DeepSeek-MoE Architecture

[![Discord](https://img.shields.io/discord/1156064224225808488?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FtCMkMDDHwm)](https://discord.gg/cognitivecomputations)

Discord: https://discord.gg/cognitivecomputations

This is a direct extraction of the 8 experts from [Mixtral-8x7b-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), transferred into the DeepSeek-MoE architecture.

- **Expert Configuration:** The router activates 2 experts per token (a config spot-check is sketched at the end of this card).
- **Performance:** In our informal testing, performance matches the original Instruct model, if not slightly exceeding it.
- **Evaluations:** Formal evals will follow once compute frees up. The model also appears more malleable to further training.
- **Experimentation:** This is the first of several MoE expert extraction and modification projects we're working on; more to come. Enjoy.

## Instruction Format

To leverage instruction fine-tuning, your prompts should be enclosed with `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence token id, while subsequent instructions should not. Assistant generations conclude with an end-of-sentence token id.

### Example

```plaintext
text = "[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"
"[INST] Do you have mayonnaise recipes? [/INST]"
```

### Applying the Chat Template

This format can be applied with the `apply_chat_template()` method from the `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to move the inputs onto

# Load the model and tokenizer; device_map="auto" places the weights,
# so no separate model.to(device) call is needed (and calling it on a
# device-mapped model can raise an error)
model = AutoModelForCausalLM.from_pretrained(
    "cognitivecomputations/DeepMixtral-8x7b-Instruct",
    trust_remote_code=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/DeepMixtral-8x7b-Instruct")

# Define the conversation messages
messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
    {"role": "user", "content": "Do you have mayonnaise recipes?"},
]

# Apply the chat template and move the input ids to the target device
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")
model_inputs = encodeds.to(device)

# Generate a response
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```

Special Thanks: Eric Hartford, and Fernando Neto.

- Lucas Atkins (Crystalcareai)
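
### Inspecting the Rendered Prompt

To see exactly what string the chat template produces, and confirm the `[INST]`/BOS layout described under Instruction Format, the template can be rendered without tokenizing. This is a minimal sketch; it assumes the tokenizer ships a Mixtral-style chat template.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/DeepMixtral-8x7b-Instruct")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
]

# tokenize=False returns the raw prompt string, making the [INST] wrapping
# and any special tokens visible before anything is sent to the model
text = tokenizer.apply_chat_template(messages, tokenize=False)
print(text)
```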
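
### Loading in 4-bit

A ~47B-parameter MoE in half precision needs on the order of 90 GB of memory, so a 4-bit quantized load is a common fallback on smaller GPUs. This is a minimal sketch using `BitsAndBytesConfig` from `transformers`; the quantization settings below are generic `bitsandbytes` defaults, not values validated against this checkpoint.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Generic 4-bit settings; illustrative only, not tuned for this checkpoint
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "cognitivecomputations/DeepMixtral-8x7b-Instruct",
    trust_remote_code=True,
    device_map="auto",
    quantization_config=bnb_config,
)
tokenizer = AutoTokenizer.from_pretrained("cognitivecomputations/DeepMixtral-8x7b-Instruct")

# Usage is identical to the full-precision example above
```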
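
### Checking the Expert Configuration

As referenced in the bullets above, the 2-experts-per-token routing can be spot-checked from the model config without downloading the weights. A minimal sketch: the attribute name `num_experts_per_tok` is an assumption based on the Mixtral and DeepSeek-MoE reference configurations, and may be named differently in this repository.

```python
from transformers import AutoConfig

# Load the config only; no weights are needed for this check
config = AutoConfig.from_pretrained(
    "cognitivecomputations/DeepMixtral-8x7b-Instruct",
    trust_remote_code=True,
)

# num_experts_per_tok is an assumed attribute name (used by the Mixtral and
# DeepSeek-MoE reference configs); adjust if this repo names it differently
print(getattr(config, "num_experts_per_tok", None))  # expected: 2
```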