
Mixture of Experts

Mixture of Experts (MoE) models can be pretrained with far less compute, which means you can dramatically scale up the model or dataset size on the same compute budget as a dense model. In particular, an MoE model should reach the same quality as its dense counterpart much faster during pretraining. An MoE layer pairs a set of experts with a gate network, or router, that determines which tokens are sent to which expert. For example, the token "More" might be sent to the second expert and the token "Parameters" to the first. A token can also be sent to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs: the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
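
As a minimal sketch of how such a router works (assuming a simple top-k gating scheme; the expert count, hidden size, and top_k below are illustrative values, not this model's actual configuration), a learned linear gate scores each token against every expert and keeps the top-k:

```python
# Minimal top-k routing sketch for a Mixture of Experts layer (plain PyTorch).
# Sizes here are illustrative assumptions, not this model's real configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKRouter(nn.Module):
    def __init__(self, hidden_size: int, num_experts: int, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # The gate is a learned linear layer, trained with the rest of the network.
        self.gate = nn.Linear(hidden_size, num_experts, bias=False)

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (num_tokens, hidden_size)
        logits = self.gate(hidden_states)                     # (num_tokens, num_experts)
        weights, expert_ids = torch.topk(logits, self.top_k)  # pick top-k experts per token
        weights = F.softmax(weights, dim=-1)                  # normalize the routing weights
        return weights, expert_ids

router = TopKRouter(hidden_size=4096, num_experts=2, top_k=2)
tokens = torch.randn(5, 4096)            # 5 token embeddings
weights, expert_ids = router(tokens)
print(expert_ids)                        # which expert(s) each token is routed to
```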

Base Model

mistralai/Mistral-7B-Instruct-v0.2

The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an improved instruct fine-tuned version of Mistral-7B-Instruct-v0.1.

Experts:

Code: codellama/CodeLlama-7b-hf

Code Llama is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 34 billion parameters. This is the base 7B version in the Hugging Face Transformers format, designed for general code synthesis and understanding.

Cybersecurity: WhiteRabbitNeo/WhiteRabbitNeo-7B-v1.5a

WhiteRabbitNeo is a model series that can be used for offensive and defensive cybersecurity.

MoE model - needs ~9 GB of VRAM - will test after downloading a quantized (GGUF) version.
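
For reference, here is one possible way to load and try the model with the Transformers library. This is a hedged sketch: it assumes the merged MoE weights are published in the standard Transformers format and uses 4-bit bitsandbytes quantization to stay near the 9 GB VRAM budget (a GGUF build would instead be run through llama.cpp).

```python
# Sketch: load the model in 4-bit to fit in roughly 9 GB of VRAM.
# Requires the transformers, accelerate, and bitsandbytes packages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "LeroyDyer/Mixtral_Coder_7b"

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```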
