---
license: apache-2.0
language:
  - en
library_name: transformers
tags:
  - biology
  - legal
  - not-for-all-audiences
  - medical
  - chemistry
  - moe
  - merge
  - music
  - Cyber-Series
  - Mixture-Of-Experts
---

Mixture of Experts (MoE) enables models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size within the same compute budget as a dense model. In particular, an MoE model should reach the same quality as its dense counterpart much faster during pretraining. An MoE layer adds a gate network, or router, that determines which tokens are sent to which expert: for example, the token "More" might be sent to the second expert and the token "Parameters" to the first. A token can also be routed to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs; the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
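The sketch below illustrates the routing idea in isolation: a learned linear gate scores every token against every expert and keeps the top-2. It is a minimal illustration only; the layer sizes, expert count, and class names are illustrative assumptions, not this repository's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Top2Router(nn.Module):
    """Illustrative top-2 token router (not this repo's exact implementation)."""

    def __init__(self, d_model: int, n_experts: int, top_k: int = 2):
        super().__init__()
        # The gate is an ordinary linear layer whose weights are learned
        # jointly with the rest of the network during pretraining.
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.top_k = top_k

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (batch, seq_len, d_model)
        logits = self.gate(hidden_states)          # (batch, seq_len, n_experts)
        probs = F.softmax(logits, dim=-1)
        # Each token keeps only its top-k experts; the kept probabilities
        # become the mixing weights for those experts' outputs.
        weights, expert_ids = probs.topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        return weights, expert_ids


# Dummy activations: 1 sequence of 16 tokens, hidden size 4096 (illustrative).
router = Top2Router(d_model=4096, n_experts=8)
tokens = torch.randn(1, 16, 4096)
weights, expert_ids = router(tokens)
print(expert_ids.shape)  # torch.Size([1, 16, 2]) -> two experts chosen per token
```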

**Base Model:** mistralai/Mistral-7B-Instruct-v0.2

The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an improved instruct fine-tuned version of Mistral-7B-Instruct-v0.1.
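A minimal sketch for loading an instruct model of this family with the transformers library. It uses the base model ID named above; substituting this repository's model ID to load the merged checkpoint is left to the reader, and the dtype/device settings are assumptions to adjust for your hardware.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model named above; swap in this repo's model ID for the merged checkpoint.
model_id = "mistralai/Mistral-7B-Instruct-v0.2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumes a GPU with enough memory; adjust as needed
    device_map="auto",          # requires the accelerate package
)

messages = [
    {"role": "user", "content": "Summarise what a Mixture of Experts layer does."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```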