---
license: apache-2.0
language:
- fr
- it
- de
- es
- en
inference: false
---

# Model Card for Mixtral-8x7B

The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.

For full details of this model, please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).

## Instruction format

This format must be strictly respected, otherwise the model will generate sub-optimal outputs. The template used to build a prompt for the Instruct model is defined as follows:

```
<s> [INST] Instruction [/INST] Model answer</s> [INST] Follow-up instruction [/INST]
```

Note that `<s>` and `</s>` are special tokens for beginning of string (BOS) and end of string (EOS), while `[INST]` and `[/INST]` are regular strings.

## Run the model
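Below is a minimal sketch of loading the model and generating a reply with the Hugging Face `transformers` library. The repository id `mistralai/Mixtral-8x7B-Instruct-v0.1` and the use of `apply_chat_template` to produce the `[INST] ... [/INST]` format are assumptions based on the standard `transformers` API, not instructions taken from this card.

```python
# Sketch: load the model and run a single chat turn with transformers.
# Assumes the repository id "mistralai/Mixtral-8x7B-Instruct-v0.1" and that
# the tokenizer's chat template emits the [INST] ... [/INST] format above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The chat template adds the <s> BOS token and wraps the user turn in
# [INST] ... [/INST], matching the prompt format described earlier.
messages = [
    {"role": "user", "content": "Explain what a Mixture of Experts is in one sentence."},
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

With `device_map="auto"`, the weights are spread across the available devices; loading the full-precision checkpoint requires substantial GPU memory, so lower-precision loading may be preferable in practice.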