---
license: apache-2.0
language:
  - fr
  - it
  - de
  - es
  - en
---

# Model Card for Mixtral-8x7B

The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts model. Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
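To give an intuition for the architecture, here is a minimal, illustrative sketch of a sparse mixture-of-experts feed-forward layer with top-2 routing (Mixtral routes each token to 2 of 8 experts). This is not the actual Mixtral implementation; the class name, hidden sizes, and expert MLP shape are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoELayer(nn.Module):
    """Toy sparse MoE layer: a router picks the top-k experts per token
    and mixes their outputs with the normalized routing weights."""

    def __init__(self, dim: int, hidden: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(dim, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.SiLU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Each token is routed to its top-k experts only,
        # so only a fraction of the parameters is active per token.
        logits = self.router(x)                         # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)  # per-token expert choices
        weights = F.softmax(weights, dim=-1)            # normalize over chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                   # tokens whose k-th pick is expert e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

# Example: 10 tokens with model dimension 32
layer = SparseMoELayer(dim=32, hidden=128)
print(layer(torch.randn(10, 32)).shape)  # torch.Size([10, 32])
```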

For full details of this model, please read our [release blog post](https://mistral.ai/news/mixtral-of-experts/).

## Warning

This repo contains weights that are compatible with vLLM serving of the model. Please note that the model cannot (yet) be instantiated with Hugging Face transformers.
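As a rough sketch of what serving these weights with vLLM's offline inference API could look like, assuming a vLLM version with Mixtral support and sufficient GPU memory (the `tensor_parallel_size` and sampling settings below are illustrative, not recommendations):

```python
from vllm import LLM, SamplingParams

# Load the weights from this repo; shard across 2 GPUs (illustrative).
llm = LLM(model="mistralai/Mixtral-8x7B-v0.1", tensor_parallel_size=2)

params = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```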

## Notice

Mixtral-8x7B is a pretrained base model and therefore does not have any moderation mechanisms.

## The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Louis Ternon, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Théophile Gervet, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.