# Mixtral 7b 8 Expert

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62e3b6ab0c2a907c388e4965/6m3e2d2BNXDjy6_qHd2LT.png)

This is a preliminary HuggingFace implementation of the newly released MoE model by Mistral AI. Make sure to load it with `trust_remote_code=True`.
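
For example, here is a minimal loading sketch. The repository id below is an assumption; substitute the actual model path if it differs.

```python
# Minimal sketch for loading the model with trust_remote_code=True.
# "DiscoResearch/mixtral-7b-8expert" is an assumed repo id; replace it with the actual path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DiscoResearch/mixtral-7b-8expert"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,      # required: the modeling code lives in the repo, not in transformers
    torch_dtype=torch.bfloat16,  # optional: reduce memory footprint
    device_map="auto",           # optional: spread layers across available GPUs
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```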

Thanks to @dzhulgakov for his early implementation ([llama-mistral](https://github.com/dzhulgakov/llama-mistral)) that helped me find a working setup.

Come chat about this in our [Disco(rd)](https://discord.gg/S8W8B5nz3v)! :)