Mixtral-6x7B-Instruct-v0.1 (bfloat16)

The Mixtral-6x7B-Instruct-v0.1 model is a derivative of mistralai/Mixtral-8x7B-Instruct-v0.1, created by trimming the original model down to six experts per layer: only experts 0, 2, 4, 5, 6, and 7 are retained in each layer.

The trimming process was facilitated by the Mixtral-Expert-Trimmer tool, developed specifically for this purpose.
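The general idea behind this kind of expert trimming can be sketched as a transformation over the model's state dict: per-expert weights for the kept experts are renumbered contiguously, the router gate loses the rows of the dropped experts, and all shared weights pass through unchanged. The sketch below is a hypothetical, framework-free illustration using the Hugging Face Mixtral parameter naming (`block_sparse_moe.experts.N` and `block_sparse_moe.gate.weight`); it is not the actual implementation of the Mixtral-Expert-Trimmer tool.

```python
# Hypothetical sketch of expert trimming for a Mixtral-style MoE checkpoint.
# Keeps experts 0, 2, 4, 5, 6, 7 per layer and renumbers them 0..5.
# Parameter names follow the Hugging Face Mixtral layout; the real
# Mixtral-Expert-Trimmer tool may work differently.
import re

KEPT_EXPERTS = [0, 2, 4, 5, 6, 7]  # expert indices retained in each layer
EXPERT_KEY = re.compile(r"(.*\.block_sparse_moe\.experts\.)(\d+)(\..*)")

def trim_experts(state_dict, kept=KEPT_EXPERTS):
    """Drop unwanted experts and renumber the kept ones contiguously."""
    new_index = {old: new for new, old in enumerate(kept)}
    trimmed = {}
    for name, tensor in state_dict.items():
        m = EXPERT_KEY.match(name)
        if m:  # per-expert weight: keep and renumber, or drop
            old = int(m.group(2))
            if old in new_index:
                new_name = f"{m.group(1)}{new_index[old]}{m.group(3)}"
                trimmed[new_name] = tensor
        elif name.endswith("block_sparse_moe.gate.weight"):
            # The router gate has one output row per expert; keep only the
            # rows of the retained experts. With torch tensors this would be
            # tensor[kept]; rows are listed here for a framework-free sketch.
            trimmed[name] = [tensor[i] for i in kept]
        else:
            trimmed[name] = tensor  # shared (non-MoE) weight: unchanged
    return trimmed
```

After trimming, the model config's expert count (`num_local_experts` in the Hugging Face Mixtral config) would also need to be updated from 8 to 6 so the architecture matches the reduced checkpoint.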

The model is still in the testing phase; it has not yet been verified how well the trimmed model works.

License

The Mixtral-6x7B-Instruct-v0.1 model is open-source and licensed under the Apache 2.0 License. For more information, please refer to the LICENSE file.

Feeling Generous? 😊

Eager to buy me a $2 cup of coffee or iced tea? 🍡☕ Here is the link: https://ko-fi.com/drnicefellow. Please add a note about which one you want me to drink!

Model size: 35.4B parameters
Tensor type: BF16 (safetensors)
