
perlthoughts/Mistral-7B-Instruct-v0.2-2x7B-MoE AWQ

Model Summary

Mistral-7B-Instruct-v0.2 built as a 2x7B mixture-of-experts model (with only 2 experts), quantized with AWQ.

The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an improved instruct fine-tuned version of Mistral-7B-Instruct-v0.1.

For full details of this model, please read our paper and release blog post.

Instruction format

To leverage instruction fine-tuning, your prompt should be surrounded by [INST] and [/INST] tokens. The very first instruction should begin with a begin-of-sentence (BOS) token id; subsequent instructions should not. The assistant's generation will be ended by the end-of-sentence (EOS) token id.

E.g.

text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"

This format is available as a chat template via the tokenizer's apply_chat_template() method.
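
As a minimal, untested sketch of how that might look (assuming a recent transformers release with AWQ support and the autoawq package installed; the repository id is taken from this page, and the generation settings are illustrative only):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "solidrust/Mixtral-Instruct-v0.2-2x7B-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice."},
    {"role": "user", "content": "Do you have mayonnaise recipes?"},
]

# apply_chat_template wraps each user turn in [INST] ... [/INST]
# and handles the BOS/EOS token placement described above.
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

outputs = model.generate(input_ids, max_new_tokens=256, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))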
