
Mistral-7B-Instruct-v0.2-GGUF

Description

The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.2.

Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1:

  • 32k context window (vs 8k context in v0.1)
  • Rope-theta = 1e6
  • No Sliding-Window Attention
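The rope-theta value listed above is the base of the RoPE (rotary position embedding) frequencies. As a rough, self-contained sketch (the head dimension and formula layout here are illustrative assumptions, not taken from the model's actual implementation), raising the base from the common default of 1e4 to 1e6 lowers the smallest rotation frequencies, so distant positions are distinguished more gradually, which suits the longer 32k context window:

```python
import math

def rope_frequencies(theta: float, head_dim: int = 128) -> list[float]:
    # RoPE assigns each dimension pair i an angular frequency theta^(-2i/d).
    return [theta ** (-2 * i / head_dim) for i in range(head_dim // 2)]

# Illustrative comparison: common default base vs. the v0.2 base of 1e6.
freqs_default = rope_frequencies(1e4)
freqs_v02 = rope_frequencies(1e6)

# The highest frequency is 1.0 in both cases, but the lowest frequency is
# much smaller with theta = 1e6, stretching the longest positional wavelength.
print(freqs_default[0], freqs_v02[0])    # both 1.0
print(freqs_default[-1] > freqs_v02[-1])  # True
```

The longest wavelength is 2*pi/freqs[-1], so a larger theta directly increases the positional range the encoding can represent without wrapping.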

For full details of this model, please read our paper and release blog post.

