
Pure quantizations of Mistral-7B-Instruct-v0.3 for mistral.java.

In the wild, Q8_0 quantizations are fine, but Q4_0 quantizations are rarely pure: typically the `output.weight` tensor is quantized with Q6_K instead of Q4_0.
A pure Q4_0 quantization can be generated from a high-precision (F32, F16, BFLOAT16) .gguf source with the `quantize` utility from llama.cpp as follows:

```
./quantize --pure ./Mistral-7B-Instruct-v0.3-F32.gguf ./Mistral-7B-Instruct-v0.3-Q4_0.gguf Q4_0
```
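To verify that the resulting file is pure, you can list the ggml type of every tensor it contains. The sketch below is a minimal, hypothetical GGUF inspector in Java (it is not part of mistral.java or llama.cpp); it assumes the little-endian GGUF v2/v3 layout and the ggml type ids from `ggml.h`, and simply skips the metadata key/value section to reach the tensor descriptors:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical helper: prints the ggml type of every tensor in a .gguf file,
// so a "pure" quantization can be checked at a glance.
// Assumes the little-endian GGUF v2/v3 layout; not a complete GGUF reader.
public final class GgufTensorTypes {
    // Subset of ggml type ids as defined in ggml.h (assumption).
    static String typeName(int t) {
        return switch (t) {
            case 0 -> "F32"; case 1 -> "F16"; case 2 -> "Q4_0"; case 3 -> "Q4_1";
            case 6 -> "Q5_0"; case 7 -> "Q5_1"; case 8 -> "Q8_0"; case 14 -> "Q6_K";
            default -> "type#" + t;
        };
    }

    public static void main(String[] args) throws IOException {
        ByteBuffer buf;
        try (FileChannel ch = FileChannel.open(Path.of(args[0]), StandardOpenOption.READ)) {
            // The header, metadata and tensor descriptors all sit at the start of the file.
            buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, Math.min(ch.size(), Integer.MAX_VALUE));
        }
        buf.order(ByteOrder.LITTLE_ENDIAN);
        if (buf.getInt() != 0x46554747) throw new IOException("not a GGUF file"); // "GGUF"
        int version = buf.getInt();
        if (version < 2) throw new IOException("GGUF v1 (32-bit lengths) not handled here");
        long tensorCount = buf.getLong();
        long kvCount = buf.getLong();
        // Skip the metadata key/value pairs to reach the tensor descriptors.
        for (long i = 0; i < kvCount; i++) { readString(buf); skipValue(buf, buf.getInt()); }
        for (long i = 0; i < tensorCount; i++) {
            String name = readString(buf);
            int nDims = buf.getInt();
            for (int d = 0; d < nDims; d++) buf.getLong(); // dimensions, unused here
            int ggmlType = buf.getInt();
            buf.getLong(); // data offset, unused here
            System.out.printf("%-40s %s%n", name, typeName(ggmlType));
        }
    }

    static String readString(ByteBuffer buf) {
        byte[] bytes = new byte[(int) buf.getLong()];
        buf.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    static void skipValue(ByteBuffer buf, int type) {
        switch (type) {
            case 0, 1, 7 -> buf.position(buf.position() + 1);    // u8, i8, bool
            case 2, 3 -> buf.position(buf.position() + 2);       // u16, i16
            case 4, 5, 6 -> buf.position(buf.position() + 4);    // u32, i32, f32
            case 10, 11, 12 -> buf.position(buf.position() + 8); // u64, i64, f64
            case 8 -> readString(buf);                           // string
            case 9 -> {                                          // array: elem type, count, elems
                int elemType = buf.getInt();
                long n = buf.getLong();
                for (long i = 0; i < n; i++) skipValue(buf, elemType);
            }
            default -> throw new IllegalStateException("unknown kv type " + type);
        }
    }
}
```

In a pure Q4_0 file, every quantized tensor should report Q4_0. The `gguf-dump` script that ships with llama.cpp's gguf-py package prints the same information if you prefer not to roll your own reader.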

Original model: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3

**Note:** this model does not support a system prompt.
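Instructions therefore go directly into the user turn. Below is a minimal sketch of the commonly documented `[INST]`-style Mistral chat format; the authoritative template is defined by the model's tokenizer configuration, so treat this as an approximation rather than an exact mistral.java API:

```java
// Sketch of the Mistral instruct prompt layout (no system-prompt slot).
// Assumption: the [INST] ... [/INST] wrapping matches Mistral's documented
// chat format; the BOS token (<s>) is normally prepended by the tokenizer.
public final class PromptFormat {
    static String userTurn(String message) {
        return "[INST] " + message + " [/INST]";
    }

    public static void main(String[] args) {
        System.out.println(userTurn("Write a haiku about quantization."));
        // -> [INST] Write a haiku about quantization. [/INST]
    }
}
```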

The Mistral-7B-Instruct-v0.3 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.3.
Mistral-7B-v0.3 has the following changes compared to Mistral-7B-v0.2:

- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling