---
license: apache-2.0
tags:
- math
- mistral
- llm
- gguf
- mathstral
- java
- mistral.java
---

# Pure quantizations of `Mathstral-7B-v0.1` for [mistral.java](https://github.com/mukel/mistral.java).

In the wild, Q8_0 quantizations are fine, but Q4_0 quantizations are rarely pure, e.g. the `output.weight` tensor is usually quantized with Q6_K instead of Q4_0.
A pure Q4_0 quantization can be generated from a high-precision (F32, F16, BFLOAT16) .gguf source with the `llama-quantize` utility from llama.cpp as follows:

```
./llama-quantize --pure ./Mathstral-7B-v0.1-F32.gguf ./Mathstral-7B-v0.1-Q4_0.gguf Q4_0
```
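
Whether a file is actually pure can be verified without extra tools: GGUF stores a tensor-info table (name, shape, ggml type, data offset) right after the metadata section. Below is a standalone Java sketch, not part of mistral.java, that follows the GGUF v3 layout and histograms the tensor types. On a pure Q4_0 file you should see only Q4_0 (type 2), plus F32 (type 0) for the 1-D norm tensors, which `llama-quantize` leaves unquantized even with `--pure`.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Map;
import java.util.TreeMap;

public final class GGUFPurityCheck {

    // GGUF strings are a uint64 length followed by UTF-8 bytes.
    static String readString(ByteBuffer b) {
        byte[] bytes = new byte[(int) b.getLong()];
        b.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    // Skips a metadata value of the given GGUF value type.
    static void skipValue(ByteBuffer b, int type) {
        switch (type) {
            case 0, 1, 7 -> b.position(b.position() + 1);            // uint8, int8, bool
            case 2, 3 -> b.position(b.position() + 2);               // uint16, int16
            case 4, 5, 6 -> b.position(b.position() + 4);            // uint32, int32, float32
            case 10, 11, 12 -> b.position(b.position() + 8);         // uint64, int64, float64
            case 8 -> b.position(b.position() + (int) b.getLong()); // string
            case 9 -> {                                              // array: elem type, count, elems
                int elemType = b.getInt();
                long count = b.getLong();
                for (long i = 0; i < count; i++) {
                    skipValue(b, elemType);
                }
            }
            default -> throw new IllegalStateException("unknown GGUF value type " + type);
        }
    }

    public static void main(String[] args) throws IOException {
        try (FileChannel ch = FileChannel.open(Path.of(args[0]), StandardOpenOption.READ)) {
            // Header, metadata and tensor infos all sit at the front of the file;
            // 256 MiB is a generous upper bound for a 7B model's metadata.
            ByteBuffer b = ch.map(FileChannel.MapMode.READ_ONLY, 0, Math.min(ch.size(), 256L << 20))
                    .order(ByteOrder.LITTLE_ENDIAN);
            if (b.getInt() != 0x46554747) { // "GGUF" magic, little-endian
                throw new IllegalStateException("not a GGUF file");
            }
            int version = b.getInt();
            long tensorCount = b.getLong();
            long kvCount = b.getLong();
            for (long i = 0; i < kvCount; i++) { // skip all metadata key/value pairs
                readString(b);                   // key
                skipValue(b, b.getInt());        // typed value
            }
            Map<Integer, Integer> typeHistogram = new TreeMap<>();
            for (long i = 0; i < tensorCount; i++) {
                readString(b);                               // tensor name
                int nDims = b.getInt();
                for (int d = 0; d < nDims; d++) b.getLong(); // dimensions
                int ggmlType = b.getInt();                   // 0 = F32, 2 = Q4_0, 8 = Q8_0, 14 = Q6_K, ...
                b.getLong();                                 // data offset
                typeHistogram.merge(ggmlType, 1, Integer::sum);
            }
            System.out.println("GGUF v" + version + " tensor types {ggml type -> count}: " + typeHistogram);
        }
    }
}
```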

Original model: [https://huggingface.co/mistralai/mathstral-7B-v0.1](https://huggingface.co/mistralai/mathstral-7B-v0.1)

**Note that this model does not support a system prompt.**
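
Since the Mistral instruct template has no system slot, a common workaround is to fold any system-style instructions into the first user turn. A minimal sketch of that workaround (the helper is hypothetical; in chat mode, runners such as mistral.java typically apply the template for you via the tokenizer):

```java
public final class MathstralPrompt {
    // The Mistral instruct template has no system role, so system-style
    // instructions must travel inside the first user turn.
    static String formatFirstTurn(String instructions, String userMessage) {
        String user = (instructions == null || instructions.isBlank())
                ? userMessage
                : instructions + "\n\n" + userMessage; // fold "system" text into the user turn
        // <s> is the BOS token; real runners add it via the tokenizer,
        // not as literal text.
        return "<s>[INST] " + user + " [/INST]";
    }

    public static void main(String[] args) {
        System.out.println(formatFirstTurn(
                "Answer with a single number.",
                "What is the derivative of x^2 at x = 3?"));
    }
}
```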

Mathstral 7B is a model specializing in mathematical and scientific tasks, based on Mistral 7B.
You can read more in the [official blog post](https://mistral.ai/news/mathstral/).