Update README.md
README.md
CHANGED
@@ -3,104 +3,53 @@ base_model: mistralai/mathstral-7B-v0.1
 license: apache-2.0
 pipeline_tag: text-generation
 quantized_by: bartowski
+lm_studio:
+  param_count: 7b
+  use_case: math
+  release_date: 16-07-2024
+  model_creator: mistralai
+  prompt_template: Mistral Instruct
+  base_model: mistral
+  original_repo: mistralai/mathstral-7B-v0.1
 ---
-```
-<s>[INST]
-{system_prompt}
-<</SYS>>
-
-{prompt} [/INST] </s>
-```
-
-## Download a file (not the whole branch) from below:
-
-| Filename | Quant type | File Size | Split | Description |
-| -------- | ---------- | --------- | ----- | ----------- |
-| [mathstral-7B-v0.1-f32.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-f32.gguf) | f32 | 28.99GB | false | Full F32 weights. |
-| [mathstral-7B-v0.1-Q8_0.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q8_0.gguf) | Q8_0 | 7.70GB | false | Extremely high quality, generally unneeded but max available quant. |
-| [mathstral-7B-v0.1-Q6_K_L.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q6_K_L.gguf) | Q6_K_L | 6.01GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
-| [mathstral-7B-v0.1-Q6_K.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q6_K.gguf) | Q6_K | 5.95GB | false | Very high quality, near perfect, *recommended*. |
-| [mathstral-7B-v0.1-Q5_K_L.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q5_K_L.gguf) | Q5_K_L | 5.22GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
-| [mathstral-7B-v0.1-Q5_K_M.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q5_K_M.gguf) | Q5_K_M | 5.14GB | false | High quality, *recommended*. |
-| [mathstral-7B-v0.1-Q5_K_S.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q5_K_S.gguf) | Q5_K_S | 5.00GB | false | High quality, *recommended*. |
-| [mathstral-7B-v0.1-Q4_K_L.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q4_K_L.gguf) | Q4_K_L | 4.47GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
-| [mathstral-7B-v0.1-Q4_K_M.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q4_K_M.gguf) | Q4_K_M | 4.37GB | false | Good quality, default size for most use cases, *recommended*. |
-| [mathstral-7B-v0.1-Q4_K_S.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q4_K_S.gguf) | Q4_K_S | 4.14GB | false | Slightly lower quality with more space savings, *recommended*. |
-| [mathstral-7B-v0.1-Q3_K_XL.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q3_K_XL.gguf) | Q3_K_XL | 3.94GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
-| [mathstral-7B-v0.1-IQ4_XS.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-IQ4_XS.gguf) | IQ4_XS | 3.91GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
-| [mathstral-7B-v0.1-Q3_K_L.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q3_K_L.gguf) | Q3_K_L | 3.83GB | false | Lower quality but usable, good for low RAM availability. |
-| [mathstral-7B-v0.1-Q3_K_M.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q3_K_M.gguf) | Q3_K_M | 3.52GB | false | Low quality. |
-| [mathstral-7B-v0.1-IQ3_M.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-IQ3_M.gguf) | IQ3_M | 3.29GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
-| [mathstral-7B-v0.1-Q3_K_S.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q3_K_S.gguf) | Q3_K_S | 3.17GB | false | Low quality, not recommended. |
-| [mathstral-7B-v0.1-IQ3_XS.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-IQ3_XS.gguf) | IQ3_XS | 3.02GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
-| [mathstral-7B-v0.1-Q2_K_L.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q2_K_L.gguf) | Q2_K_L | 2.85GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
-| [mathstral-7B-v0.1-Q2_K.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q2_K.gguf) | Q2_K | 2.72GB | false | Very low quality but surprisingly usable. |
-| [mathstral-7B-v0.1-IQ2_M.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-IQ2_M.gguf) | IQ2_M | 2.50GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
-
-## Credits
-
-Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
-
-Thank you ZeroWw for the inspiration to experiment with embed/output
-
-## Downloading using huggingface-cli
-
-First, make sure you have huggingface-cli installed:
-
-```
-pip install -U "huggingface_hub[cli]"
-```
-
-Then, you can target the specific file you want:
-
-```
-huggingface-cli download bartowski/mathstral-7B-v0.1-GGUF --include "mathstral-7B-v0.1-Q4_K_M.gguf" --local-dir ./
-```
-
-If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:
-
-```
-huggingface-cli download bartowski/mathstral-7B-v0.1-GGUF --include "mathstral-7B-v0.1-Q8_0.gguf/*" --local-dir mathstral-7B-v0.1-Q8_0
-```
-
-You can either specify a new local-dir (mathstral-7B-v0.1-Q8_0) or download them all in place (./)
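The same downloads can also be scripted from Python with the `huggingface_hub` package that ships with the CLI. A minimal sketch, assuming the Q4_K_M and Q8_0 files from the table above (swap in whichever quant you picked):

```
from huggingface_hub import hf_hub_download, snapshot_download

# Single file: equivalent to the first huggingface-cli command above.
path = hf_hub_download(
    repo_id="bartowski/mathstral-7B-v0.1-GGUF",
    filename="mathstral-7B-v0.1-Q4_K_M.gguf",
    local_dir=".",  # same effect as --local-dir ./
)
print(f"Downloaded to {path}")

# Split files: equivalent to the second command, matching everything
# under the mathstral-7B-v0.1-Q8_0.gguf/ prefix.
snapshot_download(
    repo_id="bartowski/mathstral-7B-v0.1-GGUF",
    allow_patterns=["mathstral-7B-v0.1-Q8_0.gguf/*"],
    local_dir="mathstral-7B-v0.1-Q8_0",
)
```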
-
-## Which file should I choose?
-
-A great write-up with charts showing various performances is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
-
-The first thing to figure out is how big a model you can run. To do this, you'll need to know how much RAM and/or VRAM you have.
-
-If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
-
-If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
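As a rough sketch of that rule of thumb, the hypothetical helper below uses the file sizes from the table above and a fixed 1.5GB of headroom (the midpoint of the 1-2GB guidance):

```
# Hypothetical helper illustrating the sizing rule above: pick the largest
# quant whose file size fits under your available (V)RAM minus ~1.5GB headroom.
QUANT_SIZES_GB = {
    "f32": 28.99, "Q8_0": 7.70, "Q6_K_L": 6.01, "Q6_K": 5.95,
    "Q5_K_L": 5.22, "Q5_K_M": 5.14, "Q5_K_S": 5.00, "Q4_K_L": 4.47,
    "Q4_K_M": 4.37, "Q4_K_S": 4.14, "Q3_K_XL": 3.94, "IQ4_XS": 3.91,
    "Q3_K_L": 3.83, "Q3_K_M": 3.52, "IQ3_M": 3.29, "Q3_K_S": 3.17,
    "IQ3_XS": 3.02, "Q2_K_L": 2.85, "Q2_K": 2.72, "IQ2_M": 2.50,
}

def pick_quant(available_gb, headroom_gb=1.5):
    """Return the largest quant that fits, or None if nothing does."""
    budget = available_gb - headroom_gb
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= budget}
    return max(fitting, key=fitting.get) if fitting else None

print(pick_quant(8.0))   # 8GB GPU -> 'Q6_K_L'
print(pick_quant(24.0))  # 24GB of RAM + VRAM combined -> 'Q8_0'
```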
+## 💫 Community Model> Mathstral 7b v0.1 by Mistral AI
+
+*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
+
+**Model creator:** [MistralAI](https://huggingface.co/mistralai)<br>
+**Original model**: [mathstral-7B-v0.1](https://huggingface.co/mistralai/mathstral-7B-v0.1)<br>
+**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3278](https://github.com/ggerganov/llama.cpp/releases/tag/b3278)<br>
+
+## Model Summary:
+Mathstral is a Mistral-family model fine-tuned specifically for advanced mathematical problems requiring complex, multi-step logical reasoning.<br>
+This model achieves state-of-the-art reasoning compared to similarly sized models across a range of industry-standard benchmarks.
+
+## Prompt template:
+
+Choose the `Mistral Instruct` preset in LM Studio.
+
+Under the hood, the model will see a prompt that's formatted like so:
+
+```
+<s>[INST] {prompt} [/INST]</s>
+```
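LM Studio applies that preset for you. If you run the GGUF elsewhere, you apply the template yourself; here is a minimal sketch, assuming the `llama-cpp-python` bindings (one common way to run GGUF files, not something this README prescribes) and the Q4_K_M quant in the working directory:

```
# Minimal sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python) and the Q4_K_M quant was downloaded earlier.
from llama_cpp import Llama

# n_ctx matches the 32768 context length noted in Technical Details below.
llm = Llama(model_path="mathstral-7B-v0.1-Q4_K_M.gguf", n_ctx=32768)

# The tokenizer adds the leading <s> (BOS) itself, and </s> is the stop token
# the model emits, so only the [INST] ... [/INST] wrapper is spelled out here.
prompt = "[INST] Prove that the square root of 2 is irrational. [/INST]"
out = llm(prompt, max_tokens=512, stop=["</s>"])
print(out["choices"][0]["text"])
```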
+
+## Technical Details
+
+Released in collaboration with [Project Numina](https://projectnumina.ai/).
+
+This model achieves 56.6% on MATH and 63.47% on MMLU.
+
+Context length: 32768
+
+For more details, check out the Mistral blog post [here](https://mistral.ai/news/mathstral/)
+
+## Special thanks
+
+🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
+
+🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) for his dataset (linked [here](https://github.com/ggerganov/llama.cpp/discussions/5263)) that was used for calculating the imatrix for the IQ1_M and IQ2_XS quants, which makes them usable even at their tiny size!
+
+## Disclaimers
+
+LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.