bartowski committed on
Commit 37c653f • 1 Parent(s): 8e5e825

Update README.md

Files changed (1)
  1. README.md +30 -81
README.md CHANGED
@@ -3,104 +3,53 @@ base_model: mistralai/mathstral-7B-v0.1
  license: apache-2.0
  pipeline_tag: text-generation
  quantized_by: bartowski
  ---
-
- ## Llamacpp imatrix Quantizations of mathstral-7B-v0.1
-
- Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b3389">b3389</a> for quantization.
-
- Original model: https://huggingface.co/mistralai/mathstral-7B-v0.1
-
- All quants were made using the imatrix option with the dataset from [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)
-
- ## Prompt format
-
- No chat template is specified, so the default is used. This may be incorrect; check the original model card for details.
-
  ```
- <s>[INST] <<SYS>>
- {system_prompt}
- <</SYS>>
-
- {prompt} [/INST] </s>
- ```
-
- ## Download a file (not the whole branch) from below:
-
- | Filename | Quant type | File Size | Split | Description |
- | -------- | ---------- | --------- | ----- | ----------- |
- | [mathstral-7B-v0.1-f32.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-f32.gguf) | f32 | 28.99GB | false | Full F32 weights. |
- | [mathstral-7B-v0.1-Q8_0.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q8_0.gguf) | Q8_0 | 7.70GB | false | Extremely high quality, generally unneeded but max available quant. |
- | [mathstral-7B-v0.1-Q6_K_L.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q6_K_L.gguf) | Q6_K_L | 6.01GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
- | [mathstral-7B-v0.1-Q6_K.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q6_K.gguf) | Q6_K | 5.95GB | false | Very high quality, near perfect, *recommended*. |
- | [mathstral-7B-v0.1-Q5_K_L.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q5_K_L.gguf) | Q5_K_L | 5.22GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
- | [mathstral-7B-v0.1-Q5_K_M.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q5_K_M.gguf) | Q5_K_M | 5.14GB | false | High quality, *recommended*. |
- | [mathstral-7B-v0.1-Q5_K_S.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q5_K_S.gguf) | Q5_K_S | 5.00GB | false | High quality, *recommended*. |
- | [mathstral-7B-v0.1-Q4_K_L.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q4_K_L.gguf) | Q4_K_L | 4.47GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
- | [mathstral-7B-v0.1-Q4_K_M.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q4_K_M.gguf) | Q4_K_M | 4.37GB | false | Good quality, default size for most use cases, *recommended*. |
- | [mathstral-7B-v0.1-Q4_K_S.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q4_K_S.gguf) | Q4_K_S | 4.14GB | false | Slightly lower quality with more space savings, *recommended*. |
- | [mathstral-7B-v0.1-Q3_K_XL.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q3_K_XL.gguf) | Q3_K_XL | 3.94GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
- | [mathstral-7B-v0.1-IQ4_XS.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-IQ4_XS.gguf) | IQ4_XS | 3.91GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
- | [mathstral-7B-v0.1-Q3_K_L.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q3_K_L.gguf) | Q3_K_L | 3.83GB | false | Lower quality but usable, good for low RAM availability. |
- | [mathstral-7B-v0.1-Q3_K_M.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q3_K_M.gguf) | Q3_K_M | 3.52GB | false | Low quality. |
- | [mathstral-7B-v0.1-IQ3_M.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-IQ3_M.gguf) | IQ3_M | 3.29GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
- | [mathstral-7B-v0.1-Q3_K_S.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q3_K_S.gguf) | Q3_K_S | 3.17GB | false | Low quality, not recommended. |
- | [mathstral-7B-v0.1-IQ3_XS.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-IQ3_XS.gguf) | IQ3_XS | 3.02GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
- | [mathstral-7B-v0.1-Q2_K_L.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q2_K_L.gguf) | Q2_K_L | 2.85GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
- | [mathstral-7B-v0.1-Q2_K.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-Q2_K.gguf) | Q2_K | 2.72GB | false | Very low quality but surprisingly usable. |
- | [mathstral-7B-v0.1-IQ2_M.gguf](https://huggingface.co/bartowski/mathstral-7B-v0.1-GGUF/blob/main/mathstral-7B-v0.1-IQ2_M.gguf) | IQ2_M | 2.50GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
-
- ## Credits
-
- Thank you to kalomaze and Dampf for their assistance in creating the imatrix calibration dataset.
-
- Thank you to ZeroWw for the inspiration to experiment with embed/output weights.
-
- ## Downloading using huggingface-cli
-
- First, make sure you have huggingface-cli installed:
-
- ```
- pip install -U "huggingface_hub[cli]"
- ```
-
- Then, you can target the specific file you want:
-
- ```
- huggingface-cli download bartowski/mathstral-7B-v0.1-GGUF --include "mathstral-7B-v0.1-Q4_K_M.gguf" --local-dir ./
  ```

- If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:
-
- ```
- huggingface-cli download bartowski/mathstral-7B-v0.1-GGUF --include "mathstral-7B-v0.1-Q8_0.gguf/*" --local-dir mathstral-7B-v0.1-Q8_0
- ```
-
- You can either specify a new local-dir (mathstral-7B-v0.1-Q8_0) or download them all in place (./)
-
- ## Which file should I choose?
-
- A great write-up with charts comparing the performance of the various quants is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)
-
- The first thing to figure out is how big a model you can run. To do this, you'll need to know how much RAM and/or VRAM you have.
-
- If you want your model running as FAST as possible, you'll want to fit the whole thing in your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
-
- If you want the absolute maximum quality, add your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.
-
- Next, you'll need to decide whether you want to use an 'I-quant' or a 'K-quant'.
-
- If you don't want to think too much, grab one of the K-quants. These are in the format 'QX_K_X', like Q5_K_M.
-
- If you want to get more into the weeds, you can check out this extremely useful feature chart:
-
- [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
-
- But basically, if you're aiming for below Q4 and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in the format IQX_X, like IQ3_M. These are newer and offer better performance for their size.
-
- The I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs. performance is a tradeoff you'll have to weigh.
-
- The I-quants are *not* compatible with Vulkan, which also supports AMD cards, so if you have an AMD card, double-check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.
-
- Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski

  license: apache-2.0
  pipeline_tag: text-generation
  quantized_by: bartowski
+ lm_studio:
+   param_count: 7b
+   use_case: math
+   release_date: 16-07-2024
+   model_creator: mistralai
+   prompt_template: Mistral Instruct
+   base_model: mistral
+   original_repo: mistralai/mathstral-7B-v0.1
  ---
+ ## 💫 Community Model> Mathstral 7b v0.1 by Mistral AI
+
+ *👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
+
+ **Model creator:** [Mistral AI](https://huggingface.co/mistralai)<br>
+ **Original model**: [mathstral-7B-v0.1](https://huggingface.co/mistralai/mathstral-7B-v0.1)<br>
+ **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3278](https://github.com/ggerganov/llama.cpp/releases/tag/b3278)<br>
+
+ ## Model Summary:
+ Mathstral is a Mistral-family model fine-tuned specifically for advanced mathematical problems requiring complex, multi-step logical reasoning.<br>
+ This model achieves state-of-the-art reasoning performance compared to similarly sized models across a range of industry-standard benchmarks.
+
+ ## Prompt template:
+
+ Choose the `Mistral Instruct` preset in LM Studio.
+
+ Under the hood, the model will see a prompt that's formatted like so:
+
  ```
+ <s>[INST] {prompt} [/INST]</s>
  ```
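+ For readers assembling this prompt by hand outside LM Studio, here is a minimal Python sketch of the template above; the `PROMPT_TEMPLATE` and `build_prompt` names are ours, not an official API, and under the usual Mistral Instruct convention the trailing `</s>` is the end-of-sequence token the model emits rather than text you send.
+
+ ```python
+ # Illustrative sketch: wrap a user message in the Mistral Instruct tags shown above.
+ PROMPT_TEMPLATE = "<s>[INST] {prompt} [/INST]"
+
+ def build_prompt(user_message: str) -> str:
+     """Return the formatted prompt; the model appends its answer, then </s>."""
+     return PROMPT_TEMPLATE.format(prompt=user_message)
+
+ print(build_prompt("Solve for x: 2x + 3 = 11"))
+ # -> <s>[INST] Solve for x: 2x + 3 = 11 [/INST]
+ ```
+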
+ ## Technical Details
+
+ Released in collaboration with [Project Numina](https://projectnumina.ai/).
+
+ This model achieves 56.6% on MATH and 63.47% on MMLU.
+
+ Context length: 32768
+
+ For more details, check out the Mistral blog post [here](https://mistral.ai/news/mathstral/).
+
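+ As a rough, hypothetical example (not from the original card), the sketch below uses the `llama-cpp-python` bindings to load one of these GGUF files at the full 32768-token context; the file name is an assumption, so adjust it to whichever quant you downloaded.
+
+ ```python
+ # Hypothetical usage sketch with llama-cpp-python (pip install llama-cpp-python).
+ from llama_cpp import Llama
+
+ llm = Llama(
+     model_path="mathstral-7B-v0.1-Q4_K_M.gguf",  # assumed local file name
+     n_ctx=32768,  # matches the model's stated context length
+ )
+
+ # The BOS token (<s>) is typically added automatically during tokenization.
+ out = llm("[INST] What is the derivative of x^3? [/INST]", max_tokens=256)
+ print(out["choices"][0]["text"])
+ ```
+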
+ ## Special thanks
+
+ 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
+
+ 🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) for his dataset (linked [here](https://github.com/ggerganov/llama.cpp/discussions/5263)) that was used for calculating the imatrix for the IQ1_M and IQ2_XS quants, which makes them usable even at their tiny size!
+
+ ## Disclaimers
+
+ LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, virus-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.