---
base_model: IEITYuan/Yuan2-M32-hf
inference: false
library_name: gguf
license: apache-2.0
pipeline_tag: text-generation
quantized_by: legraphista
tags:
- quantized
- GGUF
- imatrix
- quantization
- imat
- static
- 16bit
- 8bit
- 6bit
- 5bit
- 4bit
- 3bit
- 2bit
- 1bit
---
# Yuan2-M32-hf-IMat-GGUF
_Llama.cpp imatrix quantization of IEITYuan/Yuan2-M32-hf_
Original Model: [IEITYuan/Yuan2-M32-hf](https://huggingface.co/IEITYuan/Yuan2-M32-hf)
Original dtype: `BF16` (`bfloat16`)
Quantized by: [legraphista](https://huggingface.co/legraphista)
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw)
- [Files](#files)
- [IMatrix](#imatrix)
- [Common Quants](#common-quants)
- [All Quants](#all-quants)
- [Downloading using huggingface-cli](#downloading-using-huggingface-cli)
- [Inference](#inference)
- [Llama.cpp](#llama-cpp)
- [FAQ](#faq)
- [Why is the IMatrix not applied everywhere?](#why-is-the-imatrix-not-applied-everywhere)
- [How do I merge a split GGUF?](#how-do-i-merge-a-split-gguf)
---
## Files
### IMatrix
Status: ✅ Available
Link: [here](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/imatrix.dat)
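For reference, an imatrix file like this one is generally produced with llama.cpp's `imatrix` tool by running the full-precision model over the calibration dataset linked above. A minimal sketch, assuming a merged FP16 GGUF and the calibration file sit in the working directory (paths are illustrative):
```
# compute the importance matrix from calibration data
llama.cpp/imatrix \
    -m Yuan2-M32-hf.FP16.gguf \
    -f imatrix.calibration.medium.raw \
    -o imatrix.dat
```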
### Common Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Yuan2-M32-hf.Q8_0.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.Q8_0.gguf) | Q8_0 | 42.93GB | ✅ Available | ⚪ Static | 📦 No |
| [Yuan2-M32-hf.Q6_K.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.Q6_K.gguf) | Q6_K | 33.23GB | ✅ Available | ⚪ Static | 📦 No |
| [Yuan2-M32-hf.Q4_K.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.Q4_K.gguf) | Q4_K | 24.68GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Yuan2-M32-hf.Q3_K.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.Q3_K.gguf) | Q3_K | 19.54GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Yuan2-M32-hf.Q2_K.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.Q2_K.gguf) | Q2_K | 15.02GB | ✅ Available | 🟢 IMatrix | 📦 No |
### All Quants
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [Yuan2-M32-hf.FP16/*](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/tree/main/Yuan2-M32-hf.FP16) | F16 | 80.12GB | ✅ Available | ⚪ Static | ✂ Yes |
| [Yuan2-M32-hf.Q8_0.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.Q8_0.gguf) | Q8_0 | 42.93GB | ✅ Available | ⚪ Static | 📦 No |
| [Yuan2-M32-hf.Q6_K.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.Q6_K.gguf) | Q6_K | 33.23GB | ✅ Available | ⚪ Static | 📦 No |
| [Yuan2-M32-hf.Q5_K.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.Q5_K.gguf) | Q5_K | 28.82GB | ✅ Available | ⚪ Static | 📦 No |
| [Yuan2-M32-hf.Q5_K_S.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.Q5_K_S.gguf) | Q5_K_S | 27.96GB | ✅ Available | ⚪ Static | 📦 No |
| [Yuan2-M32-hf.Q4_K.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.Q4_K.gguf) | Q4_K | 24.68GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Yuan2-M32-hf.Q4_K_S.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.Q4_K_S.gguf) | Q4_K_S | 23.19GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Yuan2-M32-hf.IQ4_NL.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.IQ4_NL.gguf) | IQ4_NL | 22.99GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Yuan2-M32-hf.Q3_K.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.Q3_K.gguf) | Q3_K | 19.54GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Yuan2-M32-hf.Q3_K_L.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.Q3_K_L.gguf) | Q3_K_L | 21.14GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Yuan2-M32-hf.Q3_K_S.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.Q3_K_S.gguf) | Q3_K_S | 17.71GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Yuan2-M32-hf.IQ3_XXS.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.IQ3_XXS.gguf) | IQ3_XXS | 15.91GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Yuan2-M32-hf.Q2_K.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.Q2_K.gguf) | Q2_K | 15.02GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Yuan2-M32-hf.Q2_K_S.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.Q2_K_S.gguf) | Q2_K_S | 14.05GB | ✅ Available | 🟢 IMatrix | 📦 No |
| [Yuan2-M32-hf.IQ2_XS.gguf](https://huggingface.co/legraphista/Yuan2-M32-hf-IMat-GGUF/blob/main/Yuan2-M32-hf.IQ2_XS.gguf) | IQ2_XS | 12.21GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Yuan2-M32-hf.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 IMatrix | - |
| Yuan2-M32-hf.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 IMatrix | - |
## Downloading using huggingface-cli
If you do not have huggingface-cli installed:
```
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:
```
huggingface-cli download legraphista/Yuan2-M32-hf-IMat-GGUF --include "Yuan2-M32-hf.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. In order to download them all to a local folder, run:
```
huggingface-cli download legraphista/Yuan2-M32-hf-IMat-GGUF --include "Yuan2-M32-hf.Q8_0/*" --local-dir ./
# see FAQ for merging GGUFs
```
---
## Inference
### Llama.cpp
```
llama.cpp/main -m Yuan2-M32-hf.Q8_0.gguf --color -i -p "prompt here"
```
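If you'd rather serve the model over HTTP than chat interactively, the same GGUF works with llama.cpp's bundled server (a sketch; the host and port flags are optional):
```
llama.cpp/server -m Yuan2-M32-hf.Q8_0.gguf --host 0.0.0.0 --port 8080
```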
---
## FAQ
### Why is the IMatrix not applied everywhere?
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that only the lower quantizations benefit from the imatrix input (as per HellaSwag results).
### How do I merge a split GGUF?
1. Make sure you have `gguf-split` available
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases
- Download the appropriate zip for your system from the latest release
- Unzip the archive and you should be able to find `gguf-split`
2. Locate your GGUF chunks folder (ex: `Yuan2-M32-hf.Q8_0`)
3. Run `gguf-split --merge Yuan2-M32-hf.Q8_0/Yuan2-M32-hf.Q8_0-00001-of-XXXXX.gguf Yuan2-M32-hf.Q8_0.gguf`
- Make sure to point `gguf-split` to the first chunk of the split (see the combined example below).
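Putting it all together with the download command from earlier, a typical session looks like this (the `XXXXX` chunk suffix depends on the upload; check the folder listing for the actual file name):
```
# fetch all chunks, then merge them back into a single GGUF
huggingface-cli download legraphista/Yuan2-M32-hf-IMat-GGUF --include "Yuan2-M32-hf.Q8_0/*" --local-dir ./
gguf-split --merge Yuan2-M32-hf.Q8_0/Yuan2-M32-hf.Q8_0-00001-of-XXXXX.gguf Yuan2-M32-hf.Q8_0.gguf
```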
---
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! |