|
--- |
|
base_model: CohereForAI/aya-23-35B |
|
inference: false |
|
language: |
|
- en |
|
- fr |
|
- de |
|
- es |
|
- it |
|
- pt |
|
- ja |
|
- ko |
|
- zh |
|
- ar |
|
- el |
|
- fa |
|
- pl |
|
- id |
|
- cs |
|
- he |
|
- hi |
|
- nl |
|
- ro |
|
- ru |
|
- tr |
|
- uk |
|
- vi |
|
library_name: gguf |
|
license: cc-by-nc-4.0 |
|
pipeline_tag: text-generation |
|
quantized_by: legraphista |
|
tags: |
|
- quantized |
|
- GGUF |
|
- imatrix |
|
- quantization |
|
--- |
|
|
|
# aya-23-35B-IMat-GGUF |
|
_Llama.cpp imatrix quantization of CohereForAI/aya-23-35B_
|
|
|
Original Model: [CohereForAI/aya-23-35B](https://huggingface.co/CohereForAI/aya-23-35B) |
|
Original dtype: `FP16` (`float16`) |
|
Quantized by: llama.cpp [b2998](https://github.com/ggerganov/llama.cpp/releases/tag/b2998) |
|
IMatrix dataset: [here](https://gist.githubusercontent.com/legraphista/d6d93f1a254bcfc58e0af3777eaec41e/raw/d380e7002cea4a51c33fffd47db851942754e7cc/imatrix.calibration.medium.raw) |
|
|
|
## Files |
|
|
|
### IMatrix |
|
Status: ⏳ Processing
|
Link: [here](https://huggingface.co/legraphista/aya-23-35B-IMat-GGUF/blob/main/imatrix.dat) |
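
For context, an importance matrix like the one linked above is typically produced with llama.cpp's `imatrix` tool and then passed to `quantize` when generating the low-bit quants. A minimal sketch, assuming the example binaries from the llama.cpp release referenced above and illustrative file names (not the exact commands used for this repo):

```
# compute the importance matrix from the full-precision GGUF and a calibration text file
./imatrix -m aya-23-35B.FP16.gguf -f imatrix.calibration.medium.raw -o imatrix.dat

# apply it when producing an imatrix-enabled quant (see the tables below)
./quantize --imatrix imatrix.dat aya-23-35B.FP16.gguf aya-23-35B.IQ4_XS.gguf IQ4_XS
```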
|
|
|
### Common Quants |
|
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [aya-23-35B.Q8_0/*](https://huggingface.co/legraphista/aya-23-35B-IMat-GGUF/tree/main/aya-23-35B.Q8_0) | Q8_0 | 37.18GB | ✅ Available | ⚪ No | ✂ Yes |
| [aya-23-35B.Q6_K.gguf](https://huggingface.co/legraphista/aya-23-35B-IMat-GGUF/blob/main/aya-23-35B.Q6_K.gguf) | Q6_K | 28.71GB | ✅ Available | ⚪ No | 📦 No |
| aya-23-35B.Q4_K | Q4_K | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-35B.Q3_K | Q3_K | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-35B.Q2_K | Q2_K | - | ⏳ Processing | 🟢 Yes | - |
|
|
|
|
|
### All Quants |
|
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| -------- | ---------- | --------- | ------ | ------------ | -------- |
| [aya-23-35B.FP16/*](https://huggingface.co/legraphista/aya-23-35B-IMat-GGUF/tree/main/aya-23-35B.FP16) | F16 | 69.97GB | ✅ Available | ⚪ No | ✂ Yes |
| [aya-23-35B.Q5_K.gguf](https://huggingface.co/legraphista/aya-23-35B-IMat-GGUF/blob/main/aya-23-35B.Q5_K.gguf) | Q5_K | 25.01GB | ✅ Available | ⚪ No | 📦 No |
| [aya-23-35B.Q5_K_S.gguf](https://huggingface.co/legraphista/aya-23-35B-IMat-GGUF/blob/main/aya-23-35B.Q5_K_S.gguf) | Q5_K_S | 24.34GB | ✅ Available | ⚪ No | 📦 No |
| aya-23-35B.Q4_K_S | Q4_K_S | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-35B.Q3_K_L | Q3_K_L | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-35B.Q3_K_S | Q3_K_S | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-35B.Q2_K_S | Q2_K_S | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-35B.IQ4_NL | IQ4_NL | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-35B.IQ4_XS | IQ4_XS | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-35B.IQ3_M | IQ3_M | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-35B.IQ3_S | IQ3_S | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-35B.IQ3_XS | IQ3_XS | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-35B.IQ3_XXS | IQ3_XXS | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-35B.IQ2_M | IQ2_M | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-35B.IQ2_S | IQ2_S | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-35B.IQ2_XS | IQ2_XS | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-35B.IQ2_XXS | IQ2_XXS | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-35B.IQ1_M | IQ1_M | - | ⏳ Processing | 🟢 Yes | - |
| aya-23-35B.IQ1_S | IQ1_S | - | ⏳ Processing | 🟢 Yes | - |
|
|
|
|
|
## Downloading using huggingface-cli |
|
First, make sure you have `huggingface-cli` installed:
|
``` |
|
pip install -U "huggingface_hub[cli]" |
|
``` |
|
Then, you can target the specific file you want: |
|
``` |
|
huggingface-cli download legraphista/aya-23-35B-IMat-GGUF --include "aya-23-35B.Q6_K.gguf" --local-dir ./
|
``` |
|
If the model is larger than 50GB, it has been split into multiple files. To download them all into a local folder, run:
|
``` |
|
huggingface-cli download legraphista/aya-23-35B-IMat-GGUF --include "aya-23-35B.Q8_0/*" --local-dir aya-23-35B.Q8_0 |
|
# see the FAQ below for merging split GGUFs
|
``` |
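
Once downloaded, the GGUF can be run with any llama.cpp-based runtime. A minimal sketch using the `main` example binary from the llama.cpp release referenced above (prompt and settings are purely illustrative):

```
# run a short completion against the single-file Q6_K quant
./main -m aya-23-35B.Q6_K.gguf -p "Translate to French: Hello, world!" -n 64
```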
|
|
|
## FAQ |
|
|
|
### Why is the IMatrix not applied everywhere? |
|
According to [this investigation](https://www.reddit.com/r/LocalLLaMA/comments/1993iro/ggufs_quants_can_punch_above_their_weights_now/), it appears that only the lower-bit quantizations benefit from the imatrix input (as measured by HellaSwag results).
|
|
|
### How do I merge a split GGUF? |
|
1. Make sure you have `gguf-split` available |
|
- To get hold of `gguf-split`, navigate to https://github.com/ggerganov/llama.cpp/releases |
|
- Download the appropriate zip for your system from the latest release |
|
- Unzip the archive and you should be able to find `gguf-split` |
|
2. Locate your GGUF chunks folder (ex: `aya-23-35B.Q8_0`) |
|
3. Run `gguf-split --merge aya-23-35B.Q8_0/aya-23-35B.Q8_0-00001-of-XXXXX.gguf aya-23-35B.Q8_0.gguf` |
|
- Make sure to point `gguf-split` to the first chunk of the split. |
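
As an aside, recent llama.cpp builds can usually load a split GGUF directly when pointed at the first chunk, so merging is mainly needed for tools that expect a single file. A minimal sketch, reusing the placeholder chunk name from step 3:

```
# load the split directly; the remaining chunks are picked up automatically
./main -m aya-23-35B.Q8_0/aya-23-35B.Q8_0-00001-of-XXXXX.gguf -p "Hello" -n 32
```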
|
|
|
--- |
|
|
|
Got a suggestion? Ping me [@legraphista](https://x.com/legraphista)! |