Commit 267b185 by InferenceIllusionist
1 Parent(s): d32fd3c

Update README.md

Files changed (1):
  1. README.md +3 -1
README.md CHANGED
@@ -40,7 +40,9 @@ Please note importance matrix quantizations are a work in progress, IQ3 and above
  <b>Tip:</b> Pick a size that can fit in your GPU while still allowing some room for context for best speed from the table below. You may need to pad this further depending on if you are running image gen or TTS as well.
  | Quant | Size (GB) | Comments |
  |:-----|--------:|:------|
- | [IQ2_S](https://huggingface.co/Quant-Cartel/Neophanis-8x7B-iMat-GGUF/resolve/main/Neophanis-8x7B-iMat-IQ2_S.gguf?download=true) | 14.1 | |
+ | [IQ2_XXS](https://huggingface.co/Quant-Cartel/Neophanis-8x7B-iMat-GGUF/resolve/main/Neophanis-8x7B-iMat-IQ2_XXS.gguf?download=true) | 12.6 | |
+ | [IQ2_XS](https://huggingface.co/Quant-Cartel/Neophanis-8x7B-iMat-GGUF/resolve/main/Neophanis-8x7B-iMat-IQ2_XS.gguf?download=true) | 13.9 | |
+ | [IQ2_S](https://huggingface.co/Quant-Cartel/Neophanis-8x7B-iMat-GGUF/resolve/main/Neophanis-8x7B-iMat-IQ2_S.gguf?download=true) | 14.1 | Roughly the biggest quant that can fit fully offloaded to 16gb VRAM |
  | [IQ2_M](https://huggingface.co/Quant-Cartel/Neophanis-8x7B-iMat-GGUF/resolve/main/Neophanis-8x7B-iMat-IQ2_M.gguf?download=true) | 15.5 | |
  | [IQ3_XXS](https://huggingface.co/Quant-Cartel/Neophanis-8x7B-iMat-GGUF/resolve/main/Neophanis-8x7B-iMat-IQ3_XXS.gguf?download=true) | 18.2 | Better response quality than IQ2|
  | [IQ3_XS](https://huggingface.co/Quant-Cartel/Neophanis-8x7B-iMat-GGUF/resolve/main/Neophanis-8x7B-iMat-IQ3_XS.gguf?download=true) | 19.3 | |
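For reference, any quant in the table above can also be fetched programmatically rather than through the browser download links. Below is a minimal sketch using the huggingface_hub client; the IQ2_S filename added in this commit is used as the example and can be swapped for any other entry in the table.

```python
# Minimal sketch: fetch one of the GGUF quants listed above via huggingface_hub.
# The repo_id and filename are taken from the download links in the table;
# substitute any other quant's filename to download a different size.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="Quant-Cartel/Neophanis-8x7B-iMat-GGUF",
    filename="Neophanis-8x7B-iMat-IQ2_S.gguf",
)
print(f"Downloaded to: {local_path}")
```

The resulting local path can then be passed to a GGUF-compatible runtime such as llama.cpp, keeping the VRAM sizing tip above in mind when choosing which file to pull.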