mradermacher committed
Commit 9bc7646
Parent: 141ebee

auto-patch README.md

Files changed (1): README.md +3 -1
README.md CHANGED
@@ -210,7 +210,7 @@ tags:
 static quants of https://huggingface.co/huihui-ai/Llama-3.3-70B-Instruct-abliterated-finetuned
 
 <!-- provided-files -->
-weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
+weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3.3-70B-Instruct-abliterated-finetuned-i1-GGUF
 ## Usage
 
 If you are unsure how to use GGUF files, refer to one of [TheBloke's
@@ -227,9 +227,11 @@ more details, including on how to concatenate multi-part files.
 | [GGUF](https://huggingface.co/mradermacher/Llama-3.3-70B-Instruct-abliterated-finetuned-GGUF/resolve/main/Llama-3.3-70B-Instruct-abliterated-finetuned.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
 | [GGUF](https://huggingface.co/mradermacher/Llama-3.3-70B-Instruct-abliterated-finetuned-GGUF/resolve/main/Llama-3.3-70B-Instruct-abliterated-finetuned.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
 | [GGUF](https://huggingface.co/mradermacher/Llama-3.3-70B-Instruct-abliterated-finetuned-GGUF/resolve/main/Llama-3.3-70B-Instruct-abliterated-finetuned.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3.3-70B-Instruct-abliterated-finetuned-GGUF/resolve/main/Llama-3.3-70B-Instruct-abliterated-finetuned.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
 | [GGUF](https://huggingface.co/mradermacher/Llama-3.3-70B-Instruct-abliterated-finetuned-GGUF/resolve/main/Llama-3.3-70B-Instruct-abliterated-finetuned.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
 | [GGUF](https://huggingface.co/mradermacher/Llama-3.3-70B-Instruct-abliterated-finetuned-GGUF/resolve/main/Llama-3.3-70B-Instruct-abliterated-finetuned.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
 | [GGUF](https://huggingface.co/mradermacher/Llama-3.3-70B-Instruct-abliterated-finetuned-GGUF/resolve/main/Llama-3.3-70B-Instruct-abliterated-finetuned.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
+| [GGUF](https://huggingface.co/mradermacher/Llama-3.3-70B-Instruct-abliterated-finetuned-GGUF/resolve/main/Llama-3.3-70B-Instruct-abliterated-finetuned.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
 | [PART 1](https://huggingface.co/mradermacher/Llama-3.3-70B-Instruct-abliterated-finetuned-GGUF/resolve/main/Llama-3.3-70B-Instruct-abliterated-finetuned.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3.3-70B-Instruct-abliterated-finetuned-GGUF/resolve/main/Llama-3.3-70B-Instruct-abliterated-finetuned.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
 | [PART 1](https://huggingface.co/mradermacher/Llama-3.3-70B-Instruct-abliterated-finetuned-GGUF/resolve/main/Llama-3.3-70B-Instruct-abliterated-finetuned.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3.3-70B-Instruct-abliterated-finetuned-GGUF/resolve/main/Llama-3.3-70B-Instruct-abliterated-finetuned.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |
 
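
The Q6_K and Q8_0 rows in the diff above link two-part files that must be concatenated back into a single `.gguf` before loading; the README's usage section covers how. As a minimal sketch, assuming the `.partXof2` files have been downloaded into the current directory (file name taken from the Q6_K row; this helper is plain standard-library Python, not part of the repo):

```python
import shutil
from pathlib import Path

# Rejoin a split GGUF by concatenating its parts in order (.part1of2, .part2of2).
# Adjust `name` for the Q8_0 variant as needed.
name = "Llama-3.3-70B-Instruct-abliterated-finetuned.Q6_K.gguf"
parts = sorted(Path(".").glob(name + ".part*"))
assert parts, f"no part files found for {name}"

with open(name, "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streamed copy; avoids loading ~30 GB parts into RAM
```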