Transformers
GGUF
English
text generation
instruct
Inference Endpoints
mradermacher committed on
Commit
e20740c
1 Parent(s): 766d22a

auto-patch README.md

Files changed (1)
  1. README.md +0 -1
README.md CHANGED
@@ -52,7 +52,6 @@ more details, including on how to concatenate multi-part files.
 | [GGUF](https://huggingface.co/mradermacher/pygmalion-2-13b-GGUF/resolve/main/pygmalion-2-13b.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
 | [GGUF](https://huggingface.co/mradermacher/pygmalion-2-13b-GGUF/resolve/main/pygmalion-2-13b.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
 
-
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
 
 
52
  | [GGUF](https://huggingface.co/mradermacher/pygmalion-2-13b-GGUF/resolve/main/pygmalion-2-13b.Q6_K.gguf) | Q6_K | 10.8 | very good quality |
53
  | [GGUF](https://huggingface.co/mradermacher/pygmalion-2-13b-GGUF/resolve/main/pygmalion-2-13b.Q8_0.gguf) | Q8_0 | 13.9 | fast, best quality |
54
 
 
55
  Here is a handy graph by ikawrakow comparing some lower-quality quant
56
  types (lower is better):
57