mradermacher committed
Commit d9a8b6a
1 Parent(s): 92c8442

auto-patch README.md

Files changed (1): README.md (+0 -1)
README.md CHANGED
@@ -51,7 +51,6 @@ more details, including on how to concatenate multi-part files.
 | [GGUF](https://huggingface.co/mradermacher/NEBULA-23B-v1.0-GGUF/resolve/main/NEBULA-23B-v1.0.Q6_K.gguf) | Q6_K | 19.6 | very good quality |
 | [GGUF](https://huggingface.co/mradermacher/NEBULA-23B-v1.0-GGUF/resolve/main/NEBULA-23B-v1.0.Q8_0.gguf) | Q8_0 | 25.4 | fast, best quality |
 
-
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):