Transformers
GGUF
English
Chinese
Inference Endpoints
mradermacher committed on
Commit
d7582ac
1 Parent(s): 224dce5

auto-patch README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -7,6 +7,7 @@ datasets:
   - jondurbin/truthy-dpo-v0.1
 language:
   - en
+  - zh
 library_name: transformers
 license: mit
 quantized_by: mradermacher
@@ -48,7 +49,6 @@ more details, including on how to concatenate multi-part files.
 | [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-DPO-GGUF/resolve/main/Faro-Yi-34B-DPO.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
 | [GGUF](https://huggingface.co/mradermacher/Faro-Yi-34B-DPO-GGUF/resolve/main/Faro-Yi-34B-DPO.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
 
-
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
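For context, each GGUF row in the table above resolves to a direct download URL on the Hub. A minimal sketch of building that link (the repo id, filename, and URL pattern are taken from the diff itself; the helper function is illustrative, not part of any library):

```python
# Build the direct-download URL for a quant file listed in the README table.
# Pattern: https://huggingface.co/<repo_id>/resolve/<revision>/<filename>
REPO_ID = "mradermacher/Faro-Yi-34B-DPO-GGUF"

def quant_url(filename: str, revision: str = "main") -> str:
    """Return the direct link for a GGUF file in this repo."""
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{filename}"

print(quant_url("Faro-Yi-34B-DPO.Q6_K.gguf"))
```

This reproduces exactly the Q6_K link shown in the diff; swapping in the Q8_0 filename yields the other row's URL.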