mradermacher committed on
Commit
d46e80a
1 Parent(s): 7659447

auto-patch README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -11,8 +11,9 @@ extra_gated_prompt: |-
   Access to this model requires reading and agreeing to the following agreement [here](https://github.com/Nanbeige/Nanbeige/blob/main/License_Agreement_for_Large_Language_Models_Nanbeige.pdf)
 language:
 - en
+- zh
 library_name: transformers
-license: other
+license: apache-2.0
 quantized_by: mradermacher
 tags:
 - llm
@@ -46,7 +47,7 @@ more details, including on how to concatenate multi-part files.
 | [GGUF](https://huggingface.co/mradermacher/Nanbeige1.5-8B-Chat-GGUF/resolve/main/Nanbeige1.5-8B-Chat.Q3_K_L.gguf) | Q3_K_L | 4.6 | |
 | [GGUF](https://huggingface.co/mradermacher/Nanbeige1.5-8B-Chat-GGUF/resolve/main/Nanbeige1.5-8B-Chat.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
 | [GGUF](https://huggingface.co/mradermacher/Nanbeige1.5-8B-Chat-GGUF/resolve/main/Nanbeige1.5-8B-Chat.Q4_0.gguf) | Q4_0 | 4.8 | fast, low quality |
-| [GGUF](https://huggingface.co/mradermacher/Nanbeige1.5-8B-Chat-GGUF/resolve/main/Nanbeige1.5-8B-Chat.IQ4_NL.gguf) | IQ4_NL | 4.8 | slightly worse than Q4_K_S |
+| [GGUF](https://huggingface.co/mradermacher/Nanbeige1.5-8B-Chat-GGUF/resolve/main/Nanbeige1.5-8B-Chat.IQ4_NL.gguf) | IQ4_NL | 4.8 | prefer IQ4_XS |
 | [GGUF](https://huggingface.co/mradermacher/Nanbeige1.5-8B-Chat-GGUF/resolve/main/Nanbeige1.5-8B-Chat.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
 | [GGUF](https://huggingface.co/mradermacher/Nanbeige1.5-8B-Chat-GGUF/resolve/main/Nanbeige1.5-8B-Chat.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
 | [GGUF](https://huggingface.co/mradermacher/Nanbeige1.5-8B-Chat-GGUF/resolve/main/Nanbeige1.5-8B-Chat.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
@@ -54,7 +55,6 @@ more details, including on how to concatenate multi-part files.
 | [GGUF](https://huggingface.co/mradermacher/Nanbeige1.5-8B-Chat-GGUF/resolve/main/Nanbeige1.5-8B-Chat.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
 | [GGUF](https://huggingface.co/mradermacher/Nanbeige1.5-8B-Chat-GGUF/resolve/main/Nanbeige1.5-8B-Chat.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
 
-
 Here is a handy graph by ikawrakow comparing some lower-quality quant
 types (lower is better):
 
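Every link in the quant table above follows the same Hugging Face `resolve` URL pattern, and the size/notes columns are what you would filter on when choosing a file. A minimal sketch of that, with the table transcribed as data; `quant_url` and `smallest_recommended` are illustrative helper names (not part of any library), and repo id, filenames, sizes, and notes are copied from the table:

```python
# Repo id and filename base taken from the table's download links.
REPO = "mradermacher/Nanbeige1.5-8B-Chat-GGUF"
BASE = "Nanbeige1.5-8B-Chat"

# (quant type, size in GB, notes) for the rows shown in this diff's context.
QUANTS = [
    ("Q3_K_L", 4.6, ""),
    ("IQ4_XS", 4.6, ""),
    ("Q4_0",   4.8, "fast, low quality"),
    ("IQ4_NL", 4.8, "prefer IQ4_XS"),
    ("Q4_K_S", 4.9, "fast, recommended"),
    ("Q4_K_M", 5.1, "fast, recommended"),
    ("Q5_K_S", 5.8, ""),
    ("Q6_K",   6.8, "very good quality"),
    ("Q8_0",   8.6, "fast, best quality"),
]

def quant_url(quant: str) -> str:
    """Direct download URL, following the pattern used by every table link."""
    return f"https://huggingface.co/{REPO}/resolve/main/{BASE}.{quant}.gguf"

def smallest_recommended() -> str:
    """Smallest quant whose notes say 'recommended'."""
    return min((q for q in QUANTS if "recommended" in q[2]), key=lambda q: q[1])[0]

print(quant_url(smallest_recommended()))
```

For this table the smallest "recommended" row is Q4_K_S at 4.9 GB; the same data could just as easily drive a size cap (e.g. the largest quant that fits in a given amount of RAM).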