mradermacher committed
Commit 907b5f2
Parent: ff4a483

auto-patch README.md

Files changed (1)
  1. README.md +12 -1
README.md CHANGED
@@ -16,7 +16,7 @@ quantized_by: mradermacher
  static quants of https://huggingface.co/01-ai/Yi-1.5-34B-32K
 
  <!-- provided-files -->
- weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
+ weighted/imatrix quants are available at https://huggingface.co/mradermacher/Yi-1.5-34B-32K-i1-GGUF
  ## Usage
 
  If you are unsure how to use GGUF files, refer to one of [TheBloke's
@@ -29,8 +29,19 @@ more details, including on how to concatenate multi-part files.
 
  | Link | Type | Size/GB | Notes |
  |:-----|:-----|--------:|:------|
+ | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-32K-GGUF/resolve/main/Yi-1.5-34B-32K.Q2_K.gguf) | Q2_K | 12.9 | |
+ | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-32K-GGUF/resolve/main/Yi-1.5-34B-32K.IQ3_XS.gguf) | IQ3_XS | 14.3 | |
+ | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-32K-GGUF/resolve/main/Yi-1.5-34B-32K.Q3_K_S.gguf) | Q3_K_S | 15.1 | |
  | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-32K-GGUF/resolve/main/Yi-1.5-34B-32K.IQ3_S.gguf) | IQ3_S | 15.1 | beats Q3_K* |
+ | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-32K-GGUF/resolve/main/Yi-1.5-34B-32K.IQ3_M.gguf) | IQ3_M | 15.7 | |
+ | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-32K-GGUF/resolve/main/Yi-1.5-34B-32K.Q3_K_M.gguf) | Q3_K_M | 16.8 | lower quality |
+ | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-32K-GGUF/resolve/main/Yi-1.5-34B-32K.Q3_K_L.gguf) | Q3_K_L | 18.2 | |
+ | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-32K-GGUF/resolve/main/Yi-1.5-34B-32K.IQ4_XS.gguf) | IQ4_XS | 18.7 | |
  | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-32K-GGUF/resolve/main/Yi-1.5-34B-32K.Q4_K_S.gguf) | Q4_K_S | 19.7 | fast, recommended |
+ | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-32K-GGUF/resolve/main/Yi-1.5-34B-32K.Q4_K_M.gguf) | Q4_K_M | 20.8 | fast, recommended |
+ | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-32K-GGUF/resolve/main/Yi-1.5-34B-32K.Q5_K_S.gguf) | Q5_K_S | 23.8 | |
+ | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-32K-GGUF/resolve/main/Yi-1.5-34B-32K.Q5_K_M.gguf) | Q5_K_M | 24.4 | |
+ | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-32K-GGUF/resolve/main/Yi-1.5-34B-32K.Q6_K.gguf) | Q6_K | 28.3 | very good quality |
  | [GGUF](https://huggingface.co/mradermacher/Yi-1.5-34B-32K-GGUF/resolve/main/Yi-1.5-34B-32K.Q8_0.gguf) | Q8_0 | 36.6 | fast, best quality |
 
  Here is a handy graph by ikawrakow comparing some lower-quality quant
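
For context, a minimal sketch of fetching and loading one of the quants listed in the updated table, assuming the `huggingface_hub` and `llama-cpp-python` packages are installed; the chosen file (Q4_K_S) and the parameters are illustrative only, not part of this repository:

```python
# Minimal sketch: download one static quant from the GGUF repo and load it
# locally. Assumes huggingface_hub and llama-cpp-python are installed; the
# quant choice (Q4_K_S, ~19.7 GB) and parameters below are illustrative.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="mradermacher/Yi-1.5-34B-32K-GGUF",
    filename="Yi-1.5-34B-32K.Q4_K_S.gguf",
)

# n_ctx can be raised toward the model's 32K context if memory allows.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Q: What is the capital of France? A:", max_tokens=32)
print(out["choices"][0]["text"])
```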