mradermacher committed on
Commit dcfde25 · verified · 1 Parent(s): 3a088f4

auto-patch README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED

@@ -4,9 +4,9 @@ language:
 - ko
 - en
 library_name: transformers
-quantized_by: mradermacher
 no_imatrix: '/ggml-quants.c:4453: GGML_ASSERT(besti1 >= 0 && besti2 >= 0 && best_k
   >= 0) failed'
+quantized_by: mradermacher
 ---
 ## About
 
@@ -18,7 +18,6 @@ no_imatrix: '/ggml-quants.c:4453: GGML_ASSERT(besti1 >= 0 && besti2 >= 0 && best_k
 static quants of https://huggingface.co/moreh/Llama-3-Motif-102B
 
 <!-- provided-files -->
-weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
 ## Usage
 
 If you are unsure how to use GGUF files, refer to one of [TheBloke's
@@ -35,6 +34,7 @@ more details, including on how to concatenate multi-part files.
 | [GGUF](https://huggingface.co/mradermacher/Llama-3-Motif-102B-GGUF/resolve/main/Llama-3-Motif-102B.Q3_K_S.gguf) | Q3_K_S | 44.4 | |
 | [GGUF](https://huggingface.co/mradermacher/Llama-3-Motif-102B-GGUF/resolve/main/Llama-3-Motif-102B.Q3_K_M.gguf) | Q3_K_M | 49.4 | lower quality |
 | [PART 1](https://huggingface.co/mradermacher/Llama-3-Motif-102B-GGUF/resolve/main/Llama-3-Motif-102B.Q3_K_L.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-Motif-102B-GGUF/resolve/main/Llama-3-Motif-102B.Q3_K_L.gguf.part2of2) | Q3_K_L | 53.8 | |
+| [PART 1](https://huggingface.co/mradermacher/Llama-3-Motif-102B-GGUF/resolve/main/Llama-3-Motif-102B.IQ4_XS.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-Motif-102B-GGUF/resolve/main/Llama-3-Motif-102B.IQ4_XS.gguf.part2of2) | IQ4_XS | 55.3 | |
 | [PART 1](https://huggingface.co/mradermacher/Llama-3-Motif-102B-GGUF/resolve/main/Llama-3-Motif-102B.Q4_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-Motif-102B-GGUF/resolve/main/Llama-3-Motif-102B.Q4_K_S.gguf.part2of2) | Q4_K_S | 58.2 | fast, recommended |
 | [PART 1](https://huggingface.co/mradermacher/Llama-3-Motif-102B-GGUF/resolve/main/Llama-3-Motif-102B.Q4_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-Motif-102B-GGUF/resolve/main/Llama-3-Motif-102B.Q4_K_M.gguf.part2of2) | Q4_K_M | 61.4 | fast, recommended |
 | [PART 1](https://huggingface.co/mradermacher/Llama-3-Motif-102B-GGUF/resolve/main/Llama-3-Motif-102B.Q5_K_S.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Llama-3-Motif-102B-GGUF/resolve/main/Llama-3-Motif-102B.Q5_K_S.gguf.part2of2) | Q5_K_S | 70.4 | |
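The larger quants in the table ship as `.part1of2`/`.part2of2` split files, and the README's usage section points to documentation on concatenating multi-part files. A minimal sketch of that step, using stand-in filenames (`model.gguf.*`) rather than the real downloads so the example is self-contained; with the actual part files, only the `cat` line is needed:

```shell
# Stand-in part files; replace with the downloaded
# Llama-3-Motif-102B.*.partNof2 files in practice.
printf 'part1-' > model.gguf.part1of2
printf 'part2'  > model.gguf.part2of2

# Concatenate the parts in order to recover the single GGUF file:
cat model.gguf.part1of2 model.gguf.part2of2 > model.gguf
```

The parts are plain byte-range splits, so joining them in numerical order reproduces the original file exactly.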