mradermacher committed on
Commit 24bc39f
1 Parent(s): 0c0add8

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +10 -1
README.md CHANGED

@@ -10,9 +10,17 @@ tags:
 - mergekit
 - merge
 ---
-weighted/imatrix quants of https://huggingface.co/wolfram/miquliz-120b-v2.0
+## About
 
+weighted/imatrix quants of https://huggingface.co/wolfram/miquliz-120b-v2.0
 <!-- provided-files -->
+
+## Usage
+
+If you are unsure how to use GGUF files, refer to one of [TheBloke's
+READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
+more details, including on how to concatenate multi-part files.
+
 ## Provided Quants
 
 | Link | Type | Size/GB | Notes |
@@ -30,4 +38,5 @@ weighted/imatrix quants of https://huggingface.co/wolfram/miquliz-120b-v2.0
 | [PART 1](https://huggingface.co/mradermacher/miquliz-120b-v2.0-i1-GGUF/resolve/main/miquliz-120b-v2.0.i1-Q4_K_M.gguf.split-aa) [PART 2](https://huggingface.co/mradermacher/miquliz-120b-v2.0-i1-GGUF/resolve/main/miquliz-120b-v2.0.i1-Q4_K_M.gguf.split-ab) | i1-Q4_K_M | 72.5 | fast, medium quality |
 | [PART 1](https://huggingface.co/mradermacher/miquliz-120b-v2.0-i1-GGUF/resolve/main/miquliz-120b-v2.0.i1-Q5_K_M.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/miquliz-120b-v2.0-i1-GGUF/resolve/main/miquliz-120b-v2.0.i1-Q5_K_M.gguf.part2of2) | i1-Q5_K_M | 85.3 | best weighted quant |
 
+
 <!-- end -->
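The "Usage" section added by this commit points to TheBloke's READMEs for concatenating multi-part files; for split quants like the i1-Q4_K_M parts listed above, this amounts to a plain byte-wise `cat` of the parts in order. A minimal self-contained sketch (the `printf` lines create tiny stand-in parts so the example runs without the multi-gigabyte downloads):

```shell
# Stand-ins for the two downloaded parts of the i1-Q4_K_M quant
# (in practice these would be the real .split-aa / .split-ab files).
printf 'part-one-' > miquliz-120b-v2.0.i1-Q4_K_M.gguf.split-aa
printf 'part-two'  > miquliz-120b-v2.0.i1-Q4_K_M.gguf.split-ab

# Concatenate the parts in order to rebuild the single GGUF file.
cat miquliz-120b-v2.0.i1-Q4_K_M.gguf.split-aa \
    miquliz-120b-v2.0.i1-Q4_K_M.gguf.split-ab \
    > miquliz-120b-v2.0.i1-Q4_K_M.gguf

cat miquliz-120b-v2.0.i1-Q4_K_M.gguf   # prints "part-one-part-two"
```

Note this applies to dumb byte splits (`.split-aa`/`.part1of2` style); files split with llama.cpp's own GGUF-aware split tooling have their own merge workflow.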