mradermacher committed on
Commit 4ace6b0
1 Parent(s): eecf375

auto-patch README.md

Files changed (1)
  1. README.md +12 -3
README.md CHANGED
@@ -1,4 +1,5 @@
  ---
+ exported_from: OnlyThings/Only-70B-v0.6-laserSNR
  language:
  - en
  library_name: transformers
@@ -9,6 +10,7 @@ quantized_by: mradermacher
  static quants of https://huggingface.co/OnlyThings/Only-70B-v0.6-laserSNR

  <!-- provided-files -->
+ weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
  ## Usage

  If you are unsure how to use GGUF files, refer to one of [TheBloke's
@@ -26,13 +28,14 @@ more details, including on how to concatenate multi-part files.
  | [GGUF](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.Q3_K_S.gguf) | Q3_K_S | 30.3 | |
  | [GGUF](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.Q3_K_M.gguf) | Q3_K_M | 33.7 | lower quality |
  | [GGUF](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.Q3_K_L.gguf) | Q3_K_L | 36.6 | |
- | [GGUF](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.IQ4_NL.gguf) | IQ4_NL | 39.7 | fast, slightly worse than Q4_K_S |
- | [GGUF](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.Q4_K_S.gguf) | Q4_K_S | 39.7 | fast, medium quality |
- | [GGUF](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.Q4_K_M.gguf) | Q4_K_M | 41.8 | fast, medium quality |
+ | [GGUF](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.IQ4_NL.gguf) | IQ4_NL | 39.7 | slightly worse than Q4_K_S |
+ | [GGUF](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.Q4_K_S.gguf) | Q4_K_S | 39.7 | fast, recommended |
+ | [GGUF](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.Q4_K_M.gguf) | Q4_K_M | 41.8 | fast, recommended |
  | [GGUF](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.Q5_K_S.gguf) | Q5_K_S | 47.9 | |
  | [GGUF](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.Q5_K_M.gguf) | Q5_K_M | 49.2 | |
  | [PART 1](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.Q6_K.gguf.part2of2) | Q6_K | 57.0 | very good quality |
  | [PART 1](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.Q8_0.gguf.part2of2) | Q8_0 | 73.6 | fast, best quality |
+ | [PART 1](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.SOURCE.gguf.part1of3) [PART 2](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.SOURCE.gguf.part2of3) [PART 3](https://huggingface.co/mradermacher/Only-70B-v0.6-laserSNR-GGUF/resolve/main/Only-70B-v0.6-laserSNR.SOURCE.gguf.part3of3) | SOURCE | 138.1 | source gguf, only provided when it was hard to come by |


  Here is a handy graph by ikawrakow comparing some lower-quality quant
@@ -43,4 +46,10 @@ types (lower is better):
  And here are Artefact2's thoughts on the matter:
  https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

+ ## Thanks
+
+ I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
+ me use its servers and providing upgrades to my workstation to enable
+ this work in my free time.
+
  <!-- end -->
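
The Q6_K, Q8_0 and SOURCE quants in the table above are split into `.partXofY` pieces and have to be concatenated back into a single `.gguf` file before use, as the TheBloke READMEs linked in the Usage section describe. Below is a minimal sketch of that step in Python, assuming the two Q6_K parts from the table have already been downloaded into the current directory; the file names are taken from the table, everything else is illustrative.

```python
# Rejoin a multi-part GGUF download into one file.
# Equivalent to `cat file.part1of2 file.part2of2 > file.gguf` on Unix-like systems.
import shutil

parts = [
    "Only-70B-v0.6-laserSNR.Q6_K.gguf.part1of2",
    "Only-70B-v0.6-laserSNR.Q6_K.gguf.part2of2",
]

with open("Only-70B-v0.6-laserSNR.Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, out)  # streamed copy keeps memory use low
```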
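Once a single-file quant (or a reassembled multi-part one) is on disk, it can be loaded with any llama.cpp-based tool. Here is a minimal sketch using the `llama-cpp-python` bindings; the file name, context size and GPU-layer count are placeholder values, not settings recommended by this model card.

```python
# Load a GGUF quant with llama-cpp-python (`pip install llama-cpp-python`).
from llama_cpp import Llama

llm = Llama(
    model_path="Only-70B-v0.6-laserSNR.Q4_K_M.gguf",  # placeholder: path to a downloaded quant
    n_ctx=4096,        # context window; raise or lower to fit available memory
    n_gpu_layers=-1,   # offload all layers to the GPU; set to 0 for CPU-only inference
)

result = llm("Briefly explain what a GGUF file is.", max_tokens=64)
print(result["choices"][0]["text"])
```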