bartowski committed on
Commit
edaf344
1 Parent(s): 5f3ca60

Llamacpp quants

.gitattributes CHANGED
@@ -33,3 +33,16 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+OpenHercules-2.5-Mistral-7B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+OpenHercules-2.5-Mistral-7B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+OpenHercules-2.5-Mistral-7B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+OpenHercules-2.5-Mistral-7B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+OpenHercules-2.5-Mistral-7B-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+OpenHercules-2.5-Mistral-7B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+OpenHercules-2.5-Mistral-7B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+OpenHercules-2.5-Mistral-7B-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+OpenHercules-2.5-Mistral-7B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+OpenHercules-2.5-Mistral-7B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+OpenHercules-2.5-Mistral-7B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+OpenHercules-2.5-Mistral-7B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+OpenHercules-2.5-Mistral-7B-fp16.gguf filter=lfs diff=lfs merge=lfs -text
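Each added line applies the same Git LFS attribute pattern to one quantized file. As a minimal sketch (the quant suffix list is taken from the diff above; the snippet itself is illustrative, not part of the commit), the added block could be generated like this:

```python
# Generate the .gitattributes lines added in this commit.
# Quant suffixes are copied from the diff above.
BASE = "OpenHercules-2.5-Mistral-7B"
QUANTS = ["Q2_K", "Q3_K_L", "Q3_K_M", "Q3_K_S", "Q4_0", "Q4_K_M", "Q4_K_S",
          "Q5_0", "Q5_K_M", "Q5_K_S", "Q6_K", "Q8_0", "fp16"]

# Each pattern routes the file through the LFS clean/smudge filter
# so only a small pointer file is stored in the git history.
lines = [f"{BASE}-{q}.gguf filter=lfs diff=lfs merge=lfs -text" for q in QUANTS]
```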
OpenHercules-2.5-Mistral-7B-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b4637bb01cb77f4d8c1532c25a217cb8db6bd19e0381e2730784fbab960d8249
+size 2719241984
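The three added lines form a standard Git LFS pointer file: a spec version, the SHA-256 of the real blob, and its byte size. A minimal parsing sketch (the `parse_lfs_pointer` helper is hypothetical, using the pointer contents shown above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its version, oid, and size fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return {
        "version": fields["version"],
        "oid": fields["oid"].removeprefix("sha256:"),
        "size": int(fields["size"]),  # size of the actual blob in bytes
    }

# The Q2_K pointer from the hunk above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:b4637bb01cb77f4d8c1532c25a217cb8db6bd19e0381e2730784fbab960d8249
size 2719241984
"""
info = parse_lfs_pointer(pointer)
```

Note that 2719241984 bytes is about 2.71 GB, matching the Q2_K entry in the README table below in this commit.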
OpenHercules-2.5-Mistral-7B-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3530cd73a1c6d0a35853a6a7c5ae4aad5652542133d6629d7f43310067fb8b16
+size 3822024448
OpenHercules-2.5-Mistral-7B-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:084c82130a61826935df67d6b6fa4835a7649be4dd46ed3ce5936c8e328e1817
+size 3518985984
OpenHercules-2.5-Mistral-7B-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:09c30fc649a48f1458e329130a4f8def70740edefa650f78bd4e6c515e869ac7
+size 3164567296
OpenHercules-2.5-Mistral-7B-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:87d55fad73c24737f271ded82f2803316b5ff5327d6396065f2c9cf01216ab4b
+size 4108916480
OpenHercules-2.5-Mistral-7B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8335e2741ee18e7e1b6d822508e7292e22a12d7615275a228c5c3e1fc9a99813
+size 4368439040
OpenHercules-2.5-Mistral-7B-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1d61bc33c7da700076d220779b5d7019b150bca3ee3e053639e68151177f8e91
+size 4140373760
OpenHercules-2.5-Mistral-7B-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:de25da3989c823a4b9fce48dea6242d6c99b2b6566c2c62f80cc23c9f8bbcdc3
+size 4997715712
OpenHercules-2.5-Mistral-7B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ece5d4f8d7fff6ec8355760a70ccbbf46c038a47a988ff5c22c6d5913b4d07c
+size 5131409152
OpenHercules-2.5-Mistral-7B-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b29bc19fca62c0c9d5d9d98bd6bcd43112dd340bb9b7a6e23c345733c285acd9
+size 4997715712
OpenHercules-2.5-Mistral-7B-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4f69c632b7e5a7e9834e638894afedc028319f9a075cf75ba815aebf9bb683bb
+size 5942064896
OpenHercules-2.5-Mistral-7B-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f2662b648ddefa7628ede3ecadb30c53e8f662fceccf08568d83bfe6c8d4f00
+size 7695857408
OpenHercules-2.5-Mistral-7B-fp16.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f89caf7684673d7870b316cdc9a8bfc306166aa8fa1bf322e49038dcd7f69ed3
+size 14484731584
README.md ADDED
@@ -0,0 +1,38 @@
+---
+tags:
+- merge
+- mergekit
+- lazymergekit
+- Locutusque/Hercules-2.5-Mistral-7B
+- teknium/OpenHermes-2.5-Mistral-7B
+base_model:
+- Locutusque/Hercules-2.5-Mistral-7B
+- teknium/OpenHermes-2.5-Mistral-7B
+quantized_by: bartowski
+pipeline_tag: text-generation
+---
+
+## Llamacpp Quantizations of OpenHercules-2.5-Mistral-7B
+
+Using <a href="https://github.com/ggerganov/llama.cpp/commit/fa974646e1a2024fc7dc9e6f27cf1f2f5d4a3763">llama.cpp commit fa97464</a> for quantization.
+
+Original model: https://huggingface.co/Locutusque/OpenHercules-2.5-Mistral-7B
+
+Download a file (not the whole branch) from below:
+
+| Filename | Quant type | File Size | Description |
+| -------- | ---------- | --------- | ----------- |
+| [OpenHercules-2.5-Mistral-7B-Q8_0.gguf](https://huggingface.co/bartowski/OpenHercules-2.5-Mistral-7B-GGUF/blob/main/OpenHercules-2.5-Mistral-7B-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
+| [OpenHercules-2.5-Mistral-7B-Q6_K.gguf](https://huggingface.co/bartowski/OpenHercules-2.5-Mistral-7B-GGUF/blob/main/OpenHercules-2.5-Mistral-7B-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
+| [OpenHercules-2.5-Mistral-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/OpenHercules-2.5-Mistral-7B-GGUF/blob/main/OpenHercules-2.5-Mistral-7B-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. |
+| [OpenHercules-2.5-Mistral-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/OpenHercules-2.5-Mistral-7B-GGUF/blob/main/OpenHercules-2.5-Mistral-7B-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. |
+| [OpenHercules-2.5-Mistral-7B-Q5_0.gguf](https://huggingface.co/bartowski/OpenHercules-2.5-Mistral-7B-GGUF/blob/main/OpenHercules-2.5-Mistral-7B-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
+| [OpenHercules-2.5-Mistral-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/OpenHercules-2.5-Mistral-7B-GGUF/blob/main/OpenHercules-2.5-Mistral-7B-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, similar to 4.25 bpw. |
+| [OpenHercules-2.5-Mistral-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/OpenHercules-2.5-Mistral-7B-GGUF/blob/main/OpenHercules-2.5-Mistral-7B-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
+| [OpenHercules-2.5-Mistral-7B-Q4_0.gguf](https://huggingface.co/bartowski/OpenHercules-2.5-Mistral-7B-GGUF/blob/main/OpenHercules-2.5-Mistral-7B-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
+| [OpenHercules-2.5-Mistral-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/OpenHercules-2.5-Mistral-7B-GGUF/blob/main/OpenHercules-2.5-Mistral-7B-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
+| [OpenHercules-2.5-Mistral-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/OpenHercules-2.5-Mistral-7B-GGUF/blob/main/OpenHercules-2.5-Mistral-7B-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
+| [OpenHercules-2.5-Mistral-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/OpenHercules-2.5-Mistral-7B-GGUF/blob/main/OpenHercules-2.5-Mistral-7B-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
+| [OpenHercules-2.5-Mistral-7B-Q2_K.gguf](https://huggingface.co/bartowski/OpenHercules-2.5-Mistral-7B-GGUF/blob/main/OpenHercules-2.5-Mistral-7B-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended. |
+
+Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
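The README table amounts to a lookup from available memory to a recommended file. As a minimal sketch (the `best_quant` helper and its 1.5 GB headroom default are illustrative assumptions, not part of the repo; sizes are copied from the table):

```python
# File sizes in GB, copied from the README table above.
QUANTS = {"Q8_0": 7.69, "Q6_K": 5.94, "Q5_K_M": 5.13, "Q5_K_S": 4.99,
          "Q5_0": 4.99, "Q4_K_M": 4.36, "Q4_K_S": 4.14, "Q4_0": 4.10,
          "Q3_K_L": 3.82, "Q3_K_M": 3.51, "Q3_K_S": 3.16, "Q2_K": 2.71}

def best_quant(ram_gb: float, headroom_gb: float = 1.5):
    """Return the largest quant whose file fits in ram_gb, leaving
    headroom_gb spare for KV cache and runtime overhead (an assumed margin)."""
    fitting = {q: s for q, s in QUANTS.items() if s + headroom_gb <= ram_gb}
    return max(fitting, key=fitting.get) if fitting else None
```

For example, `best_quant(8)` picks Q6_K, which matches the table's own *recommended* entry for a typical 8 GB budget.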