bartowski committed on
Commit
8c06546
1 Parent(s): 61e362d

Llamacpp quants

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Hyperion-2.1-Mistral-7B-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Hyperion-2.1-Mistral-7B-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Hyperion-2.1-Mistral-7B-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Hyperion-2.1-Mistral-7B-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Hyperion-2.1-Mistral-7B-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Hyperion-2.1-Mistral-7B-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Hyperion-2.1-Mistral-7B-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Hyperion-2.1-Mistral-7B-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Hyperion-2.1-Mistral-7B-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Hyperion-2.1-Mistral-7B-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Hyperion-2.1-Mistral-7B-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Hyperion-2.1-Mistral-7B-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Hyperion-2.1-Mistral-7B-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:04179c0fce0303fa494f2743c3e60b5a1eee5c6cd8fda9f3156675fb0ba9d81f
+ size 2719241984
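Each of the blobs in this commit is a Git LFS pointer file, not the GGUF binary itself: three lines giving the spec version, a `sha256` oid for the real file, and its size in bytes. A minimal sketch of parsing that format (the function name is my own, not part of git-lfs):

```python
# Parse a Git LFS pointer file (version / oid / size), as in the blobs
# added by this commit. Illustrative sketch, not part of git-lfs itself.

def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    algo, _, digest = fields["oid"].partition(":")
    return {
        "version": fields["version"],
        "oid_algo": algo,             # e.g. "sha256"
        "oid": digest,                # hex digest of the real file
        "size": int(fields["size"]),  # size of the real file in bytes
    }

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:04179c0fce0303fa494f2743c3e60b5a1eee5c6cd8fda9f3156675fb0ba9d81f
size 2719241984
"""
info = parse_lfs_pointer(pointer)
```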
Hyperion-2.1-Mistral-7B-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e64231429b6f95fef87142a486a6b47b213e5e9a663b3b3b00b769f79ae4a9b4
+ size 3822024448
Hyperion-2.1-Mistral-7B-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c0a40b0bfbf4aad60d7556b460ef299e508fbe3474e6af84687a5c81cda223b1
+ size 3518985984
Hyperion-2.1-Mistral-7B-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a94668a72aab071403d2910f7f7035c1ce5469bd128ce6772fdd3c3882f2525
+ size 3164567296
Hyperion-2.1-Mistral-7B-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cb2eed5e97aa1f478550dd08f3406c2e28543c6709a7a82402c97668ab91c4b2
+ size 4108916480
Hyperion-2.1-Mistral-7B-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b9f2d77414db13d4dc6e87e910cb7531db844ce245229fe00158cf2f8cb2eb6a
+ size 4368439040
Hyperion-2.1-Mistral-7B-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7ed093819ac8e773f6cbad47982cc675c62dfcdb5d321d6e556d0992f1d7d444
+ size 4140373760
Hyperion-2.1-Mistral-7B-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c944443ac83b4e943165c1ce0b953401115cd72e01ea84f45c13ba05a95ec088
+ size 4997715712
Hyperion-2.1-Mistral-7B-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d874081481b7f2859474faf0981e29457e8eee733950e33e6dee2383a09c267c
+ size 5131409152
Hyperion-2.1-Mistral-7B-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e5f361fa5ce4ad60e9ad2f8dc20f7c57d5f592ccbbbe974950b9106fabb12f8c
+ size 4997715712
Hyperion-2.1-Mistral-7B-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:905e7d02c6d6bf6407df740cc03fc71ac16a32ca10618d045b4878f46a1f9485
+ size 5942064896
Hyperion-2.1-Mistral-7B-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:023834f6567dcf2a0916db240afbafa3c2f54fa8ea5d686d1bb70b7ba17a4295
+ size 7695857408
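Since the pointers above carry a `sha256` oid and exact byte size, a downloaded GGUF can be checked against its pointer. A minimal verification sketch using the standard library (`git-lfs` performs an equivalent check on real checkouts; the function name here is illustrative):

```python
# Verify a downloaded file against its Git LFS pointer (sha256 oid + size).
# Illustrative sketch; not part of git-lfs or llama.cpp.
import hashlib
import os

def verify_lfs_file(path: str, expected_oid: str, expected_size: int) -> bool:
    # Cheap check first: the pointer records the exact byte size.
    if os.path.getsize(path) != expected_size:
        return False
    # Stream the file in 1 MiB chunks so multi-GB GGUFs don't load into RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == expected_oid
```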
README.md ADDED
@@ -0,0 +1,35 @@
+ ---
+ library_name: transformers
+ license: apache-2.0
+ datasets:
+ - Locutusque/hyperion-v2.0
+ language:
+ - en
+ quantized_by: bartowski
+ pipeline_tag: text-generation
+ ---
+
+ ## Llamacpp Quantizations of Hyperion-2.1-Mistral-7B
+
+ Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2354">b2354</a> for quantization.
+
+ Original model: https://huggingface.co/Locutusque/Hyperion-2.1-Mistral-7B
+
+ Download a file (not the whole branch) from below:
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [Hyperion-2.1-Mistral-7B-Q8_0.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
+ | [Hyperion-2.1-Mistral-7B-Q6_K.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
+ | [Hyperion-2.1-Mistral-7B-Q5_K_M.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. |
+ | [Hyperion-2.1-Mistral-7B-Q5_K_S.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. |
+ | [Hyperion-2.1-Mistral-7B-Q5_0.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
+ | [Hyperion-2.1-Mistral-7B-Q4_K_M.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, similar to 4.25 bpw. |
+ | [Hyperion-2.1-Mistral-7B-Q4_K_S.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
+ | [Hyperion-2.1-Mistral-7B-Q4_0.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
+ | [Hyperion-2.1-Mistral-7B-Q3_K_L.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
+ | [Hyperion-2.1-Mistral-7B-Q3_K_M.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
+ | [Hyperion-2.1-Mistral-7B-Q3_K_S.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
+ | [Hyperion-2.1-Mistral-7B-Q2_K.gguf](https://huggingface.co/bartowski/Hyperion-2.1-Mistral-7B-GGUF/blob/main/Hyperion-2.1-Mistral-7B-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended. |
+
+ Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
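A rough rule of thumb for the table above: pick the largest quant whose file size fits in your available RAM/VRAM with some headroom left for context and runtime overhead. A small sketch using the sizes listed in the table (the 1.5 GB headroom figure is an illustrative assumption, not a measured value):

```python
# Pick the largest quant from the README table that fits a memory budget.
# Sizes in GB are copied from the table; the default 1.5 GB headroom for
# context and runtime overhead is an assumption for illustration.
QUANTS = [
    ("Q8_0", 7.69), ("Q6_K", 5.94), ("Q5_K_M", 5.13), ("Q5_K_S", 4.99),
    ("Q5_0", 4.99), ("Q4_K_M", 4.36), ("Q4_K_S", 4.14), ("Q4_0", 4.10),
    ("Q3_K_L", 3.82), ("Q3_K_M", 3.51), ("Q3_K_S", 3.16), ("Q2_K", 2.71),
]

def pick_quant(mem_gb: float, headroom_gb: float = 1.5):
    budget = mem_gb - headroom_gb
    for name, size in QUANTS:  # listed largest-first, so first fit is best
        if size <= budget:
            return name
    return None  # nothing fits; too little memory for any quant
```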