bartowski committed on
Commit c47faab
1 Parent(s): 96bcb1a

Llamacpp quants
.gitattributes CHANGED
@@ -33,3 +33,19 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.2-OpenHermes-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.2-OpenHermes-IQ3_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.2-OpenHermes-IQ4_NL.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.2-OpenHermes-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.2-OpenHermes-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.2-OpenHermes-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.2-OpenHermes-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.2-OpenHermes-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.2-OpenHermes-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.2-OpenHermes-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.2-OpenHermes-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.2-OpenHermes-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.2-OpenHermes-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.2-OpenHermes-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.2-OpenHermes-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-7B-v0.2-OpenHermes-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-7B-v0.2-OpenHermes-IQ3_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4a2fa1be401faa6992f7f56b5d85a48c42277c92582d0b2fed5ffaf46535d2d
+ size 3284891840
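Each `ADDED` .gguf entry in this commit stores a small Git LFS pointer rather than the multi-gigabyte weights themselves. As an illustrative sketch (not part of the commit), the three-line pointer format can be parsed like this:

```python
# Sketch: parse a Git LFS pointer file -- the three key/value lines
# committed in place of each large .gguf blob. The sample text is the
# IQ3_M pointer from this commit.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:e4a2fa1be401faa6992f7f56b5d85a48c42277c92582d0b2fed5ffaf46535d2d
size 3284891840
"""

def parse_pointer(text: str) -> dict:
    # Each line is "key value"; oid is "algorithm:hex-digest".
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "algo": algo,
        "digest": digest,
        "size": int(fields["size"]),
    }

info = parse_pointer(POINTER)
print(info["size"])  # 3284891840 bytes, i.e. the 3.28GB IQ3_M file
```

The `size` field is the byte count of the real object, which is why the pointer sections below line up with the file sizes listed in the README table.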
Mistral-7B-v0.2-OpenHermes-IQ3_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5e49c72f93f2ab3e8afb72091f115993aba58bf8d6c486631c1b263173a942c6
+ size 3182393536
Mistral-7B-v0.2-OpenHermes-IQ4_NL.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:678864e82f2f399c9302e584930b117bafe097207b02fa50e6e41ab7ec3bba54
+ size 4155054272
Mistral-7B-v0.2-OpenHermes-IQ4_XS.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a2b5377124d273f18c4310766221ead469cea476de3e3f92db964a4cf23f16dd
+ size 3944388800
Mistral-7B-v0.2-OpenHermes-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc8c5547d01ecd10efbdd5151a1ae08f632cba4ac2180baff8ffe9bc7d4e5c3b
+ size 2719242432
Mistral-7B-v0.2-OpenHermes-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:82ae6e5978e27d488f95c18f5a88c507389c6348c9678a826b7d83a8e400e22c
+ size 3822024896
Mistral-7B-v0.2-OpenHermes-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:645cb3f8a3df614a276d1e228fb5c11de84fc924e5aea116346b6b88c4b6d2c4
+ size 3518986432
Mistral-7B-v0.2-OpenHermes-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c289f254ae66e6a98d0297467f51de487693c9517372b6e7db75fb7550ca75de
+ size 3164567744
Mistral-7B-v0.2-OpenHermes-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0fd64ff91283cfc3c3889e1d4989e62b78a9edb9ed0b1807b496196b19474ebf
+ size 4108916928
Mistral-7B-v0.2-OpenHermes-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4fe0ba1ecacdb0ef6e9a9d9d16337b51713a2d68c0c52a440e6f4051a686ec4
+ size 4368439488
Mistral-7B-v0.2-OpenHermes-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:68296d0632e6e7a3f267b624e97206e16a3a039117b8281f0e096eda9e3727ef
+ size 4140374208
Mistral-7B-v0.2-OpenHermes-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c1964888b6e4000de0a024da611588551b544c8b978284ed73bf8dfb09d07d6
+ size 4997716160
Mistral-7B-v0.2-OpenHermes-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b2d779c44d1cfa88ff6044c2fbab44a8f3ca45417778f6186a700509fe41693
+ size 5131409600
Mistral-7B-v0.2-OpenHermes-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67af3dee4257e290dd60f7b0ad980dd16648a24fee0603639b951e800386de3b
+ size 4997716160
Mistral-7B-v0.2-OpenHermes-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5a3487ba103c56d7a87c3f586aec24a2c5a337d89915f8d1385dab5b12112a12
+ size 5942065344
Mistral-7B-v0.2-OpenHermes-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9058396b0daa330e96660a4e9a44db27be423620e5aab124a8f7aaac03c74369
+ size 7695857856
README.md ADDED
@@ -0,0 +1,44 @@
+ ---
+ language:
+ - en
+ license: apache-2.0
+ tags:
+ - text-generation-inference
+ - transformers
+ - unsloth
+ - mistral
+ - trl
+ - sft
+ base_model: alpindale/Mistral-7B-v0.2
+ quantized_by: bartowski
+ pipeline_tag: text-generation
+ ---
+
+ ## Llamacpp Quantizations of Mistral-7B-v0.2-OpenHermes
+
+ Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2440">b2440</a> for quantization.
+
+ Original model: https://huggingface.co/macadeliccc/Mistral-7B-v0.2-OpenHermes
+
+ Download a file (not the whole branch) from below:
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [Mistral-7B-v0.2-OpenHermes-Q8_0.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.2-OpenHermes-GGUF/blob/main/Mistral-7B-v0.2-OpenHermes-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
+ | [Mistral-7B-v0.2-OpenHermes-Q6_K.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.2-OpenHermes-GGUF/blob/main/Mistral-7B-v0.2-OpenHermes-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
+ | [Mistral-7B-v0.2-OpenHermes-Q5_K_M.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.2-OpenHermes-GGUF/blob/main/Mistral-7B-v0.2-OpenHermes-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. |
+ | [Mistral-7B-v0.2-OpenHermes-Q5_K_S.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.2-OpenHermes-GGUF/blob/main/Mistral-7B-v0.2-OpenHermes-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. |
+ | [Mistral-7B-v0.2-OpenHermes-Q5_0.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.2-OpenHermes-GGUF/blob/main/Mistral-7B-v0.2-OpenHermes-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
+ | [Mistral-7B-v0.2-OpenHermes-Q4_K_M.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.2-OpenHermes-GGUF/blob/main/Mistral-7B-v0.2-OpenHermes-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, similar to 4.25 bpw. |
+ | [Mistral-7B-v0.2-OpenHermes-Q4_K_S.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.2-OpenHermes-GGUF/blob/main/Mistral-7B-v0.2-OpenHermes-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
+ | [Mistral-7B-v0.2-OpenHermes-IQ4_NL.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.2-OpenHermes-GGUF/blob/main/Mistral-7B-v0.2-OpenHermes-IQ4_NL.gguf) | IQ4_NL | 4.15GB | Good quality, similar to Q4_K_S, new method of quanting. |
+ | [Mistral-7B-v0.2-OpenHermes-IQ4_XS.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.2-OpenHermes-GGUF/blob/main/Mistral-7B-v0.2-OpenHermes-IQ4_XS.gguf) | IQ4_XS | 3.94GB | Decent quality, new method with similar performance to Q4. |
+ | [Mistral-7B-v0.2-OpenHermes-Q4_0.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.2-OpenHermes-GGUF/blob/main/Mistral-7B-v0.2-OpenHermes-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
+ | [Mistral-7B-v0.2-OpenHermes-IQ3_M.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.2-OpenHermes-GGUF/blob/main/Mistral-7B-v0.2-OpenHermes-IQ3_M.gguf) | IQ3_M | 3.28GB | Medium-low quality, new method with decent performance. |
+ | [Mistral-7B-v0.2-OpenHermes-IQ3_S.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.2-OpenHermes-GGUF/blob/main/Mistral-7B-v0.2-OpenHermes-IQ3_S.gguf) | IQ3_S | 3.18GB | Lower quality, new method with decent performance, recommended over Q3 quants. |
+ | [Mistral-7B-v0.2-OpenHermes-Q3_K_L.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.2-OpenHermes-GGUF/blob/main/Mistral-7B-v0.2-OpenHermes-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
+ | [Mistral-7B-v0.2-OpenHermes-Q3_K_M.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.2-OpenHermes-GGUF/blob/main/Mistral-7B-v0.2-OpenHermes-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
+ | [Mistral-7B-v0.2-OpenHermes-Q3_K_S.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.2-OpenHermes-GGUF/blob/main/Mistral-7B-v0.2-OpenHermes-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
+ | [Mistral-7B-v0.2-OpenHermes-Q2_K.gguf](https://huggingface.co/bartowski/Mistral-7B-v0.2-OpenHermes-GGUF/blob/main/Mistral-7B-v0.2-OpenHermes-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended. |
+
+ Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
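The README's table links point at `blob/main` pages, which are Hugging Face's web views of each file; swapping in `resolve/main` yields the direct-download URL for the raw .gguf. A minimal sketch (the helper name `gguf_url` is hypothetical, not from this repo):

```python
# Sketch: build a direct-download URL for a single quant file from this
# repo, rather than fetching the whole branch. "resolve/main" serves the
# raw file; "blob/main" (as linked in the README table) is the web view.
REPO = "bartowski/Mistral-7B-v0.2-OpenHermes-GGUF"

def gguf_url(quant: str) -> str:
    # quant is one of the table's "Quant type" values, e.g. "Q4_K_M".
    filename = f"Mistral-7B-v0.2-OpenHermes-{quant}.gguf"
    return f"https://huggingface.co/{REPO}/resolve/main/{filename}"

print(gguf_url("Q4_K_M"))
# https://huggingface.co/bartowski/Mistral-7B-v0.2-OpenHermes-GGUF/resolve/main/Mistral-7B-v0.2-OpenHermes-Q4_K_M.gguf
```

The same single-file download can also be done with the `huggingface_hub` library's `hf_hub_download(repo_id, filename)` helper.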