bartowski committed
Commit f142a65
1 Parent(s): 855bd1c

Llamacpp quants

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ AlphaHitchhiker-7B-v2-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ AlphaHitchhiker-7B-v2-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ AlphaHitchhiker-7B-v2-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ AlphaHitchhiker-7B-v2-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ AlphaHitchhiker-7B-v2-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ AlphaHitchhiker-7B-v2-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ AlphaHitchhiker-7B-v2-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ AlphaHitchhiker-7B-v2-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ AlphaHitchhiker-7B-v2-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ AlphaHitchhiker-7B-v2-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ AlphaHitchhiker-7B-v2-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ AlphaHitchhiker-7B-v2-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
AlphaHitchhiker-7B-v2-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3543dc60cbb4adc787418092bab93dedee7b4990ef10c63fb035691a65f6279c
+ size 2719242208
AlphaHitchhiker-7B-v2-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:146054f1b3cc6327eeccffc4632254834c7c8cb982e7d9b768cd2a59a62969b2
+ size 3822024672
AlphaHitchhiker-7B-v2-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:160bb90c53f9dcb0483d7f8e770ab44ca18c403e1f60c41764218fe34843e94a
+ size 3518986208
AlphaHitchhiker-7B-v2-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f7261f4aad401838ddd06e30d6817b14fb9f958e47131b6ba3b1ae345a442e73
+ size 3164567520
AlphaHitchhiker-7B-v2-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1ba38d4b997afefe65e242db9438938270f86eff3622cfc4ed09b5949ce35c2b
+ size 4108916704
AlphaHitchhiker-7B-v2-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:625f905ffd47cfe5eb34534d987be295c212dff9fb1450628e6baad0a6aaf925
+ size 4368439264
AlphaHitchhiker-7B-v2-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:460578ddc9958ef1002a1ef80200cfd05f7509fa5dfa98f40293e25b4cdf159f
+ size 4140373984
AlphaHitchhiker-7B-v2-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ee823260c25371b25a5efac536e24114c354fadfb9e35b372a05ad743903fb0
+ size 4997715936
AlphaHitchhiker-7B-v2-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2638bade2428f5dedba1084cc822526b6fd36e17e555c7c8277dd8d4d71e04e6
+ size 5131409376
AlphaHitchhiker-7B-v2-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d3e63f5e3ddf79bddf0023483bab9abd7b226fb0c77d4f66146377013b6e3697
+ size 4997715936
AlphaHitchhiker-7B-v2-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df5c2875746cae5d24f4d99dc4928a5aced2373c5b58fb653f69a39981b5570b
+ size 5942065120
AlphaHitchhiker-7B-v2-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:143f241459181933c04be3f9a2138f1c45008459ec52d205e06ea7a512186cee
+ size 7695857632
README.md ADDED
@@ -0,0 +1,40 @@
+ ---
+ language:
+ - en
+ license: apache-2.0
+ tags:
+ - text-generation-inference
+ - transformers
+ - unsloth
+ - mistral
+ - trl
+ - sft
+ base_model: mlabonne/AlphaMonarch-7B
+ quantized_by: bartowski
+ pipeline_tag: text-generation
+ ---
+
+ ## Llamacpp Quantizations of AlphaHitchhiker-7B-v2
+
+ Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2405">b2405</a> for quantization.
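For context, these quants follow the usual llama.cpp workflow: convert the original Hugging Face checkpoint to a full-precision GGUF, then run the `quantize` tool once per target type. A minimal sketch, assuming a local llama.cpp b2405 checkout and a local copy of the original model; the paths and the intermediate `ggml-model-f16.gguf` filename are assumptions, not details taken from this commit:

```python
# Minimal sketch of the typical llama.cpp (b2405 era) quantization flow.
# Paths, the intermediate f16 filename, and the chosen quant type are assumptions.
import subprocess

model_dir = "AlphaHitchhiker-7B-v2"            # local copy of the original HF model (assumed path)
f16_gguf = f"{model_dir}/ggml-model-f16.gguf"  # assumed default output of convert.py

# 1) Convert the Hugging Face checkpoint to a full-precision GGUF file.
subprocess.run(["python", "convert.py", model_dir], check=True)

# 2) Quantize it to one of the types listed in this repo, e.g. Q4_K_M.
subprocess.run(
    ["./quantize", f16_gguf, "AlphaHitchhiker-7B-v2-Q4_K_M.gguf", "Q4_K_M"],
    check=True,
)
```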
+
+ Original model: https://huggingface.co/macadeliccc/AlphaHitchhiker-7B-v2
+
+ Download a file (not the whole branch) from below:
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [AlphaHitchhiker-7B-v2-Q8_0.gguf](https://huggingface.co/bartowski/AlphaHitchhiker-7B-v2-GGUF/blob/main/AlphaHitchhiker-7B-v2-Q8_0.gguf) | Q8_0 | 7.69GB | Extremely high quality, generally unneeded but max available quant. |
+ | [AlphaHitchhiker-7B-v2-Q6_K.gguf](https://huggingface.co/bartowski/AlphaHitchhiker-7B-v2-GGUF/blob/main/AlphaHitchhiker-7B-v2-Q6_K.gguf) | Q6_K | 5.94GB | Very high quality, near perfect, *recommended*. |
+ | [AlphaHitchhiker-7B-v2-Q5_K_M.gguf](https://huggingface.co/bartowski/AlphaHitchhiker-7B-v2-GGUF/blob/main/AlphaHitchhiker-7B-v2-Q5_K_M.gguf) | Q5_K_M | 5.13GB | High quality, very usable. |
+ | [AlphaHitchhiker-7B-v2-Q5_K_S.gguf](https://huggingface.co/bartowski/AlphaHitchhiker-7B-v2-GGUF/blob/main/AlphaHitchhiker-7B-v2-Q5_K_S.gguf) | Q5_K_S | 4.99GB | High quality, very usable. |
+ | [AlphaHitchhiker-7B-v2-Q5_0.gguf](https://huggingface.co/bartowski/AlphaHitchhiker-7B-v2-GGUF/blob/main/AlphaHitchhiker-7B-v2-Q5_0.gguf) | Q5_0 | 4.99GB | High quality, older format, generally not recommended. |
+ | [AlphaHitchhiker-7B-v2-Q4_K_M.gguf](https://huggingface.co/bartowski/AlphaHitchhiker-7B-v2-GGUF/blob/main/AlphaHitchhiker-7B-v2-Q4_K_M.gguf) | Q4_K_M | 4.36GB | Good quality, similar to 4.25 bpw. |
+ | [AlphaHitchhiker-7B-v2-Q4_K_S.gguf](https://huggingface.co/bartowski/AlphaHitchhiker-7B-v2-GGUF/blob/main/AlphaHitchhiker-7B-v2-Q4_K_S.gguf) | Q4_K_S | 4.14GB | Slightly lower quality with small space savings. |
+ | [AlphaHitchhiker-7B-v2-Q4_0.gguf](https://huggingface.co/bartowski/AlphaHitchhiker-7B-v2-GGUF/blob/main/AlphaHitchhiker-7B-v2-Q4_0.gguf) | Q4_0 | 4.10GB | Decent quality, older format, generally not recommended. |
+ | [AlphaHitchhiker-7B-v2-Q3_K_L.gguf](https://huggingface.co/bartowski/AlphaHitchhiker-7B-v2-GGUF/blob/main/AlphaHitchhiker-7B-v2-Q3_K_L.gguf) | Q3_K_L | 3.82GB | Lower quality but usable, good for low RAM availability. |
+ | [AlphaHitchhiker-7B-v2-Q3_K_M.gguf](https://huggingface.co/bartowski/AlphaHitchhiker-7B-v2-GGUF/blob/main/AlphaHitchhiker-7B-v2-Q3_K_M.gguf) | Q3_K_M | 3.51GB | Even lower quality. |
+ | [AlphaHitchhiker-7B-v2-Q3_K_S.gguf](https://huggingface.co/bartowski/AlphaHitchhiker-7B-v2-GGUF/blob/main/AlphaHitchhiker-7B-v2-Q3_K_S.gguf) | Q3_K_S | 3.16GB | Low quality, not recommended. |
+ | [AlphaHitchhiker-7B-v2-Q2_K.gguf](https://huggingface.co/bartowski/AlphaHitchhiker-7B-v2-GGUF/blob/main/AlphaHitchhiker-7B-v2-Q2_K.gguf) | Q2_K | 2.71GB | Extremely low quality, *not* recommended. |
+
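A single file can also be fetched without cloning the whole branch; a minimal sketch using `huggingface_hub` (the chosen filename is just an example, pick any file from the table above):

```python
# Minimal sketch: download one quant file from this repo with huggingface_hub.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="bartowski/AlphaHitchhiker-7B-v2-GGUF",
    filename="AlphaHitchhiker-7B-v2-Q4_K_M.gguf",  # any filename from the table above
)
print(local_path)  # path to the downloaded GGUF in the local Hugging Face cache
```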
+ Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski