bartowski committed
Commit 8afbeae
1 Parent(s): f53e048

Llamacpp quants

.gitattributes CHANGED
@@ -35,3 +35,25 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 Smegmma-Deluxe-9B-v1-f32.gguf filter=lfs diff=lfs merge=lfs -text
 Smegmma-Deluxe-9B-v1.imatrix filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-IQ2_M.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-IQ2_S.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-IQ2_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-IQ3_M.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-IQ3_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-IQ3_XXS.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-IQ4_XS.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-Q2_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-Q3_K_XL.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-Q4_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-Q5_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-Q6_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Smegmma-Deluxe-9B-v1-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -33,28 +33,28 @@ Note that this model does not support a System prompt.
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [Smegmma-Deluxe-9B-v1-Q8_0.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-Q8_0.gguf) | Q8_0 | | Extremely high quality, generally unneeded but max available quant. |
-| [Smegmma-Deluxe-9B-v1-Q6_K_L.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-Q6_K_L.gguf) | Q6_K_L | | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
-| [Smegmma-Deluxe-9B-v1-Q6_K.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-Q6_K.gguf) | Q6_K | | Very high quality, near perfect, *recommended*. |
-| [Smegmma-Deluxe-9B-v1-Q5_K_L.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-Q5_K_L.gguf) | Q5_K_L | | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
-| [Smegmma-Deluxe-9B-v1-Q5_K_M.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-Q5_K_M.gguf) | Q5_K_M | | High quality, *recommended*. |
-| [Smegmma-Deluxe-9B-v1-Q5_K_S.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-Q5_K_S.gguf) | Q5_K_S | | High quality, *recommended*. |
-| [Smegmma-Deluxe-9B-v1-Q4_K_L.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-Q4_K_L.gguf) | Q4_K_L | | Uses Q8_0 for embed and output weights. Good quality, uses about 4.83 bits per weight, *recommended*. |
-| [Smegmma-Deluxe-9B-v1-Q4_K_M.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-Q4_K_M.gguf) | Q4_K_M | | Good quality, uses about 4.83 bits per weight, *recommended*. |
-| [Smegmma-Deluxe-9B-v1-Q4_K_S.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-Q4_K_S.gguf) | Q4_K_S | | Slightly lower quality with more space savings, *recommended*. |
-| [Smegmma-Deluxe-9B-v1-IQ4_XS.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-IQ4_XS.gguf) | IQ4_XS | | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
-| [Smegmma-Deluxe-9B-v1-Q3_K_XL.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-Q3_K_XL.gguf) | Q3_K_XL | | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
-| [Smegmma-Deluxe-9B-v1-Q3_K_L.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-Q3_K_L.gguf) | Q3_K_L | | Lower quality but usable, good for low RAM availability. |
-| [Smegmma-Deluxe-9B-v1-Q3_K_M.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-Q3_K_M.gguf) | Q3_K_M | | Even lower quality. |
-| [Smegmma-Deluxe-9B-v1-IQ3_M.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-IQ3_M.gguf) | IQ3_M | | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
-| [Smegmma-Deluxe-9B-v1-Q3_K_S.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-Q3_K_S.gguf) | Q3_K_S | | Low quality, not recommended. |
-| [Smegmma-Deluxe-9B-v1-IQ3_XS.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-IQ3_XS.gguf) | IQ3_XS | | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
-| [Smegmma-Deluxe-9B-v1-IQ3_XXS.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-IQ3_XXS.gguf) | IQ3_XXS | | Lower quality, new method with decent performance, comparable to Q3 quants. |
-| [Smegmma-Deluxe-9B-v1-Q2_K_L.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-Q2_K_L.gguf) | Q2_K_L | | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
-| [Smegmma-Deluxe-9B-v1-Q2_K.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-Q2_K.gguf) | Q2_K | | Very low quality but surprisingly usable. |
-| [Smegmma-Deluxe-9B-v1-IQ2_M.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-IQ2_M.gguf) | IQ2_M | | Very low quality, uses SOTA techniques to also be surprisingly usable. |
-| [Smegmma-Deluxe-9B-v1-IQ2_S.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-IQ2_S.gguf) | IQ2_S | | Very low quality, uses SOTA techniques to be usable. |
-| [Smegmma-Deluxe-9B-v1-IQ2_XS.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF//main/Smegmma-Deluxe-9B-v1-IQ2_XS.gguf) | IQ2_XS | | Very low quality, uses SOTA techniques to be usable. |
+| [Smegmma-Deluxe-9B-v1-Q8_0.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-Q8_0.gguf) | Q8_0 | 9.82GB | Extremely high quality, generally unneeded but max available quant. |
+| [Smegmma-Deluxe-9B-v1-Q6_K_L.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-Q6_K_L.gguf) | Q6_K_L | 7.81GB | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
+| [Smegmma-Deluxe-9B-v1-Q6_K.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-Q6_K.gguf) | Q6_K | 7.58GB | Very high quality, near perfect, *recommended*. |
+| [Smegmma-Deluxe-9B-v1-Q5_K_L.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-Q5_K_L.gguf) | Q5_K_L | 6.86GB | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
+| [Smegmma-Deluxe-9B-v1-Q5_K_M.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-Q5_K_M.gguf) | Q5_K_M | 6.64GB | High quality, *recommended*. |
+| [Smegmma-Deluxe-9B-v1-Q5_K_S.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-Q5_K_S.gguf) | Q5_K_S | 6.48GB | High quality, *recommended*. |
+| [Smegmma-Deluxe-9B-v1-Q4_K_L.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-Q4_K_L.gguf) | Q4_K_L | 5.98GB | Uses Q8_0 for embed and output weights. Good quality, uses about 4.83 bits per weight, *recommended*. |
+| [Smegmma-Deluxe-9B-v1-Q4_K_M.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-Q4_K_M.gguf) | Q4_K_M | 5.76GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
+| [Smegmma-Deluxe-9B-v1-Q4_K_S.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-Q4_K_S.gguf) | Q4_K_S | 5.47GB | Slightly lower quality with more space savings, *recommended*. |
+| [Smegmma-Deluxe-9B-v1-IQ4_XS.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-IQ4_XS.gguf) | IQ4_XS | 5.18GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
+| [Smegmma-Deluxe-9B-v1-Q3_K_XL.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-Q3_K_XL.gguf) | Q3_K_XL | 5.35GB | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
+| [Smegmma-Deluxe-9B-v1-Q3_K_L.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-Q3_K_L.gguf) | Q3_K_L | 5.13GB | Lower quality but usable, good for low RAM availability. |
+| [Smegmma-Deluxe-9B-v1-Q3_K_M.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-Q3_K_M.gguf) | Q3_K_M | 4.76GB | Even lower quality. |
+| [Smegmma-Deluxe-9B-v1-IQ3_M.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-IQ3_M.gguf) | IQ3_M | 4.49GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
+| [Smegmma-Deluxe-9B-v1-Q3_K_S.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-Q3_K_S.gguf) | Q3_K_S | 4.33GB | Low quality, not recommended. |
+| [Smegmma-Deluxe-9B-v1-IQ3_XS.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-IQ3_XS.gguf) | IQ3_XS | 4.14GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
+| [Smegmma-Deluxe-9B-v1-IQ3_XXS.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-IQ3_XXS.gguf) | IQ3_XXS | 3.79GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
+| [Smegmma-Deluxe-9B-v1-Q2_K_L.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-Q2_K_L.gguf) | Q2_K_L | 4.02GB | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
+| [Smegmma-Deluxe-9B-v1-Q2_K.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-Q2_K.gguf) | Q2_K | 3.80GB | Very low quality but surprisingly usable. |
+| [Smegmma-Deluxe-9B-v1-IQ2_M.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-IQ2_M.gguf) | IQ2_M | 3.43GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
+| [Smegmma-Deluxe-9B-v1-IQ2_S.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-IQ2_S.gguf) | IQ2_S | 3.21GB | Very low quality, uses SOTA techniques to be usable. |
+| [Smegmma-Deluxe-9B-v1-IQ2_XS.gguf](https://huggingface.co/bartowski/Smegmma-Deluxe-9B-v1-GGUF/blob/main/Smegmma-Deluxe-9B-v1-IQ2_XS.gguf) | IQ2_XS | 3.06GB | Very low quality, uses SOTA techniques to be usable. |
 
 ## Credits
 
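The README's per-quant "bits per weight" figures can be roughly sanity-checked from the file sizes in this commit. A minimal sketch, assuming a Gemma-2-9B-class model with about 9.24e9 parameters (an assumption, not stated in the diff); the estimate lands a little above the nominal quant width because the files also carry higher-precision embedding and output tensors:

```python
# Rough bits-per-weight estimate for a quantized GGUF file.
# Assumption: ~9.24e9 parameters (Gemma-2-9B class); adjust for other models.
def bits_per_weight(file_size_bytes: int, n_params: float = 9.24e9) -> float:
    # 8 bits per byte, spread over every model parameter.
    return file_size_bytes * 8 / n_params

# Q4_K_M file added in this commit: 5,761,058,720 bytes.
print(round(bits_per_weight(5_761_058_720), 2))  # ≈ 4.99
```

The result is slightly above the table's "about 4.83 bits per weight" for Q4_K_M, consistent with some tensors being stored at higher precision.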
Smegmma-Deluxe-9B-v1-IQ2_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9dde8b761c17f9a1c1ffe6e944bc10e30526c30bbd1f2d1630cd4388505bc15f
+size 3434669984
Smegmma-Deluxe-9B-v1-IQ2_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4e7674c35f26fa9f9d3e68f5357ce9c7ff158d2e73adf4c62e3a5e0a255eddfc
+size 3211487136
Smegmma-Deluxe-9B-v1-IQ2_XS.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2898d01e394e119fbdefac96aa47c409805ff9d594e1f71ed1ce0d60b698c305
+size 3067381664
Smegmma-Deluxe-9B-v1-IQ3_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ab015f134b687f42a9f525449bd4616dc21c23fef85c73c50ba40329f1708001
+size 4494616480
Smegmma-Deluxe-9B-v1-IQ3_XS.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33d163074fa314c21a06b1a736e4dd7bd51e586189318fe183c4df1a11feab88
+size 4144990112
Smegmma-Deluxe-9B-v1-IQ3_XXS.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cf4562eaaf7149a557e1b2c89ac20082adece299e502913a0311ca3de46abf81
+size 3796740000
Smegmma-Deluxe-9B-v1-IQ4_XS.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:19e367cfabb4d9bec3c56197e20ec8068e22bf779ec593970a2d0a3c566714b1
+size 5183031200
Smegmma-Deluxe-9B-v1-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c0499c9d544364116aaf09db5c0d9f85393776f18aace7e46b1651da67f22c8c
+size 3805398944
Smegmma-Deluxe-9B-v1-Q2_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9923d5afc89f6315f9a4df11a59dc99fd4323882345303a58c04ee415b493031
+size 4027606944
Smegmma-Deluxe-9B-v1-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:61022c49e69409cf1edb908ee56ab2d08ccbd4132714fea92d1228e46d1b1313
+size 5132453792
Smegmma-Deluxe-9B-v1-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:deabe65eca123377cd793db3dc922e3b32b7bf922f8e7a360f0fa0f2f87f334b
+size 4761782176
Smegmma-Deluxe-9B-v1-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e728e7569e99a13a286ffa0b99c88321171937351982a84b738f6b73de685899
+size 4337665952
Smegmma-Deluxe-9B-v1-Q3_K_XL.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b67fe6f4adb886634a08b7f569d5a813b196680a95c4389f21850625231a443a
+size 5354661792
Smegmma-Deluxe-9B-v1-Q4_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2960996464c36452fd66494ea7de1b1eb799d1a70c3ef7989a746628cef00bf4
+size 5983266720
Smegmma-Deluxe-9B-v1-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:732ecb253ea0115453438fc1f4e3e31507719ddcf81890a86ad1d734beefdb6f
+size 5761058720
Smegmma-Deluxe-9B-v1-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ac093bc21c0f9aabf48a5e188f028cce2b2a201455153e0d86dcd6bc9740891
+size 5478926240
Smegmma-Deluxe-9B-v1-Q5_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8b66a7361287fae230bee50f88bdcc4a60dece601d518c09d175ae9752d968b1
+size 6869575584
Smegmma-Deluxe-9B-v1-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7668be2574ef13ceed6174599c389720cbf308b5e265bc847c6636187212d4e6
+size 6647367584
Smegmma-Deluxe-9B-v1-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2b5fb496defaf912d6731445179cd4d74b4cf6fab985a019bb8eb226f0dccc17
+size 6483593120
Smegmma-Deluxe-9B-v1-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d5b63572ff92ff1a0ee63ccbceb37c0d895d2e04b83c024777c68ebe2455582e
+size 7589070752
Smegmma-Deluxe-9B-v1-Q6_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:103c63b78e7a5f26eda578de86891ee120a52b4e6ce409a9fbea9fcd73e171af
+size 7811278752
Smegmma-Deluxe-9B-v1-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:65aaa40ea32fe81f8088dc0f855d660dbed4261fd6e2a2ebcd5b5fadf19a9e17
+size 9827149728