andrijdavid committed
Commit: 1ba6596
Parent(s): 8df11c1

Upload folder using huggingface_hub

Files changed:
- .gitattributes +18 -0
- Mistral-7B-Merge-14-v0.1-Q2_K.gguf +3 -0
- Mistral-7B-Merge-14-v0.1-Q3_K.gguf +3 -0
- Mistral-7B-Merge-14-v0.1-Q3_K_L.gguf +3 -0
- Mistral-7B-Merge-14-v0.1-Q3_K_M.gguf +3 -0
- Mistral-7B-Merge-14-v0.1-Q3_K_S.gguf +3 -0
- Mistral-7B-Merge-14-v0.1-Q4_0.gguf +3 -0
- Mistral-7B-Merge-14-v0.1-Q4_1.gguf +3 -0
- Mistral-7B-Merge-14-v0.1-Q4_K.gguf +3 -0
- Mistral-7B-Merge-14-v0.1-Q4_K_M.gguf +3 -0
- Mistral-7B-Merge-14-v0.1-Q4_K_S.gguf +3 -0
- Mistral-7B-Merge-14-v0.1-Q5_0.gguf +3 -0
- Mistral-7B-Merge-14-v0.1-Q5_1.gguf +3 -0
- Mistral-7B-Merge-14-v0.1-Q5_K.gguf +3 -0
- Mistral-7B-Merge-14-v0.1-Q5_K_M.gguf +3 -0
- Mistral-7B-Merge-14-v0.1-Q5_K_S.gguf +3 -0
- Mistral-7B-Merge-14-v0.1-Q6_K.gguf +3 -0
- Mistral-7B-Merge-14-v0.1-Q8_0.gguf +3 -0
- Mistral-7B-Merge-14-v0.1-f16.gguf +3 -0
- README.md +5 -3
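The commit message says the folder was pushed with the huggingface_hub client. A minimal sketch of how such an upload can be done, assuming a local directory of quantized GGUF files and write access to the target repo (the folder path and repo id below are illustrative placeholders, not taken from this commit):

```python
from huggingface_hub import HfApi

# Minimal sketch: push a local folder of GGUF files to a Hub model repo.
# "local-gguf-quants" and the repo_id are illustrative placeholders.
api = HfApi()  # uses the token stored by `huggingface-cli login` by default
api.upload_folder(
    folder_path="local-gguf-quants",                       # directory holding the .gguf files
    repo_id="andrijdavid/Mistral-7B-Merge-14-v0.1-GGUF",   # assumed target repo
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```

Files matching the LFS patterns below are uploaded as LFS objects automatically; only the pointer files land in the Git history.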
.gitattributes CHANGED
@@ -33,3 +33,21 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-Q3_K.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-Q4_1.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-Q4_K.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-Q5_1.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-Q5_K.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+Mistral-7B-Merge-14-v0.1-f16.gguf filter=lfs diff=lfs merge=lfs -text
Mistral-7B-Merge-14-v0.1-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e83cda96ffb5a9a0d4017d9746c8bb8ac9dad3896ed25420b5e9a0b960d601ce
+size 3084034624
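Each added .gguf entry in this commit is a Git LFS pointer rather than the binary itself: a spec version, the sha256 object id, and the byte size of the real file. A minimal sketch of reading those three fields from a checked-out pointer file (the local path is illustrative):

```python
# Minimal sketch: parse the three fields of a Git LFS pointer file.
# The path is an illustrative placeholder for a locally checked-out pointer.
def read_lfs_pointer(path: str) -> dict:
    fields = {}
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields  # e.g. {"version": "https://...", "oid": "sha256:...", "size": "3084034624"}

pointer = read_lfs_pointer("Mistral-7B-Merge-14-v0.1-Q2_K.gguf")
print(pointer["oid"], int(pointer["size"]))
```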
Mistral-7B-Merge-14-v0.1-Q3_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a17d85229098a85a9a35466d3686b4ae187009aace1f187c731898f9d6799513
+size 3519922752

Mistral-7B-Merge-14-v0.1-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ec4e4d0728a29f3bec47ec45fbac68707080ff7d3a6b11044f21ca63946759f
+size 3822961216

Mistral-7B-Merge-14-v0.1-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a17d85229098a85a9a35466d3686b4ae187009aace1f187c731898f9d6799513
+size 3519922752

Mistral-7B-Merge-14-v0.1-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6db424574577430d9f4ce172bcf99091227e556e0114b8b7e498d35bfe36f086
+size 3165504064

Mistral-7B-Merge-14-v0.1-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:751fc918494cf5f4998ecfde96ab161514323ceda045f206080bdd550e38aab6
+size 4109853248

Mistral-7B-Merge-14-v0.1-Q4_1.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:beca274d8e101b5772dd701e6953c5b1964dc264ce0b344f8111618a868669ce
+size 4554252864

Mistral-7B-Merge-14-v0.1-Q4_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:19bcddfaf785c577d979d31f12c8423ac630460fd348b26b9f46aed2d7fb8a65
+size 4369375808

Mistral-7B-Merge-14-v0.1-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:19bcddfaf785c577d979d31f12c8423ac630460fd348b26b9f46aed2d7fb8a65
+size 4369375808

Mistral-7B-Merge-14-v0.1-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2738eef15e265f97cd51ba615649f3b8e7f5c2b85227100a97fc05fae65d131b
+size 4141310528

Mistral-7B-Merge-14-v0.1-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a0042f33bc572ba80a8db4ebe6e3993febb2e646247292e2b2c9df41e8bea778
+size 4998652480

Mistral-7B-Merge-14-v0.1-Q5_1.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6ca90da70da35df40d749fbd8d85f94830f789d086febee6c756175336c22131
+size 5443052096

Mistral-7B-Merge-14-v0.1-Q5_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:070ac877f543c61b47338b558f1df7c7881e3579b2e10dc9b97917754b8f4b1f
+size 5132345920

Mistral-7B-Merge-14-v0.1-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:070ac877f543c61b47338b558f1df7c7881e3579b2e10dc9b97917754b8f4b1f
+size 5132345920

Mistral-7B-Merge-14-v0.1-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ba84a080e8f78f256cdaa99eb1697faad6acd08ffe24c29c65371e965ba1817
+size 4998652480

Mistral-7B-Merge-14-v0.1-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1a75ba63f1f7ac85a42f0fdea744722bf8b7dc3fc4372e6bc70dfaff2b0f8274
+size 5943001664

Mistral-7B-Merge-14-v0.1-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fc9835bd5e3b3cc718b38ee280d45985580ced70a3dc6b3adc46f7c5a12c6694
+size 7696794176

Mistral-7B-Merge-14-v0.1-f16.gguf ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9cfb9debd66c642b2a2e0292ee466b34c5966aac8ffa500b17082aba231ff9aa
+size 14485668384
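The pointer files above only record what LFS stores; consumers normally fetch the resolved binaries through the Hub client. A minimal sketch of downloading one of the quantizations, assuming the repo id below (illustrative, inferred from the filenames) and a standard huggingface_hub install:

```python
from huggingface_hub import hf_hub_download

# Minimal sketch: resolve one quantized GGUF file from the Hub cache.
# The repo_id is an assumed/illustrative value for this upload.
local_path = hf_hub_download(
    repo_id="andrijdavid/Mistral-7B-Merge-14-v0.1-GGUF",
    filename="Mistral-7B-Merge-14-v0.1-Q4_K_M.gguf",
)
print(local_path)  # cached path to the ~4.4 GB Q4_K_M file (size 4369375808 bytes above)
```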
README.md CHANGED
@@ -2,9 +2,11 @@
 language:
 - en
 license: cc-by-nc-4.0
+tags:
+- GGUF
 quantized_by: andrijdavid
 ---
-# Mistral-7B-Merge-14-v0.1-
+# Mistral-7B-Merge-14-v0.1-GGUF
 - Original model: [Mistral-7B-Merge-14-v0.1](https://huggingface.co/EmbeddedLLM/Mistral-7B-Merge-14-v0.1)
 
 <!-- description start -->
@@ -36,12 +38,12 @@ Here is an incomplete list of clients and libraries that are known to support GG
 <details>
 <summary>Click to see details</summary>
 The new methods available are:
+
 * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
 * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
 * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
 * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
-* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
-Refer to the Provided Files table below to see what files use which methods, and how.
+* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
 </details>
 <!-- compatibility_gguf end -->
 
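The README changes above tag the repo as GGUF and document the k-quant methods used by the provided files. As a usage note, a minimal sketch of loading one of these quantizations with llama-cpp-python, a widely used GGUF-capable client (the local filename and generation settings are illustrative):

```python
from llama_cpp import Llama

# Minimal sketch: run a downloaded GGUF quantization with llama-cpp-python.
# The model path and settings are illustrative, not prescriptive.
llm = Llama(
    model_path="Mistral-7B-Merge-14-v0.1-Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU when one is available
)
out = llm("Q: What does the Q4_K_M suffix mean?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```

Smaller quantizations (Q2_K, Q3_K_*) trade accuracy for memory; Q5_K_M/Q6_K/Q8_0 and the f16 file sit at the other end of that trade-off, as the file sizes in this commit reflect.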