morriszms committed on
Commit
408c240
1 Parent(s): 7ce18ce

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ openbuddy-mistral-22b-v21.1-32k-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ openbuddy-mistral-22b-v21.1-32k-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ openbuddy-mistral-22b-v21.1-32k-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ openbuddy-mistral-22b-v21.1-32k-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ openbuddy-mistral-22b-v21.1-32k-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ openbuddy-mistral-22b-v21.1-32k-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ openbuddy-mistral-22b-v21.1-32k-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ openbuddy-mistral-22b-v21.1-32k-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ openbuddy-mistral-22b-v21.1-32k-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ openbuddy-mistral-22b-v21.1-32k-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ openbuddy-mistral-22b-v21.1-32k-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ openbuddy-mistral-22b-v21.1-32k-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,92 @@
+ ---
+ license: apache-2.0
+ language:
+ - zh
+ - en
+ - fr
+ - de
+ - ja
+ - ko
+ - it
+ - ru
+ - fi
+ pipeline_tag: text-generation
+ inference: false
+ library_name: transformers
+ tags:
+ - mixtral
+ - TensorBlock
+ - GGUF
+ base_model: OpenBuddy/openbuddy-mistral-22b-v21.1-32k
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## OpenBuddy/openbuddy-mistral-22b-v21.1-32k - GGUF
+
+ This repo contains GGUF format model files for [OpenBuddy/openbuddy-mistral-22b-v21.1-32k](https://huggingface.co/OpenBuddy/openbuddy-mistral-22b-v21.1-32k).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
+
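+ If you want to build a compatible llama.cpp yourself, the following is a minimal sketch using the standard CMake build (checking out the exact commit is optional; any build at or after b4011 should also work):
+
+ ```shell
+ # Sketch: build llama.cpp at the commit referenced above
+ git clone https://github.com/ggerganov/llama.cpp
+ cd llama.cpp
+ git checkout a6744e43e80f4be6398fc7733a01642c846dce1d  # commit b4011
+ cmake -B build
+ cmake --build build --config Release
+ ```
+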
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Run them on the TensorBlock client using your local machine ↗
+ </a>
+ </div>
+
+ ## Prompt template
+
+ ```
+ <|role|>system<|says|>{system_prompt}<|end|>
+ <|role|>user<|says|>{prompt}<|end|>
+ <|role|>assistant<|says|>
+ ```
+
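+ As a quick smoke test, here is a minimal sketch of running one of the files with `llama-cli` and the template above (the binary path, quant choice, context size, and example messages are illustrative, and assume a llama.cpp build at or after commit b4011):
+
+ ```shell
+ # Sketch: generate a short reply from the Q4_K_M file using the OpenBuddy prompt template
+ ./build/bin/llama-cli \
+   -m openbuddy-mistral-22b-v21.1-32k-Q4_K_M.gguf \
+   -c 4096 -n 256 \
+   -p '<|role|>system<|says|>You are a helpful assistant.<|end|>
+ <|role|>user<|says|>Hello, who are you?<|end|>
+ <|role|>assistant<|says|>'
+ ```
+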
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [openbuddy-mistral-22b-v21.1-32k-Q2_K.gguf](https://huggingface.co/tensorblock/openbuddy-mistral-22b-v21.1-32k-GGUF/blob/main/openbuddy-mistral-22b-v21.1-32k-Q2_K.gguf) | Q2_K | 7.761 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [openbuddy-mistral-22b-v21.1-32k-Q3_K_S.gguf](https://huggingface.co/tensorblock/openbuddy-mistral-22b-v21.1-32k-GGUF/blob/main/openbuddy-mistral-22b-v21.1-32k-Q3_K_S.gguf) | Q3_K_S | 9.042 GB | very small, high quality loss |
+ | [openbuddy-mistral-22b-v21.1-32k-Q3_K_M.gguf](https://huggingface.co/tensorblock/openbuddy-mistral-22b-v21.1-32k-GGUF/blob/main/openbuddy-mistral-22b-v21.1-32k-Q3_K_M.gguf) | Q3_K_M | 10.080 GB | very small, high quality loss |
+ | [openbuddy-mistral-22b-v21.1-32k-Q3_K_L.gguf](https://huggingface.co/tensorblock/openbuddy-mistral-22b-v21.1-32k-GGUF/blob/main/openbuddy-mistral-22b-v21.1-32k-Q3_K_L.gguf) | Q3_K_L | 10.987 GB | small, substantial quality loss |
+ | [openbuddy-mistral-22b-v21.1-32k-Q4_0.gguf](https://huggingface.co/tensorblock/openbuddy-mistral-22b-v21.1-32k-GGUF/blob/main/openbuddy-mistral-22b-v21.1-32k-Q4_0.gguf) | Q4_0 | 11.775 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [openbuddy-mistral-22b-v21.1-32k-Q4_K_S.gguf](https://huggingface.co/tensorblock/openbuddy-mistral-22b-v21.1-32k-GGUF/blob/main/openbuddy-mistral-22b-v21.1-32k-Q4_K_S.gguf) | Q4_K_S | 11.860 GB | small, greater quality loss |
+ | [openbuddy-mistral-22b-v21.1-32k-Q4_K_M.gguf](https://huggingface.co/tensorblock/openbuddy-mistral-22b-v21.1-32k-GGUF/blob/main/openbuddy-mistral-22b-v21.1-32k-Q4_K_M.gguf) | Q4_K_M | 12.494 GB | medium, balanced quality - recommended |
+ | [openbuddy-mistral-22b-v21.1-32k-Q5_0.gguf](https://huggingface.co/tensorblock/openbuddy-mistral-22b-v21.1-32k-GGUF/blob/main/openbuddy-mistral-22b-v21.1-32k-Q5_0.gguf) | Q5_0 | 14.348 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [openbuddy-mistral-22b-v21.1-32k-Q5_K_S.gguf](https://huggingface.co/tensorblock/openbuddy-mistral-22b-v21.1-32k-GGUF/blob/main/openbuddy-mistral-22b-v21.1-32k-Q5_K_S.gguf) | Q5_K_S | 14.348 GB | large, low quality loss - recommended |
+ | [openbuddy-mistral-22b-v21.1-32k-Q5_K_M.gguf](https://huggingface.co/tensorblock/openbuddy-mistral-22b-v21.1-32k-GGUF/blob/main/openbuddy-mistral-22b-v21.1-32k-Q5_K_M.gguf) | Q5_K_M | 14.718 GB | large, very low quality loss - recommended |
+ | [openbuddy-mistral-22b-v21.1-32k-Q6_K.gguf](https://huggingface.co/tensorblock/openbuddy-mistral-22b-v21.1-32k-GGUF/blob/main/openbuddy-mistral-22b-v21.1-32k-Q6_K.gguf) | Q6_K | 17.081 GB | very large, extremely low quality loss |
+ | [openbuddy-mistral-22b-v21.1-32k-Q8_0.gguf](https://huggingface.co/tensorblock/openbuddy-mistral-22b-v21.1-32k-GGUF/blob/main/openbuddy-mistral-22b-v21.1-32k-Q8_0.gguf) | Q8_0 | 22.123 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face Hub command-line client:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/openbuddy-mistral-22b-v21.1-32k-GGUF --include "openbuddy-mistral-22b-v21.1-32k-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/openbuddy-mistral-22b-v21.1-32k-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
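+
+ Optionally, you can check a downloaded file against the Git LFS checksum recorded in this repository. For example, for the Q2_K file (the expected value below is the sha256 oid from its LFS pointer in this commit):
+
+ ```shell
+ # Sketch: verify the download against the LFS-recorded SHA-256
+ sha256sum MY_LOCAL_DIR/openbuddy-mistral-22b-v21.1-32k-Q2_K.gguf
+ # expected: 36a484c43fd009f1ec691b86ae0eb7c327dfe819522aac02c074c658e521cf03
+ ```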
openbuddy-mistral-22b-v21.1-32k-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:36a484c43fd009f1ec691b86ae0eb7c327dfe819522aac02c074c658e521cf03
+ size 8333682400
openbuddy-mistral-22b-v21.1-32k-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fefc581e1b6b59db884ff0ab385da717839aa12676c01dfb81b5512f3ca247e3
+ size 11797448416
openbuddy-mistral-22b-v21.1-32k-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ccf0c91f2663a9e43d7dae4a5c95cf8b256cdd9ee78a6b0f8b99b58634453bee
+ size 10823845600
openbuddy-mistral-22b-v21.1-32k-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8a571a330a7b411f61b51b2fd22e070cdc31a4c8412ab6f31c7fbbf5de4d62d3
+ size 9708291808
openbuddy-mistral-22b-v21.1-32k-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cf661d08212f029f514d759b01a52c06b0416890e321d4de964d3b860f7881a7
+ size 12643280608
openbuddy-mistral-22b-v21.1-32k-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e063c8bcd1bf5a927e14acc03625c54a32b02bbb70804de1d8ebd1822819d082
+ size 13415360224
openbuddy-mistral-22b-v21.1-32k-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bb60f9475e458258273d00162dc445faebb25209d94378dbd950ce90f76f345d
+ size 12734506720
openbuddy-mistral-22b-v21.1-32k-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:64d5235d38ad0297dca836c24f2eacdb60567e6c4489682ebc3db0ac17863b99
+ size 15405623008
openbuddy-mistral-22b-v21.1-32k-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ae9bcf460eae00de43163ca245653587f94d064285eac5126891ac125b19d92
+ size 15803360992
openbuddy-mistral-22b-v21.1-32k-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ddc8d04d432a18215f0d78301f2ce0213be77adeed3ba021acf1ff653628d234
+ size 15405623008
openbuddy-mistral-22b-v21.1-32k-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f156139e016fca175cc5243c2cdff48da4c562bf09c94b1ef74e75e96a32e7ba
+ size 18340611808
openbuddy-mistral-22b-v21.1-32k-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3cd5fe6212f6283a39376a99ba45f82da02569b598a8467d98a072958041cb8f
+ size 23754360544