Add q8_0.bin in two-part ZIP archive.
- .gitattributes +1 -0
- README.md +17 -1
- guanaco-65B.ggmlv3.q8_0.z01 +3 -0
- guanaco-65B.ggmlv3.q8_0.zip +3 -0
.gitattributes
CHANGED
@@ -32,3 +32,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+guanaco-65B.ggmlv3.q8_0.z01 filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -32,7 +32,23 @@ I have quantised the GGML files in this repo with the latest version. Therefore
 | guanaco-65B.ggmlv3.q4_1.bin | q4_1 | 4 | 40.81 GB | 43.31 GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models. |
 | guanaco-65B.ggmlv3.q5_0.bin | q5_0 | 5 | 44.89 GB | 47.39 GB | 5-bit. Higher accuracy, higher resource usage and slower inference. |
 | guanaco-65B.ggmlv3.q5_1.bin | q5_1 | 5 | 48.97 GB | 51.47 GB | 5-bit. Even higher accuracy and resource usage, and slower inference. |
+| guanaco-65B.ggmlv3.q8_0.bin | q8_0 | 8 | 69.37 GB | 71.87 GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for most use cases. |
 
+### q8_0 file requires expansion from archive
+
+**Note:** HF does not support uploading files larger than 50 GB. Therefore I have uploaded the q8_0 file as a multi-part ZIP archive. The ZIP is not compressed; it just stores the .bin file in two parts.
+
+To extract it, please download both:
+* `guanaco-65B.ggmlv3.q8_0.zip`
+* `guanaco-65B.ggmlv3.q8_0.z01`
+
+and extract the .zip archive. This will expand both parts automatically. On Linux I found I had to use `7zip` - the basic `unzip` tool did not work. Example:
+```
+sudo apt update -y && sudo apt install 7zip
+7zz x guanaco-65B.ggmlv3.q8_0.zip # Once the q8_0.bin is extracted you can delete the .zip and .z01
+```
+
+On Windows you can hopefully just double-click the ZIP and extract it. If that fails, download WinRAR.
 
 ## How to run in `llama.cpp`
 
@@ -54,4 +70,4 @@ Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](http
 
 Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
 
-# Original model card: Tim Dettmers' Guanaco 65B
+# Original model card: Tim Dettmers' Guanaco 65B
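The README note above says the archive stores the .bin uncompressed in two parts. As a minimal sketch of what "store-only" means, the following creates an uncompressed single-file zip with Python's `zipfile`. The filenames here are hypothetical stand-ins; note that `zipfile` cannot produce the multi-part (`.z01`) split itself - that is done by an external tool such as Info-ZIP's `zip -s <size>`.

```python
import os
import zipfile

# Hypothetical filenames for illustration, not the real 69 GB model file.
SOURCE = "model.bin"
ARCHIVE = "model.zip"

def make_store_only_zip(src: str, dest: str) -> None:
    """Create an uncompressed (store-only) zip, as described in the README note."""
    with zipfile.ZipFile(dest, "w", compression=zipfile.ZIP_STORED) as zf:
        zf.write(src)

if __name__ == "__main__":
    with open(SOURCE, "wb") as f:
        f.write(os.urandom(1024))  # small stand-in payload
    make_store_only_zip(SOURCE, ARCHIVE)
    info = zipfile.ZipFile(ARCHIVE).getinfo(SOURCE)
    # With ZIP_STORED, the stored size equals the original size.
    print(info.compress_type == zipfile.ZIP_STORED, info.compress_size == info.file_size)
```

Because nothing is compressed, extraction is pure I/O; the split only works around the upload size limit.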
guanaco-65B.ggmlv3.q8_0.z01
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:28945aff5504d8e6e2b35ac73b34766b739e2458218a1c80a90c9889faa35165
+size 41943040000
guanaco-65B.ggmlv3.q8_0.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a398b66af93eb96d8f92506f242f459e248e1f783937dfd48a2e0002cfc1e51e
+size 27427327940
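The git-lfs pointers committed above record each part's sha256 oid and byte size. A small sketch (a hypothetical helper, not part of this repo) for checking a downloaded part against its pointer before attempting extraction:

```python
import hashlib
import os

# oid/size pairs copied from the git-lfs pointer files in this commit
PARTS = {
    "guanaco-65B.ggmlv3.q8_0.z01": (
        "28945aff5504d8e6e2b35ac73b34766b739e2458218a1c80a90c9889faa35165",
        41943040000,
    ),
    "guanaco-65B.ggmlv3.q8_0.zip": (
        "a398b66af93eb96d8f92506f242f459e248e1f783937dfd48a2e0002cfc1e51e",
        27427327940,
    ),
}

def verify_lfs_pointer(path: str, oid: str, size: int) -> bool:
    """Return True if the file matches the pointer's byte size and sha256 oid."""
    if os.path.getsize(path) != size:
        return False  # cheap check first; a short download fails here
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest() == oid

if __name__ == "__main__":
    for name, (oid, size) in PARTS.items():
        if os.path.exists(name):
            print(name, "OK" if verify_lfs_pointer(name, oid, size) else "MISMATCH")
```

Hashing ~70 GB takes a while, but a size mismatch (the usual symptom of an interrupted download) is caught instantly.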