Initial GGUF model commit
README.md CHANGED
@@ -109,54 +109,22 @@ Refer to the Provided Files table below to see what files use which methods, and
 
 | Name | Quant method | Bits | Size | Max RAM required | Use case |
 | ---- | ---- | ---- | ---- | ---- | ----- |
+| [model_007-70b.Q6_K.gguf-split-b](https://huggingface.co/TheBloke/model_007-70B-GGUF/blob/main/model_007-70b.Q6_K.gguf-split-b) | Q6_K | 6 | 19.89 GB| 22.39 GB | very large, extremely low quality loss |
 | [model_007-70b.Q2_K.gguf](https://huggingface.co/TheBloke/model_007-70B-GGUF/blob/main/model_007-70b.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
 | [model_007-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/model_007-70B-GGUF/blob/main/model_007-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
 | [model_007-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/model_007-70B-GGUF/blob/main/model_007-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
 | [model_007-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/model_007-70B-GGUF/blob/main/model_007-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
+| [model_007-70b.Q8_0.gguf-split-b](https://huggingface.co/TheBloke/model_007-70B-GGUF/blob/main/model_007-70b.Q8_0.gguf-split-b) | Q8_0 | 8 | 36.59 GB| 39.09 GB | very large, extremely low quality loss - not recommended |
+| [model_007-70b.Q6_K.gguf-split-a](https://huggingface.co/TheBloke/model_007-70B-GGUF/blob/main/model_007-70b.Q6_K.gguf-split-a) | Q6_K | 6 | 36.70 GB| 39.20 GB | very large, extremely low quality loss |
+| [model_007-70b.Q8_0.gguf-split-a](https://huggingface.co/TheBloke/model_007-70B-GGUF/blob/main/model_007-70b.Q8_0.gguf-split-a) | Q8_0 | 8 | 36.70 GB| 39.20 GB | very large, extremely low quality loss - not recommended |
+| [model_007-70b.Q4_0.gguf](https://huggingface.co/TheBloke/model_007-70B-GGUF/blob/main/model_007-70b.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
 | [model_007-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/model_007-70B-GGUF/blob/main/model_007-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
 | [model_007-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/model_007-70B-GGUF/blob/main/model_007-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
 | [model_007-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/model_007-70B-GGUF/blob/main/model_007-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
+| [model_007-70b.Q5_0.gguf](https://huggingface.co/TheBloke/model_007-70B-GGUF/blob/main/model_007-70b.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
 | [model_007-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/model_007-70B-GGUF/blob/main/model_007-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
-| model_007-70b.Q6_K.gguf | q6_K | 6 | 56.82 GB | 59.32 GB | very large, extremely low quality loss |
-| model_007-70b.Q8_0.gguf | q8_0 | 8 | 73.29 GB | 75.79 GB | very large, extremely low quality loss - not recommended |
 
 **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
-
-### Q6_K and Q8_0 files are split and require joining
-
-**Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
-
-<details>
-  <summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
-
-### q6_K
-Please download:
-* `model_007-70b.Q6_K.gguf-split-a`
-* `model_007-70b.Q6_K.gguf-split-b`
-
-### q8_0
-Please download:
-* `model_007-70b.Q8_0.gguf-split-a`
-* `model_007-70b.Q8_0.gguf-split-b`
-
-To join the files, do the following:
-
-Linux and macOS:
-```
-cat model_007-70b.Q6_K.gguf-split-* > model_007-70b.Q6_K.gguf && rm model_007-70b.Q6_K.gguf-split-*
-cat model_007-70b.Q8_0.gguf-split-* > model_007-70b.Q8_0.gguf && rm model_007-70b.Q8_0.gguf-split-*
-```
-Windows command line:
-```
-COPY /B model_007-70b.Q6_K.gguf-split-a + model_007-70b.Q6_K.gguf-split-b model_007-70b.Q6_K.gguf
-del model_007-70b.Q6_K.gguf-split-a model_007-70b.Q6_K.gguf-split-b
-
-COPY /B model_007-70b.Q8_0.gguf-split-a + model_007-70b.Q8_0.gguf-split-b model_007-70b.Q8_0.gguf
-del model_007-70b.Q8_0.gguf-split-a model_007-70b.Q8_0.gguf-split-b
-```
-
-</details>
-
 <!-- README_GGUF.md-provided-files end -->
 
 <!-- README_GGUF.md-how-to-run start -->
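As a worked example of the download-and-join flow the removed instructions describe (and which the split entries in the new table still require), here is a sketch for the Q6_K pair. It assumes a `huggingface_hub` release recent enough to provide the `huggingface-cli download` subcommand; the commands are illustrative, not taken from the card itself:

```
# Fetch both Q6_K parts from the repo, then join and clean up
# exactly as the card's Linux/macOS instructions do.
pip3 install huggingface-hub
huggingface-cli download TheBloke/model_007-70B-GGUF \
    model_007-70b.Q6_K.gguf-split-a model_007-70b.Q6_K.gguf-split-b \
    --local-dir . --local-dir-use-symlinks False
cat model_007-70b.Q6_K.gguf-split-* > model_007-70b.Q6_K.gguf && rm model_007-70b.Q6_K.gguf-split-*
```

The same pattern applies to the Q8_0 pair; only the filenames change.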
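To make the RAM note concrete: in llama.cpp, offloading is controlled with `-ngl` (`--n-gpu-layers`). A minimal sketch, assuming a GPU-enabled build of llama.cpp's example `main` binary; the layer count, context size, and prompt are illustrative:

```
# Offload 40 layers to VRAM; RAM usage falls roughly in proportion.
# Lower -ngl (or omit it) if the model does not fit in GPU memory.
./main -m model_007-70b.Q4_K_M.gguf -ngl 40 -c 4096 -p "Once upon a time"
```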