TheBloke committed
Commit: fffa7ea · 1 Parent(s): 2029565

Update README.md

Files changed (1): README.md +38 -0
README.md CHANGED
@@ -117,8 +117,46 @@ Refer to the Provided Files table below to see what files use which methods, and
  | [model_007-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/model_007-70B-GGUF/blob/main/model_007-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
  | [model_007-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/model_007-70B-GGUF/blob/main/model_007-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
  | [model_007-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/model_007-70B-GGUF/blob/main/model_007-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
+ | model_007-70b.Q6_K.gguf | Q6_K | 6 | 56.82 GB | 59.32 GB | very large, extremely low quality loss |
+ | model_007-70b.Q8_0.gguf | Q8_0 | 8 | 73.29 GB | 75.79 GB | very large, extremely low quality loss - not recommended |

  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
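As an illustration of offloading, here is a minimal sketch using llama.cpp's `main` example binary (an assumption: your binary name and paths may differ, and `-ngl 40` is an illustrative layer count to tune to your available VRAM):

```
# Offload 40 layers to the GPU; the remaining layers stay in system RAM.
./main -m model_007-70b.Q4_K_M.gguf -ngl 40 -c 4096 -p "Once upon a time"
```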
124
+
+ ### Q6_K and Q8_0 files are split and require joining
+
+ **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files.
+
+ <details>
+ <summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
+
+ ### Q6_K
+ Please download:
+ * `model_007-70b.Q6_K.gguf-split-a`
+ * `model_007-70b.Q6_K.gguf-split-b`
+
+ ### Q8_0
+ Please download:
+ * `model_007-70b.Q8_0.gguf-split-a`
+ * `model_007-70b.Q8_0.gguf-split-b`
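One way to fetch the split parts from the command line, using the repo's standard Hugging Face `resolve` URLs (shown for Q6_K; the Q8_0 parts follow the same pattern):

```
# Download both Q6_K split parts from the repo.
wget https://huggingface.co/TheBloke/model_007-70B-GGUF/resolve/main/model_007-70b.Q6_K.gguf-split-a
wget https://huggingface.co/TheBloke/model_007-70B-GGUF/resolve/main/model_007-70b.Q6_K.gguf-split-b
```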
+
+ To join the files, do the following:
+
+ Linux and macOS:
+ ```
+ cat model_007-70b.Q6_K.gguf-split-* > model_007-70b.Q6_K.gguf && rm model_007-70b.Q6_K.gguf-split-*
+ cat model_007-70b.Q8_0.gguf-split-* > model_007-70b.Q8_0.gguf && rm model_007-70b.Q8_0.gguf-split-*
+ ```
+ Windows command line:
+ ```
+ COPY /B model_007-70b.Q6_K.gguf-split-a + model_007-70b.Q6_K.gguf-split-b model_007-70b.Q6_K.gguf
+ del model_007-70b.Q6_K.gguf-split-a model_007-70b.Q6_K.gguf-split-b
+
+ COPY /B model_007-70b.Q8_0.gguf-split-a + model_007-70b.Q8_0.gguf-split-b model_007-70b.Q8_0.gguf
+ del model_007-70b.Q8_0.gguf-split-a model_007-70b.Q8_0.gguf-split-b
+ ```
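Before relying on a joined file, a quick sanity check is to confirm its size roughly matches the Provided Files table above (about 56.82 GB for Q6_K and 73.29 GB for Q8_0):

```
# Joined files should be roughly the sizes listed in the table above.
ls -lh model_007-70b.Q6_K.gguf model_007-70b.Q8_0.gguf
```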
+
+ </details>
+
  <!-- README_GGUF.md-provided-files end -->

  <!-- README_GGUF.md-how-to-run start -->