TheBloke committed
Commit 7de0552
Parent: 2accbf9

Update README.md

Files changed (1): README.md (+38, -0)
README.md CHANGED
@@ -128,8 +128,46 @@ Refer to the Provided Files table below to see what files use which methods, and
  | [lemur-70b-chat-v1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Lemur-70B-Chat-v1-GGUF/blob/main/lemur-70b-chat-v1.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
  | [lemur-70b-chat-v1.Q5_K_S.gguf](https://huggingface.co/TheBloke/Lemur-70B-Chat-v1-GGUF/blob/main/lemur-70b-chat-v1.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
  | [lemur-70b-chat-v1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Lemur-70B-Chat-v1-GGUF/blob/main/lemur-70b-chat-v1.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
+ | lemur-70b-chat-v1.Q6_K.gguf | Q6_K | 6 | 56.82 GB | 59.32 GB | very large, extremely low quality loss |
+ | lemur-70b-chat-v1.Q8_0.gguf | Q8_0 | 8 | 73.29 GB | 75.79 GB | very large, extremely low quality loss - not recommended |

  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
+
+ ### Q6_K and Q8_0 files are split and require joining
+
+ **Note:** HF does not support uploading files larger than 50 GB, so I have uploaded the Q6_K and Q8_0 quants as split files.
+
+ <details>
+ <summary>Click for instructions regarding the Q6_K and Q8_0 files</summary>
+
+ ### Q6_K
+ Please download:
+ * `lemur-70b-chat-v1.Q6_K.gguf-split-a`
+ * `lemur-70b-chat-v1.Q6_K.gguf-split-b`
+
+ ### Q8_0
+ Please download:
+ * `lemur-70b-chat-v1.Q8_0.gguf-split-a`
+ * `lemur-70b-chat-v1.Q8_0.gguf-split-b`
+
+ To join the files, run the following:
+
+ Linux and macOS:
+ ```
+ cat lemur-70b-chat-v1.Q6_K.gguf-split-* > lemur-70b-chat-v1.Q6_K.gguf && rm lemur-70b-chat-v1.Q6_K.gguf-split-*
+ cat lemur-70b-chat-v1.Q8_0.gguf-split-* > lemur-70b-chat-v1.Q8_0.gguf && rm lemur-70b-chat-v1.Q8_0.gguf-split-*
+ ```
+ Windows command line:
+ ```
+ COPY /B lemur-70b-chat-v1.Q6_K.gguf-split-a + lemur-70b-chat-v1.Q6_K.gguf-split-b lemur-70b-chat-v1.Q6_K.gguf
+ del lemur-70b-chat-v1.Q6_K.gguf-split-a lemur-70b-chat-v1.Q6_K.gguf-split-b
+
+ COPY /B lemur-70b-chat-v1.Q8_0.gguf-split-a + lemur-70b-chat-v1.Q8_0.gguf-split-b lemur-70b-chat-v1.Q8_0.gguf
+ del lemur-70b-chat-v1.Q8_0.gguf-split-a lemur-70b-chat-v1.Q8_0.gguf-split-b
+ ```
+
+ </details>
+
  <!-- README_GGUF.md-provided-files end -->

  <!-- README_GGUF.md-how-to-run start -->
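
To fetch the split parts listed in the change above, one option is `huggingface-cli`. A minimal sketch, assuming a `huggingface_hub` release whose CLI offers the `download` subcommand with these flags (check `huggingface-cli download --help` on your version):

```
# Sketch: download both Q6_K split parts into the current directory.
huggingface-cli download TheBloke/Lemur-70B-Chat-v1-GGUF \
  lemur-70b-chat-v1.Q6_K.gguf-split-a lemur-70b-chat-v1.Q6_K.gguf-split-b \
  --local-dir . --local-dir-use-symlinks False
```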
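
The `cat ... && rm ...` one-liners in the diff delete the split parts as soon as `cat` exits successfully. A more cautious sketch that verifies byte counts before deleting anything (shown for Q6_K; `wc -c` behaves the same on Linux and macOS):

```
# Join the two Q6_K parts into a single GGUF file.
cat lemur-70b-chat-v1.Q6_K.gguf-split-a lemur-70b-chat-v1.Q6_K.gguf-split-b > lemur-70b-chat-v1.Q6_K.gguf

# The two byte counts printed here should match exactly.
cat lemur-70b-chat-v1.Q6_K.gguf-split-* | wc -c
wc -c < lemur-70b-chat-v1.Q6_K.gguf

# Delete the split parts only after the counts agree.
rm lemur-70b-chat-v1.Q6_K.gguf-split-*
```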
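
On the note about GPU offloading trading RAM for VRAM: in llama.cpp this is controlled by the `-ngl` (`--n-gpu-layers`) flag. A sketch, assuming a GPU-enabled llama.cpp build of roughly this commit's vintage, where the binary is `./main`:

```
# Offload 40 transformer layers to VRAM; the remaining layers stay in system RAM.
# Raise or lower -ngl to match your GPU's memory (a 70B Llama model has 80 layers).
./main -m lemur-70b-chat-v1.Q4_K_M.gguf -ngl 40 -c 4096 -p "Your prompt here"
```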