TheBloke committed on
Commit
8d2cb50
1 Parent(s): 7fb7be5

Update README.md

Files changed (1): README.md +38 -0
README.md CHANGED
@@ -116,8 +116,46 @@ Refer to the Provided Files table below to see what files use which methods, and
  | [llama-2-70b-orca-200k.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-70B-Orca-200k-GGUF/blob/main/llama-2-70b-orca-200k.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB | 43.92 GB | medium, balanced quality - recommended |
  | [llama-2-70b-orca-200k.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-70B-Orca-200k-GGUF/blob/main/llama-2-70b-orca-200k.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB | 49.96 GB | large, low quality loss - recommended |
  | [llama-2-70b-orca-200k.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-70B-Orca-200k-GGUF/blob/main/llama-2-70b-orca-200k.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB | 51.25 GB | large, very low quality loss - recommended |
+ | llama-2-70b-orca-200k.Q6_K.gguf | Q6_K | 6 | 56.82 GB | 59.32 GB | very large, extremely low quality loss |
+ | llama-2-70b-orca-200k.Q8_0.gguf | Q8_0 | 8 | 73.29 GB | 75.79 GB | very large, extremely low quality loss - not recommended |

  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
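For context on the note above: with a GPU-enabled llama.cpp build, the `-ngl`/`--n-gpu-layers` option controls how many layers are offloaded. A minimal sketch, not part of this commit (the layer count, context size and prompt are illustrative):

```
# Offload 40 layers to the GPU: RAM use drops, VRAM use rises accordingly.
# Requires a llama.cpp build compiled with GPU support (e.g. cuBLAS).
./main -m llama-2-70b-orca-200k.Q4_K_M.gguf -ngl 40 -c 4096 -p "Hello"
```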
+
+ ### Q6_K and Q8_0 files are split and require joining
+
+ **Note:** HF does not support uploading files larger than 50GB. Therefore I have uploaded the Q6_K and Q8_0 files as split files; a sample download command follows the file lists below.
+
+ <details>
+ <summary>Click for instructions regarding Q6_K and Q8_0 files</summary>
+
+ ### Q6_K
+ Please download:
+ * `llama-2-70b-orca-200k.Q6_K.gguf-split-a`
+ * `llama-2-70b-orca-200k.Q6_K.gguf-split-b`
+
+ ### Q8_0
+ Please download:
+ * `llama-2-70b-orca-200k.Q8_0.gguf-split-a`
+ * `llama-2-70b-orca-200k.Q8_0.gguf-split-b`
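One way to fetch the splits from the command line is `huggingface-cli`, assuming a recent `huggingface_hub` install; this sketch is not part of the original commit, and the repo id is taken from the table links above:

```
pip install -U huggingface_hub
# Download both Q6_K splits into the current directory
huggingface-cli download TheBloke/Llama-2-70B-Orca-200k-GGUF \
  llama-2-70b-orca-200k.Q6_K.gguf-split-a \
  llama-2-70b-orca-200k.Q6_K.gguf-split-b \
  --local-dir .
```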
+
+ To join the files, do the following:
+
+ Linux and macOS:
+ ```
+ cat llama-2-70b-orca-200k.Q6_K.gguf-split-* > llama-2-70b-orca-200k.Q6_K.gguf && rm llama-2-70b-orca-200k.Q6_K.gguf-split-*
+ cat llama-2-70b-orca-200k.Q8_0.gguf-split-* > llama-2-70b-orca-200k.Q8_0.gguf && rm llama-2-70b-orca-200k.Q8_0.gguf-split-*
+ ```
+ Windows command line:
+ ```
+ COPY /B llama-2-70b-orca-200k.Q6_K.gguf-split-a + llama-2-70b-orca-200k.Q6_K.gguf-split-b llama-2-70b-orca-200k.Q6_K.gguf
+ del llama-2-70b-orca-200k.Q6_K.gguf-split-a llama-2-70b-orca-200k.Q6_K.gguf-split-b
+
+ COPY /B llama-2-70b-orca-200k.Q8_0.gguf-split-a + llama-2-70b-orca-200k.Q8_0.gguf-split-b llama-2-70b-orca-200k.Q8_0.gguf
+ del llama-2-70b-orca-200k.Q8_0.gguf-split-a llama-2-70b-orca-200k.Q8_0.gguf-split-b
+ ```
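Optionally, before joining (the Linux command above deletes the splits once the `cat` succeeds), the downloads can be verified against the SHA256 shown on each split's Hugging Face file page. This check is a suggestion, not part of the original commit; on macOS, use `shasum -a 256` instead:

```
# Compare the output against the SHA256 listed on each split's file page
sha256sum llama-2-70b-orca-200k.Q6_K.gguf-split-a llama-2-70b-orca-200k.Q6_K.gguf-split-b
```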
+
+ </details>
+
  <!-- README_GGUF.md-provided-files end -->

  <!-- README_GGUF.md-how-to-run start -->