bartowski committed
Commit 259fb73
1 Parent(s): ce24317

Update README.md

Files changed (1)
  1. README.md +38 -88
README.md CHANGED
@@ -185,119 +185,69 @@ extra_gated_fields:
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
  extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
  extra_gated_button_content: Submit
- widget:
-   - example_title: Winter holidays
-     messages:
-     - role: system
-       content: You are a helpful and honest assistant. Please, respond concisely and truthfully.
-     - role: user
-       content: Can you recommend a good destination for Winter holidays?
-   - example_title: Programming assistant
-     messages:
-     - role: system
-       content: You are a helpful and honest code and programming assistant. Please, respond concisely and truthfully.
-     - role: user
-       content: Write a function that computes the nth fibonacci number.
- inference:
-   parameters:
-     max_new_tokens: 300
-     stop:
-     - <|end_of_text|>
-     - <|eot_id|>
  quantized_by: bartowski
  ---

- ## Llamacpp imatrix Quantizations of Meta-Llama-3-70B-Instruct

- Using <a href="https://github.com/ggerganov/llama.cpp/">llama.cpp</a> release <a href="https://github.com/ggerganov/llama.cpp/releases/tag/b2777">b2777</a> for quantization.

- Original model: https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct

- All quants were made using the imatrix option with the dataset provided by Kalomaze [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)

- ## Prompt format

- ```
- <|begin_of_text|><|start_header_id|>system<|end_header_id|>
- 
- {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
- 
- {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
- 
  ```

- ## Download a file (not the whole branch) from below:
- 
- | Filename | Quant type | File Size | Description |
- | -------- | ---------- | --------- | ----------- |
- | [Meta-Llama-3-70B-Instruct-Q8_0.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/tree/main/Meta-Llama-3-70B-Instruct-Q8_0.gguf) | Q8_0 | 74.97GB | Extremely high quality, generally unneeded but max available quant. |
- | [Meta-Llama-3-70B-Instruct-Q6_K.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/tree/main/Meta-Llama-3-70B-Instruct-Q6_K.gguf) | Q6_K | 57.88GB | Very high quality, near perfect, *recommended*. |
- | [Meta-Llama-3-70B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q5_K_M.gguf) | Q5_K_M | 49.94GB | High quality, *recommended*. |
- | [Meta-Llama-3-70B-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q5_K_S.gguf) | Q5_K_S | 48.65GB | High quality, *recommended*. |
- | [Meta-Llama-3-70B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q4_K_M.gguf) | Q4_K_M | 42.52GB | Good quality, uses about 4.83 bits per weight, *recommended*. |
- | [Meta-Llama-3-70B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q4_K_S.gguf) | Q4_K_S | 40.34GB | Slightly lower quality with more space savings, *recommended*. |
- | [Meta-Llama-3-70B-Instruct-IQ4_NL.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ4_NL.gguf) | IQ4_NL | 40.05GB | Decent quality, slightly smaller than Q4_K_S with similar performance, *recommended*. |
- | [Meta-Llama-3-70B-Instruct-IQ4_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ4_XS.gguf) | IQ4_XS | 37.90GB | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
- | [Meta-Llama-3-70B-Instruct-Q3_K_L.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q3_K_L.gguf) | Q3_K_L | 37.14GB | Lower quality but usable, good for low RAM availability. |
- | [Meta-Llama-3-70B-Instruct-Q3_K_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q3_K_M.gguf) | Q3_K_M | 34.26GB | Even lower quality. |
- | [Meta-Llama-3-70B-Instruct-IQ3_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ3_M.gguf) | IQ3_M | 31.93GB | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
- | [Meta-Llama-3-70B-Instruct-IQ3_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ3_S.gguf) | IQ3_S | 30.91GB | Lower quality, new method with decent performance, recommended over Q3_K_S quant, same size with better performance. |
- | [Meta-Llama-3-70B-Instruct-Q3_K_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q3_K_S.gguf) | Q3_K_S | 30.91GB | Low quality, not recommended. |
- | [Meta-Llama-3-70B-Instruct-IQ3_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ3_XS.gguf) | IQ3_XS | 29.30GB | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
- | [Meta-Llama-3-70B-Instruct-IQ3_XXS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ3_XXS.gguf) | IQ3_XXS | 27.46GB | Lower quality, new method with decent performance, comparable to Q3 quants. |
- | [Meta-Llama-3-70B-Instruct-Q2_K.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-Q2_K.gguf) | Q2_K | 26.37GB | Very low quality but surprisingly usable. |
- | [Meta-Llama-3-70B-Instruct-IQ2_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ2_M.gguf) | IQ2_M | 24.11GB | Very low quality, uses SOTA techniques to also be surprisingly usable. |
- | [Meta-Llama-3-70B-Instruct-IQ2_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ2_S.gguf) | IQ2_S | 22.24GB | Very low quality, uses SOTA techniques to be usable. |
- | [Meta-Llama-3-70B-Instruct-IQ2_XS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ2_XS.gguf) | IQ2_XS | 21.14GB | Very low quality, uses SOTA techniques to be usable. |
- | [Meta-Llama-3-70B-Instruct-IQ2_XXS.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ2_XXS.gguf) | IQ2_XXS | 19.09GB | Lower quality, uses SOTA techniques to be usable. |
- | [Meta-Llama-3-70B-Instruct-IQ1_M.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ1_M.gguf) | IQ1_M | 16.75GB | Extremely low quality, *not* recommended. |
- | [Meta-Llama-3-70B-Instruct-IQ1_S.gguf](https://huggingface.co/bartowski/Meta-Llama-3-70B-Instruct-GGUF/blob/main/Meta-Llama-3-70B-Instruct-IQ1_S.gguf) | IQ1_S | 15.34GB | Extremely low quality, *not* recommended. |

- ## Downloading using huggingface-cli

- First, make sure you have huggingface-cli installed:

- ```
- pip install -U "huggingface_hub[cli]"
- ```

- Then, you can target the specific file you want:

- ```
- huggingface-cli download bartowski/Meta-Llama-3-70B-Instruct-GGUF --include "Meta-Llama-3-70B-Instruct-Q4_K_M.gguf" --local-dir ./ --local-dir-use-symlinks False
- ```

- If the model is bigger than 50GB, it will have been split into multiple files. In order to download them all to a local folder, run:

  ```
- huggingface-cli download bartowski/Meta-Llama-3-70B-Instruct-GGUF --include "Meta-Llama-3-70B-Instruct-Q8_0.gguf/*" --local-dir Meta-Llama-3-70B-Instruct-Q8_0 --local-dir-use-symlinks False
- ```

- You can either specify a new local-dir (Meta-Llama-3-70B-Instruct-Q8_0) or download them all in place (./).

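If you would rather script the download in Python, the `huggingface_hub` library offers equivalents of the CLI commands above. A minimal sketch (not part of the original instructions, paths and quant choices are just examples):

```python
# Minimal sketch: Python equivalents of the huggingface-cli commands above.
from huggingface_hub import hf_hub_download, snapshot_download

# Single file, e.g. the Q4_K_M quant:
hf_hub_download(
    repo_id="bartowski/Meta-Llama-3-70B-Instruct-GGUF",
    filename="Meta-Llama-3-70B-Instruct-Q4_K_M.gguf",
    local_dir="./",
)

# Split quant stored as multiple files under a folder, e.g. Q8_0:
snapshot_download(
    repo_id="bartowski/Meta-Llama-3-70B-Instruct-GGUF",
    allow_patterns=["Meta-Llama-3-70B-Instruct-Q8_0.gguf/*"],
    local_dir="Meta-Llama-3-70B-Instruct-Q8_0",
)
```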
- ## Which file should I choose?

- A great write-up with charts comparing the performance of the various quants is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9)

- The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.

- If you want your model running as FAST as possible, you'll want to fit the whole thing on your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.

- If you want the absolute maximum quality, add both your system RAM and your GPU's VRAM together, then similarly grab a quant with a file size 1-2GB smaller than that total.

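As a purely hypothetical worked example using the file sizes in the table above: a GPU with 24GB of VRAM can hold the IQ2_S file (22.24GB) entirely in VRAM, while 24GB of VRAM plus 64GB of system RAM (roughly 86GB after leaving headroom) is enough to run Q6_K (57.88GB) or even Q8_0 (74.97GB) split between GPU and CPU.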

- Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.

- If you don't want to think too much, grab one of the K-quants. These are in format 'QX_K_X', like Q5_K_M.

- If you want to get more into the weeds, you can check out this extremely useful feature chart:

- [llama.cpp feature matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)

- But basically, if you're aiming for below Q4, and you're running cuBLAS (Nvidia) or rocBLAS (AMD), you should look towards the I-quants. These are in format IQX_X, like IQ3_M. These are newer and offer better performance for their size.

- These I-quants can also be used on CPU and Apple Metal, but will be slower than their K-quant equivalents, so speed vs performance is a tradeoff you'll have to decide.

- The I-quants are *not* compatible with Vulkan, which also supports AMD, so if you have an AMD card double check whether you're using the rocBLAS build or the Vulkan build. At the time of writing, LM Studio has a preview with ROCm support, and other inference engines have specific builds for ROCm.

- Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
 
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
  extra_gated_description: The information you provide will be collected, stored, processed and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
  extra_gated_button_content: Submit
  quantized_by: bartowski
+ lm_studio:
+   param_count: 70b
+   use_case: general
+   release_date: 18-04-2024
+   model_creator: meta-llama
+   prompt_template: Llama 3
+   system_prompt: You are a helpful AI assistant.
+   base_model: llama
+   original_repo: meta-llama/Meta-Llama-3-70B-Instruct
  ---

+ ## 💫 Community Model> Llama 3 70B Instruct by Meta

+ *👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.

+ **Model creator:** [meta-llama](https://huggingface.co/meta-llama)<br>
+ **Original model:** [Meta-Llama-3-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct)<br>
+ **GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b2777](https://github.com/ggerganov/llama.cpp/releases/tag/b2777)<br>

+ ## Model Summary:
+ Llama 3 represents a huge update to the Llama family of models. This model is the 70B parameter instruction-tuned model, with performance reaching and usually exceeding GPT-3.5.<br>
+ This is a massive milestone, as an open model reaches the performance of a closed model over double its size.<br>
+ This model is very happy to follow the given system prompt, so use this to your advantage to get the behavior you desire.<br>
+ Llama 3 excels at general usage situations, including multi-turn conversations, general world knowledge, and coding.<br>

+ This model was made with the BPE tokenizer fixes from llama.cpp.

+ ## Prompt Template:

+ Choose the 'Llama 3' preset in LM Studio.

+ Under the hood, the model will see a prompt that's formatted like so:

  ```
+ <|begin_of_text|><|start_header_id|>system<|end_header_id|>
+ 
+ {system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
+ 
+ {prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
+ 
  ```

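If you are calling the model outside of LM Studio and need to build this string yourself, a minimal sketch might look like the following (the helper function is illustrative, not part of the original card):

```python
# Illustrative only: assembling the Llama 3 chat template shown above by hand.
def build_llama3_prompt(system_prompt: str, user_prompt: str) -> str:
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(build_llama3_prompt("You are a helpful AI assistant.", "Write a haiku about llamas."))
```

Generation should then stop on `<|eot_id|>` or `<|end_of_text|>`, matching the stop tokens listed in the old card's inference settings.
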
+ Use cases and examples to come.

+ ## Technical Details

+ Llama 3 was trained on over 15T tokens from a massively diverse range of subjects and languages, and includes 4 times more code than Llama 2.

+ This model also features Grouped-Query Attention (GQA), so that memory usage scales nicely over large contexts.

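As a rough back-of-the-envelope sketch of why that matters (assuming the published Llama 3 70B shape of 80 layers, 8 KV heads, and a head size of 128, with an fp16 cache; this calculation is illustrative and not from the original card):

```python
# Back-of-the-envelope KV-cache size per token (fp16 = 2 bytes per value).
def kv_cache_bytes_per_token(n_layers=80, n_kv_heads=8, head_dim=128, bytes_per_value=2):
    # Both keys and values are cached, hence the leading factor of 2.
    return 2 * n_layers * n_kv_heads * head_dim * bytes_per_value

with_gqa = kv_cache_bytes_per_token(n_kv_heads=8)      # 327,680 bytes, about 320 KiB per token
without_gqa = kv_cache_bytes_per_token(n_kv_heads=64)  # 2,621,440 bytes, about 2.5 MiB per token
print(f"GQA: {with_gqa / 1024:.0f} KiB/token vs full MHA: {without_gqa / 1024:.0f} KiB/token")
```

The roughly 8x smaller cache is what keeps long contexts from exhausting memory.
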

+ Instruction fine-tuning was performed with a combination of supervised fine-tuning (SFT), rejection sampling, proximal policy optimization (PPO), and direct preference optimization (DPO).

+ Only the IQ1_M and IQ2_XS quants use an importance matrix (imatrix); the rest are made with the standard quantization algorithms.

+ Check out Meta's blog post for more information [here](https://ai.meta.com/blog/meta-llama-3/).

+ ## Special thanks

+ 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.

+ 🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) for his dataset (linked [here](https://github.com/ggerganov/llama.cpp/discussions/5263)) that was used for calculating the imatrix for the IQ1_M and IQ2_XS quants, which makes them usable even at their tiny size!

+ ## Disclaimers
+ LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated such Model. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error-free, viruses-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or use of any other Community Model provided by or through LM Studio.