TheBloke's LLM work is generously supported by a grant from Andreessen Horowitz (a16z)
This repo contains GGML format model files for Meta's Llama 2 13B.
The GGML format has now been superseded by GGUF. As of August 21st 2023, llama.cpp no longer supports GGML models. Third-party clients and libraries are expected to continue supporting it for a time, but many may also drop support.
Please use the GGUF models instead.
GGML files are for CPU + GPU inference using llama.cpp and libraries and UIs which support this format, such as:
- text-generation-webui, the most popular web UI. Supports NVIDIA CUDA GPU acceleration.
- KoboldCpp, a powerful GGML web UI with GPU acceleration on all platforms (CUDA and OpenCL). Especially good for storytelling.
- LM Studio, a fully featured local GUI with GPU acceleration on both Windows (NVIDIA and AMD) and macOS.
- LoLLMS Web UI, a great web UI with CUDA GPU acceleration via the ctransformers backend.
- ctransformers, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
- llama-cpp-python, a Python library with GPU accel, LangChain support, and OpenAI-compatible API server (see the sketch just below).
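As a quick illustration of the llama-cpp-python route, the sketch below loads one of the GGML files from this repo and runs a single completion. This is a minimal sketch, assuming a GGML-era llama-cpp-python release (0.1.78 or earlier is commonly cited as the last version with GGML support) and that the file has already been downloaded; the parameter values simply mirror the example CLI flags discussed further down.

```python
# Minimal sketch: run a GGML model with llama-cpp-python.
# Assumes a GGML-era llama-cpp-python release and that the model file
# is present in the working directory.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-13b.ggmlv3.q4_K_M.bin",  # any file from the table below
    n_ctx=2048,       # context length, matches -c 2048 in the CLI example below
    n_threads=10,     # matches -t 10
    n_gpu_layers=32,  # matches -ngl 32; set to 0 if you have no GPU acceleration
)

output = llm("Write a story about llamas", max_tokens=256, temperature=0.7)
print(output["choices"][0]["text"])
```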
- GPTQ models for GPU inference, with multiple quantisation parameter options.
- 2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference
- 2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)
- Meta's original unquantised fp16 model in PyTorch format, for GPU inference and for further conversions
These quantised GGML files are compatible with llama.cpp between June 6th, 2023 (commit 2d43387) and August 21st, 2023.
For support with latest llama.cpp, please use GGUF files instead.
The final llama.cpp commit with support for GGML was: dadbed99e65252d79f81101a392d0d6497b86caa
As of August 23rd 2023, they are still compatible with all UIs, libraries and utilities which use GGML. This may change in the future.
The new methods available are:
- GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
- GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
- GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
- GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
- GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
- GGML_TYPE_Q8_K - "type-0" 8-bit quantization. Only used for quantizing intermediate results. It differs from the existing Q8_0 in that the block size is 256. All 2-6 bit dot products are implemented for this quantization type.
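As a sanity check on the bits-per-weight figures in the list above, here is the arithmetic for GGML_TYPE_Q4_K. It assumes, per the ggml k-quant layout, that each 256-weight super-block also carries one fp16 scale and one fp16 min:

```python
# Back-of-the-envelope bpw check for GGML_TYPE_Q4_K.
# Assumes the ggml k-quant layout: 256-weight super-blocks
# (8 blocks x 32 weights) plus one fp16 super-block scale and
# one fp16 super-block min.
weights_per_superblock = 8 * 32            # 256 weights
quant_bits = weights_per_superblock * 4    # 4-bit weights -> 1024 bits
scale_bits = 8 * 6                         # 8 block scales, 6 bits each
min_bits   = 8 * 6                         # 8 block mins, 6 bits each
fp16_bits  = 2 * 16                        # super-block scale + min in fp16
total_bits = quant_bits + scale_bits + min_bits + fp16_bits
print(total_bits / weights_per_superblock) # 4.5 bpw, matching the list above
```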
Refer to the Provided Files table below to see what files use which methods, and how.
|Name|Quant method|Bits|Size|Max RAM required|Use case|
|---|---|---|---|---|---|
|llama-2-13b.ggmlv3.q2_K.bin|q2_K|2|5.51 GB|8.01 GB|New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv and feed_forward.w2 tensors, GGML_TYPE_Q2_K for the other tensors.|
|llama-2-13b.ggmlv3.q3_K_S.bin|q3_K_S|3|5.66 GB|8.16 GB|New k-quant method. Uses GGML_TYPE_Q3_K for all tensors.|
|llama-2-13b.ggmlv3.q3_K_M.bin|q3_K_M|3|6.31 GB|8.81 GB|New k-quant method. Uses GGML_TYPE_Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K.|
|llama-2-13b.ggmlv3.q3_K_L.bin|q3_K_L|3|6.93 GB|9.43 GB|New k-quant method. Uses GGML_TYPE_Q5_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else GGML_TYPE_Q3_K.|
|llama-2-13b.ggmlv3.q4_0.bin|q4_0|4|7.32 GB|9.82 GB|Original quant method, 4-bit.|
|llama-2-13b.ggmlv3.q4_K_S.bin|q4_K_S|4|7.37 GB|9.87 GB|New k-quant method. Uses GGML_TYPE_Q4_K for all tensors.|
|llama-2-13b.ggmlv3.q4_K_M.bin|q4_K_M|4|7.87 GB|10.37 GB|New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q4_K.|
|llama-2-13b.ggmlv3.q4_1.bin|q4_1|4|8.14 GB|10.64 GB|Original quant method, 4-bit. Higher accuracy than q4_0 but not as high as q5_0. However, it has quicker inference than the q5 models.|
|llama-2-13b.ggmlv3.q5_0.bin|q5_0|5|8.95 GB|11.45 GB|Original quant method, 5-bit. Higher accuracy, higher resource usage and slower inference.|
|llama-2-13b.ggmlv3.q5_K_S.bin|q5_K_S|5|8.97 GB|11.47 GB|New k-quant method. Uses GGML_TYPE_Q5_K for all tensors.|
|llama-2-13b.ggmlv3.q5_K_M.bin|q5_K_M|5|9.23 GB|11.73 GB|New k-quant method. Uses GGML_TYPE_Q6_K for half of the attention.wv and feed_forward.w2 tensors, else GGML_TYPE_Q5_K.|
|llama-2-13b.ggmlv3.q5_1.bin|q5_1|5|9.76 GB|12.26 GB|Original quant method, 5-bit. Even higher accuracy and resource usage, and slower inference.|
|llama-2-13b.ggmlv3.q6_K.bin|q6_K|6|10.68 GB|13.18 GB|New k-quant method. Uses GGML_TYPE_Q6_K for all tensors (6-bit quantization).|
|llama-2-13b.ggmlv3.q8_0.bin|q8_0|8|13.83 GB|16.33 GB|Original quant method, 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.|
Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
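To fetch a single file rather than cloning the whole repo, something like the following should work. The repo id below is an assumption (verify the actual repository name on the Hugging Face Hub); pick a file whose size plus roughly 2.5 GB of overhead, the pattern in the table above, fits your available RAM.

```python
# Sketch: download one GGML file with huggingface_hub.
# The repo_id is an assumption; verify it on the Hub before use.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/Llama-2-13B-GGML",       # assumed repo id
    filename="llama-2-13b.ggmlv3.q4_K_M.bin",  # any file from the table above
)
print(path)  # local cache path of the downloaded file
```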
Make sure you are using llama.cpp from commit dadbed99e65252d79f81101a392d0d6497b86caa or earlier. For compatibility with the latest llama.cpp, please use GGUF files instead.
`./main -t 10 -ngl 32 -m llama-2-13b.ggmlv3.q4_K_M.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "Write a story about llamas"`
- Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`.
- Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
- Change `-c 2048` to the desired sequence length for this model. For example, use `-c 4096` for a Llama 2 model. For models that use RoPE, add `--rope-freq-base 10000 --rope-freq-scale 0.5` for doubled context, or `--rope-freq-base 10000 --rope-freq-scale 0.25` for 4x context.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`.
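As an alternative to the `./main` CLI shown above, llama-cpp-python (listed earlier) bundles an OpenAI-compatible HTTP server. A minimal sketch, assuming the server's default host and port (127.0.0.1:8000) and the same example model file:

```python
# Sketch: query llama-cpp-python's OpenAI-compatible server.
# Start the server first (assumed defaults: host 127.0.0.1, port 8000):
#   python3 -m llama_cpp.server --model llama-2-13b.ggmlv3.q4_K_M.bin
import json
import urllib.request

req = urllib.request.Request(
    "http://127.0.0.1:8000/v1/completions",
    data=json.dumps({
        "prompt": "Write a story about llamas",
        "max_tokens": 256,
        "temperature": 0.7,
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["text"])
```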
For other parameters and how to use them, please refer to the llama.cpp documentation.
Further instructions here: text-generation-webui/docs/llama.cpp.md.
For further support, and discussions on these models and AI in general, join us at:
Thanks to the chirper.ai team!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine-tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
Special thanks to: Aemon Algiz.
Patreon special mentions: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
Thank you to all my generous patrons and donators!
And thank you again to a16z for their generous grant.
Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 13B pretrained model, converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the website and accept our License before requesting access here.
Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
Model Developers Meta
Variations Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
Input Models input text only.
Output Models generate text only.
Model Architecture Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
| |Training Data|Params|Content Length|GQA|Tokens|LR|
|---|---|---|---|---|---|---|
|Llama 2|A new mix of publicly available online data|7B|4k|✗|2.0T|3.0 × 10⁻⁴|
|Llama 2|A new mix of publicly available online data|13B|4k|✗|2.0T|3.0 × 10⁻⁴|
|Llama 2|A new mix of publicly available online data|70B|4k|✔|2.0T|1.5 × 10⁻⁴|
Llama 2 family of models. Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The larger 70B model uses Grouped-Query Attention (GQA) for improved inference scalability.
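To see why GQA improves inference scalability, compare the KV-cache footprint with and without it. The shapes below are the publicly reported Llama 2 70B values (80 layers, 64 attention heads, 8 KV heads, head dimension 128) and are meant as an illustration only:

```python
# Illustrative KV-cache size with and without Grouped-Query Attention.
# Assumes Llama 2 70B shapes (80 layers, 64 heads, 8 KV heads,
# head dim 128), fp16 cache entries, and a 4k-token context.
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem  # 2x: keys and values

mha = kv_cache_bytes(80, 64, 128, 4096)  # every head keeps its own K/V
gqa = kv_cache_bytes(80, 8, 128, 4096)   # 8 KV heads shared across query heads
print(f"MHA: {mha / 2**30:.2f} GiB, GQA: {gqa / 2**30:.2f} GiB")  # 8x smaller cache
```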
Model Dates Llama 2 was trained between January 2023 and July 2023.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License A custom commercial license is available at: https://ai.meta.com/resources/models-and-libraries/llama-downloads/
Research Paper "Llama 2: Open Foundation and Fine-Tuned Chat Models"
Intended Use Cases Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details.
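For reference, the widely documented Llama-2-Chat template combines those tags as in the sketch below. This applies to the chat variants, not to this base model, and the system/user strings are just example values; BOS/EOS tokens are normally added by the tokenizer or loader.

```python
# Sketch of the Llama-2-Chat prompt format (chat variants only).
# BOS/EOS tokens are assumed to be added by the tokenizer/loader;
# the bracketed tags below are literal text.
system_prompt = "You are a helpful assistant."  # example system message
user_message = "Write a story about llamas."    # example user turn

prompt = (
    f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    f"{user_message.strip()} [/INST]"           # strip() per the note above
)
print(prompt)
```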
Out-of-scope Uses Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
Training Factors We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
Carbon Footprint Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
| |Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO2eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
CO2 emissions during pretraining. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
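The rows above are consistent with a single carbon-intensity factor. As a quick check derived purely from the table's own numbers (taking the 13B row):

```python
# Implied carbon intensity from the Llama 2 13B row above.
gpu_hours = 368640   # GPU hours for 13B
power_kw = 0.400     # 400 W peak per GPU
emitted_t = 62.44    # tCO2eq for 13B

energy_kwh = gpu_hours * power_kw        # 147,456 kWh
print(emitted_t * 1000 / energy_kwh)     # ~0.42 kg CO2eq per kWh
```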
Overview Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
Overall performance on grouped academic benchmarks. Code: We report the average pass@1 scores of our models on HumanEval and MBPP. Commonsense Reasoning: We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. World Knowledge: We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. Reading Comprehension: For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. MATH: We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.
Evaluation of pretrained LLMs on automatic safety benchmarks. For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
Evaluation of fine-tuned LLMs on different safety datasets. Same metric definitions as above.
Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
Please see the Responsible Use Guide available at https://ai.meta.com/llama/responsible-use-guide/
Please report any software “bug,” or other problems with the models through one of the following means:
- Reporting issues with the model: github.com/facebookresearch/llama
- Reporting problematic content generated by the model: developers.facebook.com/llama_output_feedback
- Reporting bugs and security concerns: facebook.com/whitehat/info