Model Details
Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8B and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
Model developers Meta
Variations Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
Input Models input text only.
Output Models generate text and code only.
Model Architecture Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
| | Training Data | Params | Context length | GQA | Token count | Knowledge cutoff |
|---|---|---|---|---|---|---|
| Llama 3 | A new mix of publicly available online data. | 8B | 8k | Yes | 15T+ | March, 2023 |
| | | 70B | 8k | Yes | 15T+ | December, 2023 |
Llama 3 family of models. Token counts refer to pretraining data only. Both the 8B and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
Model Release Date April 18, 2024.
Status This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
License A custom commercial license is available at: https://llama.meta.com/llama3/license
Where to send questions or comments about the model Instructions on how to provide feedback or comments on the model can be found in the model README. For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go here.
Intended Use
Intended Use Cases Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
Out-of-scope Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English. Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
Getting Started
You can manually download the model using the huggingface-cli (`pip install huggingface_hub`):

```shell
huggingface-cli download msimou/Meta-Llama-3-8B-Instruct-GGUF Meta-Llama-3-8B-Instruct-Q8_0.gguf --local-dir .
```
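Equivalently, you can fetch the same file from Python with the `huggingface_hub` library. A minimal sketch of the same download as above:

```python
from huggingface_hub import hf_hub_download

# Download the Q8_0 quantization into the current directory and
# return the local file path.
model_path = hf_hub_download(
    repo_id="msimou/Meta-Llama-3-8B-Instruct-GGUF",
    filename="Meta-Llama-3-8B-Instruct-Q8_0.gguf",
    local_dir=".",
)
print(model_path)
```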
To run Llama 3, you can use `llama-cli` (the previous `main`) or `llama-server`. We recommend `llama-server`, as it is simple and compatible with the OpenAI API. For example:
```shell
./llama-server -m ./models/Meta-Llama-3-8B-Instruct-Q8_0.gguf --port 8000 --host 0.0.0.0 -ngl 28 -fa
```

(Note: `-ngl 28` offloads 28 layers to the GPU, and `-fa` enables flash attention.)
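If you don't need a standalone server, the same GGUF file can also be loaded in-process via the llama-cpp-python bindings rather than the llama.cpp binaries. A minimal sketch, assuming `pip install llama-cpp-python` and the model path used above:

```python
from llama_cpp import Llama

# Load the quantized model; n_gpu_layers mirrors -ngl 28 above and
# n_ctx matches Llama 3's 8k context window.
llm = Llama(
    model_path="./models/Meta-Llama-3-8B-Instruct-Q8_0.gguf",
    n_gpu_layers=28,
    n_ctx=8192,
)

# create_chat_completion applies the model's chat template automatically.
result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "What is your name?"},
    ],
)
print(result["choices"][0]["message"]["content"])
```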
With the server running, you can access the service with:
```shell
curl --location 'http://localhost:8000/v1/chat/completions' \
--header 'Content-Type: application/json' \
--data '{
    "messages": [
        {
            "role": "system",
            "content": "You are a helpful assistant"
        },
        {
            "role": "user",
            "content": "What is your name?"
        }
    ]
}'
```
The service is compatible with the OpenAI API format.
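Because the endpoint follows the OpenAI schema, any OpenAI-compatible client should work as well. A minimal sketch with the official `openai` Python package, assuming the server above is listening on port 8000; the API key is a placeholder, since llama-server does not check it by default:

```python
from openai import OpenAI

# Point the client at the local llama-server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-no-key-required")

response = client.chat.completions.create(
    # llama-server serves the single model it was launched with,
    # so this name is informational.
    model="Meta-Llama-3-8B-Instruct-Q8_0",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "What is your name?"},
    ],
)
print(response.choices[0].message.content)
```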
Training Data
Overview Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
Data Freshness The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
Benchmarks
In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see here.
Base pretrained models
| Category | Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
|---|---|---|---|---|---|---|
| General | MMLU (5-shot) | 66.6 | 45.7 | 53.8 | 79.5 | 69.7 |
| | AGIEval English (3-5 shot) | 45.9 | 28.8 | 38.7 | 63.0 | 54.8 |
| | CommonSenseQA (7-shot) | 72.6 | 57.6 | 67.6 | 83.8 | 78.7 |
| | Winogrande (5-shot) | 76.1 | 73.3 | 75.4 | 83.1 | 81.8 |
| | BIG-Bench Hard (3-shot, CoT) | 61.1 | 38.1 | 47.0 | 81.3 | 65.7 |
| | ARC-Challenge (25-shot) | 78.6 | 53.7 | 67.6 | 93.0 | 85.3 |
| Knowledge reasoning | TriviaQA-Wiki (5-shot) | 78.5 | 72.1 | 79.6 | 89.7 | 87.5 |
| Reading comprehension | SQuAD (1-shot) | 76.4 | 72.2 | 72.1 | 85.6 | 82.6 |
| | QuAC (1-shot, F1) | 44.4 | 39.6 | 44.9 | 51.1 | 49.4 |
| | BoolQ (0-shot) | 75.7 | 65.5 | 66.9 | 79.0 | 73.1 |
| | DROP (3-shot, F1) | 58.4 | 37.9 | 49.8 | 79.7 | 70.2 |
Instruction tuned models
| Benchmark | Llama 3 8B | Llama 2 7B | Llama 2 13B | Llama 3 70B | Llama 2 70B |
|---|---|---|---|---|---|
| MMLU (5-shot) | 68.4 | 34.1 | 47.8 | 82.0 | 52.9 |
| GPQA (0-shot) | 34.2 | 21.7 | 22.3 | 39.5 | 21.0 |
| HumanEval (0-shot) | 62.2 | 7.9 | 14.0 | 81.7 | 25.6 |
| GSM-8K (8-shot, CoT) | 79.6 | 25.7 | 77.4 | 93.0 | 57.5 |
| MATH (4-shot, CoT) | 30.0 | 3.8 | 6.7 | 50.4 | 11.6 |