MaziyarPanahi committed
Commit 164dcaa
1 Parent(s): ec0e784

Update README.md (#2)

- Update README.md (c3835f815de2ce0ad0efb039ae3d8326cb6d08d8)

Files changed (1):
  1. README.md +559 -161
README.md CHANGED
@@ -1,5 +1,13 @@
 ---
 tags:
 - quantized
 - 2-bit
 - 3-bit
@@ -7,224 +15,614 @@ tags:
 - 5-bit
 - 6-bit
 - 8-bit
 - GGUF
- - transformers
- - safetensors
- - llama
- - text-generation
- - facebook
- - meta
- - pytorch
- - llama-3
- - conversational
- - en
- - license:other
- - autotrain_compatible
- - endpoints_compatible
- - has_space
- - text-generation-inference
- - region:us
- - text-generation
- model_name: Meta-Llama-3-8B-Instruct-GGUF
- base_model: meta-llama/Meta-Llama-3-8B-Instruct
 inference: false
- model_creator: meta-llama
- pipeline_tag: text-generation
 quantized_by: MaziyarPanahi
 ---
- # [MaziyarPanahi/Meta-Llama-3-8B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Meta-Llama-3-8B-Instruct-GGUF)
- - Model creator: [meta-llama](https://huggingface.co/meta-llama)
- - Original model: [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)
-
- ## Description
- [MaziyarPanahi/Meta-Llama-3-8B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Meta-Llama-3-8B-Instruct-GGUF) contains GGUF format model files for [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
-
- ## How to use
- Thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing an amazing README on how to use GGUF models:
-
- ### About GGUF
-
- GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
-
- Here is an incomplete list of clients and libraries that are known to support GGUF:
-
- * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
- * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
- * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
- * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux and macOS with full GPU acceleration.
- * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
- * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
- * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
- * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
- * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
- * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
-
- ### Explanation of quantisation methods
-
- <details>
- <summary>Click to see details</summary>
-
- The new methods available are:
-
- * GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw).
- * GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
- * GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw (see the worked example after this list).
- * GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw.
- * GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw.
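To see where these fractional bpw figures come from, here is the arithmetic for GGML_TYPE_Q4_K: a super-block holds 8 blocks × 32 weights = 256 weights at 4 bits each, plus the 6-bit block scales and mins; assuming, as in the ggml k-quant layout, one additional 16-bit scale and 16-bit min per super-block:

```latex
% Worked bits-per-weight arithmetic for GGML_TYPE_Q4_K. The single 16-bit
% super-block scale and min are an assumption about the k-quant storage layout.
\[
\text{bpw} = \frac{256 \times 4 + 8 \times 6 + 8 \times 6 + 2 \times 16}{256}
           = \frac{1024 + 96 + 32}{256} = 4.5
\]
```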
-
- ## How to download GGUF files
-
- **Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
-
- The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
-
- * LM Studio
- * LoLLMS Web UI
- * Faraday.dev
-
- ### In `text-generation-webui`
-
- Under Download Model, you can enter the model repo: [MaziyarPanahi/Meta-Llama-3-8B-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Meta-Llama-3-8B-Instruct-GGUF) and below it, a specific filename to download, such as: Meta-Llama-3-8B-Instruct.Q4_K_M.gguf.
-
- Then click Download.
-
- ### On the command line, including multiple files at once
-
- I recommend using the `huggingface-hub` Python library:
-
- ```shell
- pip3 install huggingface-hub
- ```
-
- Then you can download any individual model file to the current directory, at high speed, with a command like this:
-
- ```shell
- huggingface-cli download MaziyarPanahi/Meta-Llama-3-8B-Instruct-GGUF Meta-Llama-3-8B-Instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
- ```
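If you would rather script the download, a minimal sketch using the `huggingface_hub` Python API (same repo id and filename as the CLI command above):

```python
# A sketch of the same single-file download via the huggingface_hub Python API.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="MaziyarPanahi/Meta-Llama-3-8B-Instruct-GGUF",
    filename="Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",
    local_dir=".",
)
print(local_path)  # filesystem path of the downloaded .gguf file
```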
- </details>
- <details>
- <summary>More advanced huggingface-cli download usage (click to read)</summary>
-
- You can also download multiple files at once with a pattern:
-
- ```shell
- huggingface-cli download MaziyarPanahi/Meta-Llama-3-8B-Instruct-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
- ```
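The Python-API counterpart of this pattern filter is `snapshot_download` with `allow_patterns`; a minimal sketch:

```python
# A sketch of the pattern-based download via the huggingface_hub Python API;
# allow_patterns mirrors the CLI --include filter above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="MaziyarPanahi/Meta-Llama-3-8B-Instruct-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir=".",
)
```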
-
- For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
-
- To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
-
- ```shell
- pip3 install hf_transfer
- ```
-
- And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
-
- ```shell
- HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MaziyarPanahi/Meta-Llama-3-8B-Instruct-GGUF Meta-Llama-3-8B-Instruct.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
 ```
-
- Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
- </details>
-
- ## Example `llama.cpp` command
-
- Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
-
- ```shell
- ./main -ngl 35 -m Meta-Llama-3-8B-Instruct.Q4_K_M.gguf --color -c 32768 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system
- {system_message}<|im_end|>
- <|im_start|>user
- {prompt}<|im_end|>
- <|im_start|>assistant"
 ```
-
- Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
-
- Change `-c 32768` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.
-
- If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
-
- For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
-
- ## How to run in `text-generation-webui`
-
- Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20-%20Model%20Tab.md#llamacpp).
-
- ## How to run from Python code
-
- You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.
-
- ### How to load this model in Python code, using llama-cpp-python
-
- For full documentation, please see: [llama-cpp-python docs](https://github.com/abetlen/llama-cpp-python/).
-
- #### First install the package
-
- Run one of the following commands, according to your system:
-
- ```shell
- # Base llama-cpp-python with no GPU acceleration
- pip install llama-cpp-python
- # With NVidia CUDA acceleration
- CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
- # Or with OpenBLAS acceleration
- CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
- # Or with CLBLast acceleration
- CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
- # Or with AMD ROCm GPU acceleration (Linux only)
- CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
- # Or with Metal GPU acceleration for macOS systems only
- CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
-
- # On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
- $env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
- pip install llama-cpp-python
- ```
-
- #### Simple llama-cpp-python example code
-
- ```python
- from llama_cpp import Llama
-
- # Set n_gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
- llm = Llama(
-     model_path="./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # Download the model file first
-     n_ctx=32768,     # The max sequence length to use - note that longer sequence lengths require much more resources
-     n_threads=8,     # The number of CPU threads to use, tailor to your system and the resulting performance
-     n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
- )
-
- # Simple inference example (the multi-line prompt is triple-quoted so this is valid Python)
- output = llm(
-     """<|im_start|>system
- {system_message}<|im_end|>
- <|im_start|>user
- {prompt}<|im_end|>
- <|im_start|>assistant""",  # Prompt
-     max_tokens=512,  # Generate up to 512 tokens
-     stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
-     echo=True        # Whether to echo the prompt
- )
-
- # Chat Completion API
-
- llm = Llama(model_path="./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
- llm.create_chat_completion(
-     messages = [
-         {"role": "system", "content": "You are a story writing assistant."},
-         {"role": "user", "content": "Write a story about llamas."}
-     ]
- )
- ```
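Note that recent llama-cpp-python releases also register a `llama-3` chat format, which matches this model's prompt template better than `llama-2`; a minimal sketch (verify the format name against your installed version):

```python
# A sketch of the Chat Completion call above using the llama-3 chat format,
# available in recent llama-cpp-python releases.
from llama_cpp import Llama

llm = Llama(model_path="./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf", chat_format="llama-3")
result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a story writing assistant."},
        {"role": "user", "content": "Write a story about llamas."},
    ]
)
print(result["choices"][0]["message"]["content"])
```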
-
- ## How to use with LangChain
-
- Here are guides on using llama-cpp-python and ctransformers with LangChain:
-
- * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
- * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
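As a concrete starting point for the first guide, a minimal LangChain sketch, assuming the `langchain-community` package and a locally downloaded quant (the file path is illustrative):

```python
# A sketch of loading a local GGUF quant through LangChain's LlamaCpp wrapper.
# Assumes `pip install langchain-community llama-cpp-python`; the model path
# must point at a file you have already downloaded.
from langchain_community.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",
    n_ctx=8192,       # Llama 3 context window
    n_gpu_layers=35,  # set to 0 if you have no GPU acceleration
)
print(llm.invoke("Name three practical uses for llamas."))
```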
 ---
+ language:
+ - en
+ pipeline_tag: text-generation
 tags:
+ - facebook
+ - meta
+ - pytorch
+ - llama
+ - llama-3
 - quantized
 - 2-bit
 - 3-bit
 - 4-bit
 - 5-bit
 - 6-bit
 - 8-bit
+ - 16-bit
 - GGUF
 inference: false
+ model_creator: MaziyarPanahi
+ model_name: Meta-Llama-3-8B-Instruct-GGUF
 quantized_by: MaziyarPanahi
+ license_name: llama3
 ---
+
+ # MaziyarPanahi/Meta-Llama-3-8B-Instruct-GGUF
+
+ The GGUF and quantized models here are based on the [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) model.
+
+ ## How to download
+ You can download only the quants you need instead of cloning the entire repository, as follows:
+
+ ```
+ huggingface-cli download MaziyarPanahi/Meta-Llama-3-8B-Instruct-GGUF --local-dir . --include '*Q2_K*gguf'
+ ```
+
+ ## Load GGUF models
+
+ ```sh
+ llama.cpp/main -m Meta-Llama-3-8B-Instruct.Q2_K.gguf -p "Building a website can be done in 10 simple steps:\nStep 1:" -n 1024 -e
+ ```
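The same quant can also be loaded from Python; a minimal llama-cpp-python sketch, assuming the Q2_K file above has been downloaded to the current directory:

```python
# A sketch of running the Q2_K quant with llama-cpp-python instead of the
# llama.cpp CLI; assumes the file was downloaded as shown above.
from llama_cpp import Llama

llm = Llama(model_path="./Meta-Llama-3-8B-Instruct.Q2_K.gguf", n_ctx=8192)
out = llm("Building a website can be done in 10 simple steps:\nStep 1:", max_tokens=256)
print(out["choices"][0]["text"])
```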
+
+ Original README
+
+ ---
+
+ ## Model Details
+
+ Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
+
+ **Model developers** Meta
+
+ **Variations** Llama 3 comes in two sizes — 8B and 70B parameters — in pre-trained and instruction tuned variants.
+
+ **Input** Models input text only.
+
+ **Output** Models generate text and code only.
+
+ **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
+
+ <table>
+   <tr>
+     <td></td>
+     <td><strong>Training Data</strong></td>
+     <td><strong>Params</strong></td>
+     <td><strong>Context length</strong></td>
+     <td><strong>GQA</strong></td>
+     <td><strong>Token count</strong></td>
+     <td><strong>Knowledge cutoff</strong></td>
+   </tr>
+   <tr>
+     <td rowspan="2">Llama 3</td>
+     <td rowspan="2">A new mix of publicly available online data.</td>
+     <td>8B</td>
+     <td>8k</td>
+     <td>Yes</td>
+     <td rowspan="2">15T+</td>
+     <td>March, 2023</td>
+   </tr>
+   <tr>
+     <td>70B</td>
+     <td>8k</td>
+     <td>Yes</td>
+     <td>December, 2023</td>
+   </tr>
+ </table>
+
+ **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
+
+ **Model Release Date** April 18, 2024.
+
+ **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
+
+ **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
+
+ **Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
+
+ ## Intended Use
+
+ **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
+
+ **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
+
+ **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
+
+ ## How to use
+
+ This repository contains two versions of Meta-Llama-3-8B-Instruct, for use with transformers and with the original `llama3` codebase.
+
+ ### Use with transformers
+
+ See the snippet below for usage with Transformers:
+
+ ```python
+ import transformers
+ import torch
+
+ model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
+
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model_id,
+     model_kwargs={"torch_dtype": torch.bfloat16},
+     device="cuda",
+ )
+
+ messages = [
+     {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
+     {"role": "user", "content": "Who are you?"},
+ ]
+
+ prompt = pipeline.tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+
+ # Use pipeline.tokenizer here; a bare `tokenizer` is not defined in this snippet.
+ terminators = [
+     pipeline.tokenizer.eos_token_id,
+     pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
+ ]
+
+ outputs = pipeline(
+     prompt,
+     max_new_tokens=256,
+     eos_token_id=terminators,
+     do_sample=True,
+     temperature=0.6,
+     top_p=0.9,
+ )
+ print(outputs[0]["generated_text"][len(prompt):])
 ```
+
+ ### Use with `llama3`
+
+ Please follow the instructions in the [repository](https://github.com/meta-llama/llama3).
+
+ To download the original checkpoints, see the example command below leveraging `huggingface-cli`:
+
+ ```
+ huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
 ```
+
+ For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
+
+ ## Hardware and Software
+
+ **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
+
+ **Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta's sustainability program.
+
+ <table>
+   <tr>
+     <td></td>
+     <td><strong>Time (GPU hours)</strong></td>
+     <td><strong>Power Consumption (W)</strong></td>
+     <td><strong>Carbon Emitted (tCO2eq)</strong></td>
+   </tr>
+   <tr>
+     <td>Llama 3 8B</td>
+     <td>1.3M</td>
+     <td>700</td>
+     <td>390</td>
+   </tr>
+   <tr>
+     <td>Llama 3 70B</td>
+     <td>6.4M</td>
+     <td>700</td>
+     <td>1900</td>
+   </tr>
+   <tr>
+     <td>Total</td>
+     <td>7.7M</td>
+     <td></td>
+     <td>2290</td>
+   </tr>
+ </table>
+
+ **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
+
+ ## Training Data
+
+ **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
+
+ **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
+
+ ## Benchmarks
+
+ In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
+
+ ### Base pretrained models
+
+ <table>
+   <tr>
+     <td><strong>Category</strong></td>
+     <td><strong>Benchmark</strong></td>
+     <td><strong>Llama 3 8B</strong></td>
+     <td><strong>Llama2 7B</strong></td>
+     <td><strong>Llama2 13B</strong></td>
+     <td><strong>Llama 3 70B</strong></td>
+     <td><strong>Llama2 70B</strong></td>
+   </tr>
+   <tr>
+     <td rowspan="6">General</td>
+     <td>MMLU (5-shot)</td>
+     <td>66.6</td><td>45.7</td><td>53.8</td><td>79.5</td><td>69.7</td>
+   </tr>
+   <tr>
+     <td>AGIEval English (3-5 shot)</td>
+     <td>45.9</td><td>28.8</td><td>38.7</td><td>63.0</td><td>54.8</td>
+   </tr>
+   <tr>
+     <td>CommonSenseQA (7-shot)</td>
+     <td>72.6</td><td>57.6</td><td>67.6</td><td>83.8</td><td>78.7</td>
+   </tr>
+   <tr>
+     <td>Winogrande (5-shot)</td>
+     <td>76.1</td><td>73.3</td><td>75.4</td><td>83.1</td><td>81.8</td>
+   </tr>
+   <tr>
+     <td>BIG-Bench Hard (3-shot, CoT)</td>
+     <td>61.1</td><td>38.1</td><td>47.0</td><td>81.3</td><td>65.7</td>
+   </tr>
+   <tr>
+     <td>ARC-Challenge (25-shot)</td>
+     <td>78.6</td><td>53.7</td><td>67.6</td><td>93.0</td><td>85.3</td>
+   </tr>
+   <tr>
+     <td>Knowledge reasoning</td>
+     <td>TriviaQA-Wiki (5-shot)</td>
+     <td>78.5</td><td>72.1</td><td>79.6</td><td>89.7</td><td>87.5</td>
+   </tr>
+   <tr>
+     <td rowspan="4">Reading comprehension</td>
+     <td>SQuAD (1-shot)</td>
+     <td>76.4</td><td>72.2</td><td>72.1</td><td>85.6</td><td>82.6</td>
+   </tr>
+   <tr>
+     <td>QuAC (1-shot, F1)</td>
+     <td>44.4</td><td>39.6</td><td>44.9</td><td>51.1</td><td>49.4</td>
+   </tr>
+   <tr>
+     <td>BoolQ (0-shot)</td>
+     <td>75.7</td><td>65.5</td><td>66.9</td><td>79.0</td><td>73.1</td>
+   </tr>
+   <tr>
+     <td>DROP (3-shot, F1)</td>
+     <td>58.4</td><td>37.9</td><td>49.8</td><td>79.7</td><td>70.2</td>
+   </tr>
+ </table>
+
+ ### Instruction tuned models
+
+ <table>
+   <tr>
+     <td><strong>Benchmark</strong></td>
+     <td><strong>Llama 3 8B</strong></td>
+     <td><strong>Llama 2 7B</strong></td>
+     <td><strong>Llama 2 13B</strong></td>
+     <td><strong>Llama 3 70B</strong></td>
+     <td><strong>Llama 2 70B</strong></td>
+   </tr>
+   <tr>
+     <td>MMLU (5-shot)</td>
+     <td>68.4</td><td>34.1</td><td>47.8</td><td>82.0</td><td>52.9</td>
+   </tr>
+   <tr>
+     <td>GPQA (0-shot)</td>
+     <td>34.2</td><td>21.7</td><td>22.3</td><td>39.5</td><td>21.0</td>
+   </tr>
+   <tr>
+     <td>HumanEval (0-shot)</td>
+     <td>62.2</td><td>7.9</td><td>14.0</td><td>81.7</td><td>25.6</td>
+   </tr>
+   <tr>
+     <td>GSM-8K (8-shot, CoT)</td>
+     <td>79.6</td><td>25.7</td><td>77.4</td><td>93.0</td><td>57.5</td>
+   </tr>
+   <tr>
+     <td>MATH (4-shot, CoT)</td>
+     <td>30.0</td><td>3.8</td><td>6.7</td><td>50.4</td><td>11.6</td>
+   </tr>
+ </table>
+
+ ### Responsibility & Safety
+
+ We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
+
+ Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
+
+ Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
+
+ As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
+
+ #### Llama 3-Instruct
+
+ As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
+
+ <span style="text-decoration:underline;">Safety</span>
+
+ For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
+
+ <span style="text-decoration:underline;">Refusals</span>
+
+ In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine-tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
+
+ We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.
+
+ #### Responsible release
+
+ In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
+
+ Misuse
+
+ If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
+
+ #### Critical risks
+
+ <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
+
+ We have conducted a two-fold assessment of the safety of the model in this area:
+
+ * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
+ * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
+
+ ### <span style="text-decoration:underline;">Cyber Security</span>
+
+ We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
+
+ ### <span style="text-decoration:underline;">Child Safety</span>
+
+ Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine-tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective-based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market-specific nuances or experiences.
+
+ ### Community
+
+ Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
+
+ Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
+
+ ## Ethical Considerations and Limitations
+
+ The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
+
+ But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/), which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
+
+ Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide).
+
+ ## Citation instructions
+
+ @article{llama3modelcard,
+   title={Llama 3 Model Card},
+   author={AI@Meta},
+   year={2024},
+   url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
+ }
+
+ ## Contributors
+
+ Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos
+
+ ---