RonanMcGovern committed
Commit 6abe5b6
1 Parent(s): 5cf5792

update prompt format

Files changed (1)
  1. README.md +535 -109
README.md CHANGED
@@ -36,12 +36,6 @@ Complete inference scripts are available for purchase [here](https://trelis.com/
  - Automate catching, handling and chaining of function calls.
 
  ## Prompt Format
- ```
- B_FUNC, E_FUNC = "You have access to the following functions. Use them if required:\n\n", "\n\n"
- B_INST, E_INST = "[INST] ", " [/INST]" #Llama style
- prompt = f"{B_INST}{B_FUNC}{functionList.strip()}{E_FUNC}{user_prompt.strip()}{E_INST}\n\n"
- ```
-
  ### Using tokenizer.apply_chat_template
  For an easier application of the prompt, you can set up as follows:
 
@@ -121,7 +115,7 @@ with `FUNCTION_METADATA` as:
  ```
  and then apply the chat template to get a formatted prompt:
  ```
- tokenizer = AutoTokenizer.from_pretrained('Trelis/Llama-2-7b-chat-hf-function-calling-v3', trust_remote_code=True)
 
  prompt = tokenizer.apply_chat_template(prompt, tokenize=False)
  ```
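To make this snippet self-contained, here is a minimal sketch (the messages-style `prompt` list is an assumption; the exact role names, e.g. for passing function metadata, are defined by this repo's chat template):

```python
from transformers import AutoTokenizer

# Minimal sketch: load the tokenizer and format a chat prompt.
# The message structure below is an assumption; the roles this
# repo expects are set by its chat template.
tokenizer = AutoTokenizer.from_pretrained(
    'Trelis/Llama-2-7b-chat-hf-function-calling-v3', trust_remote_code=True
)

prompt = [
    {"role": "user", "content": "Get the names of the five largest stocks in the US by market cap"},
]

formatted_prompt = tokenizer.apply_chat_template(prompt, tokenize=False)
print(formatted_prompt)
```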
@@ -133,28 +127,27 @@ huggingface-cli login
 
  ### Manual Prompt:
  ```
- [INST] You have access to the following functions. Use them if required:
 
  [
  {
  "type": "function",
  "function": {
- "name": "get_big_stocks",
- "description": "Get the names of the largest N stocks by market cap",
  "parameters": {
  "type": "object",
  "properties": {
- "number": {
- "type": "integer",
- "description": "The number of largest stocks to get the names of, e.g. 25"
- },
- "region": {
- "type": "string",
- "description": "The region to consider, can be \"US\" or \"World\"."
  }
  },
  "required": [
- "number"
  ]
  }
  }
@@ -162,36 +155,38 @@ huggingface-cli login
  {
  "type": "function",
  "function": {
- "name": "get_stock_price",
- "description": "Get the stock price of an array of stocks",
  "parameters": {
  "type": "object",
  "properties": {
- "names": {
- "type": "array",
- "items": {
- "type": "string"
- },
- "description": "An array of stocks"
  }
  },
  "required": [
- "names"
  ]
  }
  }
  }
- ]
 
- [INST] Get the names of the five largest stocks in the US by market cap [/INST]
 
  {
  "name": "get_big_stocks",
  "arguments": {
  "number": 5,
  "region": "US"
  }
- }</s>
  ```
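Because the completion above is plain JSON terminated by the EOS token, a caller can parse it and dispatch to a real function, which is what makes catching and chaining calls automatable. A minimal sketch (the `get_big_stocks` implementation and the dispatch table are illustrative assumptions, not part of this repo):

```python
import json

# Illustrative stand-in for a real data source.
def get_big_stocks(number, region="US"):
    return ["AAPL", "MSFT", "NVDA", "GOOG", "AMZN"][:number]

DISPATCH = {"get_big_stocks": get_big_stocks}

# `response` is the model completion shown above, special tokens stripped.
response = '{"name": "get_big_stocks", "arguments": {"number": 5, "region": "US"}}'
call = json.loads(response)
result = DISPATCH[call["name"]](**call["arguments"])
print(result)  # ['AAPL', 'MSFT', 'NVDA', 'GOOG', 'AMZN']
```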
 
  # Dataset
@@ -200,118 +195,549 @@ See [Trelis/function_calling_v3](https://huggingface.co/datasets/Trelis/function
  ~~~
  The original repo card follows below.
  ~~~
- # **Llama 2**
- Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted for the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
-
  ## Model Details
- *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
 
- Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
 
- **Model Developers** Meta
 
- **Variations** Llama 2 comes in a range of parameter sizes — 7B, 13B, and 70B — as well as pretrained and fine-tuned variations.
 
  **Input** Models input text only.
 
- **Output** Models generate text only.
 
- **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
 
- ||Training Data|Params|Content Length|GQA|Tokens|LR|
- |---|---|---|---|---|---|---|
- |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
- |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
- |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>|
 
- *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch-size of 4M tokens. Bigger models (70B) use Grouped-Query Attention (GQA) for improved inference scalability.
 
- **Model Dates** Llama 2 was trained between January 2023 and July 2023.
 
- **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
 
- **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
 
- **Research Paper** ["Llama-2: Open Foundation and Fine-tuned Chat Models"](https://arxiv.org/abs/2307.09288)
 
- ## Intended Use
- **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
 
- To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespaces and breaklines in between (we recommend calling `strip()` on inputs to avoid double-spaces). See our reference code in github for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
 
- **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
 
- ## Hardware and Software
- **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
 
- **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
 
- ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)|
- |---|---|---|---|
- |Llama 2 7B|184320|400|31.22|
- |Llama 2 13B|368640|400|62.44|
- |Llama 2 70B|1720320|400|291.42|
- |Total|3311616||539.00|
 
- **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
 
  ## Training Data
- **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
 
- **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
 
- ## Evaluation Results
 
- In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
 
- |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
- |---|---|---|---|---|---|---|---|---|---|
- |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
- |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
- |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
- |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
- |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
- |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
- |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
 
- **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *Math:* We report the average of the GSM8K (8-shot) and MATH (4-shot) benchmarks at top 1.
 
- |||TruthfulQA|ToxiGen|
- |---|---|---|---|
- |Llama 1|7B|27.42|23.00|
- |Llama 1|13B|41.74|23.08|
- |Llama 1|33B|44.19|22.57|
- |Llama 1|65B|48.71|21.77|
- |Llama 2|7B|33.29|**21.25**|
- |Llama 2|13B|41.86|26.10|
- |Llama 2|70B|**50.18**|24.60|
 
- **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
 
- |||TruthfulQA|ToxiGen|
- |---|---|---|---|
- |Llama-2-Chat|7B|57.04|**0.00**|
- |Llama-2-Chat|13B|62.18|**0.00**|
- |Llama-2-Chat|70B|**64.14**|0.01|
 
- **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
 
  ## Ethical Considerations and Limitations
- Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
-
- Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)
-
- ## Reporting Issues
- Please report any software “bug,” or other problems with the models through one of the following means:
- - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
-
- ## Llama Model Index
- |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
- |---|---|---|---|---|
- |7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|
- |13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)|
- |70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|
  - Automate catching, handling and chaining of function calls.
 
  ## Prompt Format
 
  ### Using tokenizer.apply_chat_template
  For an easier application of the prompt, you can set up as follows:
 
  ```
  and then apply the chat template to get a formatted prompt:
  ```
+ tokenizer = AutoTokenizer.from_pretrained('Trelis/Meta-Llama-3-8B-Instruct-function-calling', trust_remote_code=True)
 
  prompt = tokenizer.apply_chat_template(prompt, tokenize=False)
  ```
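From here, the formatted prompt can be passed to the model for generation. A minimal sketch, assuming the `tokenizer` and a messages-style `prompt` list as above (loading settings are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM

# Sketch: generate a function call from the templated prompt.
model = AutoModelForCausalLM.from_pretrained(
    'Trelis/Meta-Llama-3-8B-Instruct-function-calling',
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

input_ids = tokenizer.apply_chat_template(
    prompt, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```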
 
  ### Manual Prompt:
  ```
+ <|begin_of_text|><|start_header_id|>function_metadata<|end_header_id|>
 
  [
  {
  "type": "function",
  "function": {
+ "name": "get_stock_price",
+ "description": "Get the stock price of an array of stocks",
  "parameters": {
  "type": "object",
  "properties": {
+ "names": {
+ "type": "array",
+ "items": {
+ "type": "string"
+ },
+ "description": "An array of stocks"
  }
  },
  "required": [
+ "names"
  ]
  }
  }
  },
  {
  "type": "function",
  "function": {
+ "name": "get_big_stocks",
+ "description": "Get the names of the largest N stocks by market cap",
  "parameters": {
  "type": "object",
  "properties": {
+ "number": {
+ "type": "integer",
+ "description": "The number of largest stocks to get the names of, e.g. 25"
+ },
+ "region": {
+ "type": "string",
+ "description": "The region to consider, can be \"US\" or \"World\"."
  }
  },
  "required": [
+ "number"
  ]
  }
  }
  }
+ ]<|eot_id|><|start_header_id|>user<|end_header_id|>
 
+ Get the names of the five largest stocks by market cap<|eot_id|><|start_header_id|>assistant<|end_header_id|>
 
+ Generated Response:
  {
  "name": "get_big_stocks",
  "arguments": {
  "number": 5,
  "region": "US"
  }
+ }<|eot_id|>
  ```
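For reference, the same prompt can also be composed directly as a string. A minimal sketch (the metadata list is abbreviated; the `\n\n` spacing after each header is an assumption based on the standard Llama 3 format, and the chat template remains authoritative):

```python
import json

# Abbreviated function metadata; use the full definitions shown above.
function_metadata = [
    {"type": "function", "function": {"name": "get_big_stocks"}}
]
user_query = "Get the names of the five largest stocks by market cap"

prompt = (
    "<|begin_of_text|><|start_header_id|>function_metadata<|end_header_id|>\n\n"
    f"{json.dumps(function_metadata, indent=4)}<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    f"{user_query}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
```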
 
  # Dataset
 
  ~~~
  The original repo card follows below.
  ~~~
 
  ## Model Details
 
+ Meta developed and released the Meta Llama 3 family of large language models (LLMs), a collection of pretrained and instruction tuned generative text models in 8 and 70B sizes. The Llama 3 instruction tuned models are optimized for dialogue use cases and outperform many of the available open source chat models on common industry benchmarks. Further, in developing these models, we took great care to optimize helpfulness and safety.
 
+ **Model developers** Meta
 
+ **Variations** Llama 3 comes in two sizes — 8B and 70B parameters in pre-trained and instruction tuned variants.
 
  **Input** Models input text only.
 
+ **Output** Models generate text and code only.
+
+ **Model Architecture** Llama 3 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety.
+
+ ||Training Data|Params|Context length|GQA|Token count|Knowledge cutoff|
+ |---|---|---|---|---|---|---|
+ |Llama 3|A new mix of publicly available online data.|8B|8k|Yes|15T+|March, 2023|
+ |Llama 3|A new mix of publicly available online data.|70B|8k|Yes|15T+|December, 2023|
+
+ **Llama 3 family of models**. Token counts refer to pretraining data only. Both the 8 and 70B versions use Grouped-Query Attention (GQA) for improved inference scalability.
+
+ **Model Release Date** April 18, 2024.
 
+ **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
 
+ **License** A custom commercial license is available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)
 
+ **Where to send questions or comments about the model** Instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
 
+ ## Intended Use
 
+ **Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
 
+ **Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.
 
+ **Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.
 
+ ## How to use
 
+ This repository contains two versions of Meta-Llama-3-8B, for use with transformers and with the original `llama3` codebase.
 
+ ### Use with transformers
 
+ See the snippet below for usage with Transformers:
+
+ ```python
+ >>> import transformers
+ >>> import torch
+
+ >>> model_id = "meta-llama/Meta-Llama-3-8B"
+
+ >>> pipeline = transformers.pipeline(
+     "text-generation", model=model_id, model_kwargs={"torch_dtype": torch.bfloat16}, device_map="auto"
+ )
+ >>> pipeline("Hey how are you doing today?")
+ ```
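The pipeline call also accepts standard generation keyword arguments; a hedged variant (the parameter values are illustrative, not from the original card):

```python
>>> pipeline("Hey how are you doing today?", max_new_tokens=64, do_sample=True, temperature=0.6)
```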
 
+ ### Use with `llama3`
 
+ Please follow the instructions in the [repository](https://github.com/meta-llama/llama3).
+
+ To download the original checkpoints, see the example command below leveraging `huggingface-cli`:
+
+ ```
+ huggingface-cli download meta-llama/Meta-Llama-3-8B --include "original/*" --local-dir Meta-Llama-3-8B
+ ```
+
+ For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
+
+ ## Hardware and Software
+
+ **Training Factors** We used custom training libraries, Meta's Research SuperCluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
+
+ **Carbon Footprint** Pretraining utilized a cumulative 7.7M GPU hours of computation on hardware of type H100-80GB (TDP of 700W). Estimated total emissions were 2290 tCO2eq, 100% of which were offset by Meta’s sustainability program.
+
+ ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO2eq)|
+ |---|---|---|---|
+ |Llama 3 8B|1.3M|700|390|
+ |Llama 3 70B|6.4M|700|1900|
+ |Total|7.7M||2290|
+
+ **CO2 emissions during pre-training**. Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
 
  ## Training Data
 
+ **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
+
+ **Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
+
+ ## Benchmarks
+
+ In this section, we report the results for Llama 3 models on standard automatic benchmarks. For all the evaluations, we use our internal evaluations library. For details on the methodology see [here](https://github.com/meta-llama/llama3/blob/main/eval_methodology.md).
+
+ ### Base pretrained models
+
+ |Category|Benchmark|Llama 3 8B|Llama 2 7B|Llama 2 13B|Llama 3 70B|Llama 2 70B|
+ |---|---|---|---|---|---|---|
+ |General|MMLU (5-shot)|66.6|45.7|53.8|79.5|69.7|
+ |General|AGIEval English (3-5 shot)|45.9|28.8|38.7|63.0|54.8|
+ |General|CommonSenseQA (7-shot)|72.6|57.6|67.6|83.8|78.7|
+ |General|Winogrande (5-shot)|76.1|73.3|75.4|83.1|81.8|
+ |General|BIG-Bench Hard (3-shot, CoT)|61.1|38.1|47.0|81.3|65.7|
+ |General|ARC-Challenge (25-shot)|78.6|53.7|67.6|93.0|85.3|
+ |Knowledge reasoning|TriviaQA-Wiki (5-shot)|78.5|72.1|79.6|89.7|87.5|
+ |Reading comprehension|SQuAD (1-shot)|76.4|72.2|72.1|85.6|82.6|
+ |Reading comprehension|QuAC (1-shot, F1)|44.4|39.6|44.9|51.1|49.4|
+ |Reading comprehension|BoolQ (0-shot)|75.7|65.5|66.9|79.0|73.1|
+ |Reading comprehension|DROP (3-shot, F1)|58.4|37.9|49.8|79.7|70.2|
+
+ ### Instruction tuned models
+
+ |Benchmark|Llama 3 8B|Llama 2 7B|Llama 2 13B|Llama 3 70B|Llama 2 70B|
+ |---|---|---|---|---|---|
+ |MMLU (5-shot)|68.4|34.1|47.8|82.0|52.9|
+ |GPQA (0-shot)|34.2|21.7|22.3|39.5|21.0|
+ |HumanEval (0-shot)|62.2|7.9|14.0|81.7|25.6|
+ |GSM-8K (8-shot, CoT)|79.6|25.7|77.4|93.0|57.5|
+ |MATH (4-shot, CoT)|30.0|3.8|6.7|50.4|11.6|
+
+ ### Responsibility & Safety
+
+ We believe that an open approach to AI leads to better, safer products, faster innovation, and a bigger overall market. We are committed to Responsible AI development and took a series of steps to limit misuse and harm and support the open source community.
+
+ Foundation models are widely capable technologies that are built to be used for a diverse range of applications. They are not designed to meet every developer preference on safety levels for all use cases, out-of-the-box, as those by their nature will differ across different applications.
+
+ Rather, responsible LLM-application deployment is achieved by implementing a series of safety best practices throughout the development of such applications, from the model pre-training, fine-tuning and the deployment of systems composed of safeguards to tailor the safety needs specifically to the use case and audience.
+
+ As part of the Llama 3 release, we updated our [Responsible Use Guide](https://llama.meta.com/responsible-use-guide/) to outline the steps and best practices for developers to implement model and system level safety for their application. We also provide a set of resources including [Meta Llama Guard 2](https://llama.meta.com/purple-llama/) and [Code Shield](https://llama.meta.com/purple-llama/) safeguards. These tools have proven to drastically reduce residual risks of LLM Systems, while maintaining a high level of helpfulness. We encourage developers to tune and deploy these safeguards according to their needs and we provide a [reference implementation](https://github.com/meta-llama/llama-recipes/tree/main/recipes/responsible_ai) to get you started.
+
+ #### Llama 3-Instruct
+
+ As outlined in the Responsible Use Guide, some trade-off between model helpfulness and model alignment is likely unavoidable. Developers should exercise discretion about how to weigh the benefits of alignment and helpfulness for their specific use case and audience. Developers should be mindful of residual risks when using Llama models and leverage additional safety tools as needed to reach the right safety bar for their use case.
+
+ <span style="text-decoration:underline;">Safety</span>
+
+ For our instruction tuned model, we conducted extensive red teaming exercises, performed adversarial evaluations and implemented safety mitigation techniques to lower residual risks. As with any Large Language Model, residual risks will likely remain and we recommend that developers assess these risks in the context of their use case. In parallel, we are working with the community to make AI safety benchmark standards transparent, rigorous and interpretable.
+
+ <span style="text-decoration:underline;">Refusals</span>
+
+ In addition to residual risks, we put a great emphasis on model refusals to benign prompts. Over-refusing not only can impact the user experience but could even be harmful in certain contexts as well. We’ve heard the feedback from the developer community and improved our fine tuning to ensure that Llama 3 is significantly less likely to falsely refuse to answer prompts than Llama 2.
+
+ We built internal benchmarks and developed mitigations to limit false refusals, making Llama 3 our most helpful model to date.
+
+ #### Responsible release
+
+ In addition to responsible use considerations outlined above, we followed a rigorous process that requires us to take extra measures against misuse and critical risks before we make our release decision.
+
+ Misuse
+
+ If you access or use Llama 3, you agree to the Acceptable Use Policy. The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy/](https://llama.meta.com/llama3/use-policy/).
 
+ #### Critical risks
+
+ <span style="text-decoration:underline;">CBRNE</span> (Chemical, Biological, Radiological, Nuclear, and high yield Explosives)
+
+ We have conducted a two-fold assessment of the safety of the model in this area:
+
+ * Iterative testing during model training to assess the safety of responses related to CBRNE threats and other adversarial risks.
+ * Involving external CBRNE experts to conduct an uplift test assessing the ability of the model to accurately provide expert knowledge and reduce barriers to potential CBRNE misuse, by reference to what can be achieved using web search (without the model).
+
+ ### <span style="text-decoration:underline;">Cyber Security</span>
+
+ We have evaluated Llama 3 with CyberSecEval, Meta’s cybersecurity safety eval suite, measuring Llama 3’s propensity to suggest insecure code when used as a coding assistant, and Llama 3’s propensity to comply with requests to help carry out cyber attacks, where attacks are defined by the industry standard MITRE ATT&CK cyber attack ontology. On our insecure coding and cyber attacker helpfulness tests, Llama 3 behaved in the same range or safer than models of [equivalent coding capability](https://huggingface.co/spaces/facebook/CyberSecEval).
+
+ ### <span style="text-decoration:underline;">Child Safety</span>
+
+ Child Safety risk assessments were conducted using a team of experts, to assess the model’s capability to produce outputs that could result in Child Safety risks and inform on any necessary and appropriate risk mitigations via fine tuning. We leveraged those expert red teaming sessions to expand the coverage of our evaluation benchmarks through Llama 3 model development. For Llama 3, we conducted new in-depth sessions using objective based methodologies to assess the model risks along multiple attack vectors. We also partnered with content specialists to perform red teaming exercises assessing potentially violating content while taking account of market specific nuances or experiences.
+
+ ### Community
+
+ Generative AI safety requires expertise and tooling, and we believe in the strength of the open community to accelerate its progress. We are active members of open consortiums, including the AI Alliance, Partnership in AI and MLCommons, actively contributing to safety standardization and transparency. We encourage the community to adopt taxonomies like the MLCommons Proof of Concept evaluation to facilitate collaboration and transparency on safety and content evaluations. Our Purple Llama tools are open sourced for the community to use and widely distributed across ecosystem partners including cloud service providers. We encourage community contributions to our [Github repository](https://github.com/meta-llama/PurpleLlama).
+
+ Finally, we put in place a set of resources including an [output reporting mechanism](https://developers.facebook.com/llama_output_feedback) and [bug bounty program](https://www.facebook.com/whitehat) to continuously improve the Llama technology with the help of the community.
 
  ## Ethical Considerations and Limitations
+
+ The core values of Llama 3 are openness, inclusivity and helpfulness. It is meant to serve everyone, and to work for a wide range of use cases. It is thus designed to be accessible to people across many different backgrounds, experiences and perspectives. Llama 3 addresses users and their needs as they are, without inserting unnecessary judgment or normativity, while reflecting the understanding that even content that may appear problematic in some cases can serve valuable purposes in others. It respects the dignity and autonomy of all users, especially in terms of the values of free thought and expression that power innovation and progress.
+
+ But Llama 3 is a new technology, and like any new technology, there are risks associated with its use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 3’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 3 models, developers should perform safety testing and tuning tailored to their specific applications of the model. As outlined in the Responsible Use Guide, we recommend incorporating [Purple Llama](https://github.com/facebookresearch/PurpleLlama) solutions into your workflows and specifically [Llama Guard](https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/) which provides a base model to filter input and output prompts to layer system-level safety on top of model-level safety.
+
+ Please see the Responsible Use Guide available at [http://llama.meta.com/responsible-use-guide](http://llama.meta.com/responsible-use-guide)
+
+ ## Citation instructions
+
+ @article{llama3modelcard,
+   title={Llama 3 Model Card},
+   author={AI@Meta},
+   year={2024},
+   url={https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md}
+ }
+
+ ## Contributors
+
+ Aaditya Singh; Aaron Grattafiori; Abhimanyu Dubey; Abhinav Jauhri; Abhinav Pandey; Abhishek Kadian; Adam Kelsey; Adi Gangidi; Ahmad Al-Dahle; Ahuva Goldstand; Aiesha Letman; Ajay Menon; Akhil Mathur; Alan Schelten; Alex Vaughan; Amy Yang; Andrei Lupu; Andres Alvarado; Andrew Gallagher; Andrew Gu; Andrew Ho; Andrew Poulton; Andrew Ryan; Angela Fan; Ankit Ramchandani; Anthony Hartshorn; Archi Mitra; Archie Sravankumar; Artem Korenev; Arun Rao; Ashley Gabriel; Ashwin Bharambe; Assaf Eisenman; Aston Zhang; Aurelien Rodriguez; Austen Gregerson; Ava Spataru; Baptiste Roziere; Ben Maurer; Benjamin Leonhardi; Bernie Huang; Bhargavi Paranjape; Bing Liu; Binh Tang; Bobbie Chern; Brani Stojkovic; Brian Fuller; Catalina Mejia Arenas; Chao Zhou; Charlotte Caucheteux; Chaya Nayak; Ching-Hsiang Chu; Chloe Bi; Chris Cai; Chris Cox; Chris Marra; Chris McConnell; Christian Keller; Christoph Feichtenhofer; Christophe Touret; Chunyang Wu; Corinne Wong; Cristian Canton Ferrer; Damien Allonsius; Daniel Kreymer; Daniel Haziza; Daniel Li; Danielle Pintz; Danny Livshits; Danny Wyatt; David Adkins; David Esiobu; David Xu; Davide Testuggine; Delia David; Devi Parikh; Dhruv Choudhary; Dhruv Mahajan; Diana Liskovich; Diego Garcia-Olano; Diego Perino; Dieuwke Hupkes; Dingkang Wang; Dustin Holland; Egor Lakomkin; Elina Lobanova; Xiaoqing Ellen Tan; Emily Dinan; Eric Smith; Erik Brinkman; Esteban Arcaute; Filip Radenovic; Firat Ozgenel; Francesco Caggioni; Frank Seide; Frank Zhang; Gabriel Synnaeve; Gabriella Schwarz; Gabrielle Lee; Gada Badeer; Georgia Anderson; Graeme Nail; Gregoire Mialon; Guan Pang; Guillem Cucurell; Hailey Nguyen; Hannah Korevaar; Hannah Wang; Haroun Habeeb; Harrison Rudolph; Henry Aspegren; Hu Xu; Hugo Touvron; Iga Kozlowska; Igor Molybog; Igor Tufanov; Iliyan Zarov; Imanol Arrieta Ibarra; Irina-Elena Veliche; Isabel Kloumann; Ishan Misra; Ivan Evtimov; Jacob Xu; Jade Copet; Jake Weissman; Jan Geffert; Jana Vranes; Japhet Asher; Jason Park; Jay Mahadeokar; Jean-Baptiste Gaya; Jeet Shah; Jelmer van der Linde; Jennifer Chan; Jenny Hong; Jenya Lee; Jeremy Fu; Jeremy Teboul; Jianfeng Chi; Jianyu Huang; Jie Wang; Jiecao Yu; Joanna Bitton; Joe Spisak; Joelle Pineau; Jon Carvill; Jongsoo Park; Joseph Rocca; Joshua Johnstun; Junteng Jia; Kalyan Vasuden Alwala; Kam Hou U; Kate Plawiak; Kartikeya Upasani; Kaushik Veeraraghavan; Ke Li; Kenneth Heafield; Kevin Stone; Khalid El-Arini; Krithika Iyer; Kshitiz Malik; Kuenley Chiu; Kunal Bhalla; Kyle Huang; Lakshya Garg; Lauren Rantala-Yeary; Laurens van der Maaten; Lawrence Chen; Leandro Silva; Lee Bell; Lei Zhang; Liang Tan; Louis Martin; Lovish Madaan; Luca Wehrstedt; Lukas Blecher; Luke de Oliveira; Madeline Muzzi; Madian Khabsa; Manav Avlani; Mannat Singh; Manohar Paluri; Mark Zuckerberg; Marcin Kardas; Martynas Mankus; Mathew Oldham; Mathieu Rita; Matthew Lennie; Maya Pavlova; Meghan Keneally; Melanie Kambadur; Mihir Patel; Mikayel Samvelyan; Mike Clark; Mike Lewis; Min Si; Mitesh Kumar Singh; Mo Metanat; Mona Hassan; Naman Goyal; Narjes Torabi; Nicolas Usunier; Nikolay Bashlykov; Nikolay Bogoychev; Niladri Chatterji; Ning Dong; Oliver Aobo Yang; Olivier Duchenne; Onur Celebi; Parth Parekh; Patrick Alrassy; Paul Saab; Pavan Balaji; Pedro Rittner; Pengchuan Zhang; Pengwei Li; Petar Vasic; Peter Weng; Polina Zvyagina; Prajjwal Bhargava; Pratik Dubal; Praveen Krishnan; Punit Singh Koura; Qing He; Rachel Rodriguez; Ragavan Srinivasan; Rahul Mitra; Ramon Calderer; Raymond Li; Robert Stojnic; Roberta Raileanu; Robin Battey; Rocky Wang; Rohit Girdhar; Rohit Patel; 
Romain Sauvestre; Ronnie Polidoro; Roshan Sumbaly; Ross Taylor; Ruan Silva; Rui Hou; Rui Wang; Russ Howes; Ruty Rinott; Saghar Hosseini; Sai Jayesh Bondu; Samyak Datta; Sanjay Singh; Sara Chugh; Sargun Dhillon; Satadru Pan; Sean Bell; Sergey Edunov; Shaoliang Nie; Sharan Narang; Sharath Raparthy; Shaun Lindsay; Sheng Feng; Sheng Shen; Shenghao Lin; Shiva Shankar; Shruti Bhosale; Shun Zhang; Simon Vandenhende; Sinong Wang; Seohyun Sonia Kim; Soumya Batra; Sten Sootla; Steve Kehoe; Suchin Gururangan; Sumit Gupta; Sunny Virk; Sydney Borodinsky; Tamar Glaser; Tamar Herman; Tamara Best; Tara Fowler; Thomas Georgiou; Thomas Scialom; Tianhe Li; Todor Mihaylov; Tong Xiao; Ujjwal Karn; Vedanuj Goswami; Vibhor Gupta; Vignesh Ramanathan; Viktor Kerkez; Vinay Satish Kumar; Vincent Gonguet; Vish Vogeti; Vlad Poenaru; Vlad Tiberiu Mihailescu; Vladan Petrovic; Vladimir Ivanov; Wei Li; Weiwei Chu; Wenhan Xiong; Wenyin Fu; Wes Bouaziz; Whitney Meers; Will Constable; Xavier Martinet; Xiaojian Wu; Xinbo Gao; Xinfeng Xie; Xuchao Jia; Yaelle Goldschlag; Yann LeCun; Yashesh Gaur; Yasmine Babaei; Ye Qi; Yenda Li; Yi Wen; Yiwen Song; Youngjin Nam; Yuchen Hao; Yuchen Zhang; Yun Wang; Yuning Mao; Yuzi He; Zacharie Delpierre Coudert; Zachary DeVito; Zahra Hankir; Zhaoduo Wen; Zheng Yan; Zhengxing Chen; Zhenyu Yang; Zoe Papakipos