nikita committed
Commit 4f5335b
1 Parent(s): b66e7ec
docs: update Readme.md
README.md
CHANGED
@@ -19,116 +19,47 @@ tags:
- llama
- llama-2
---

Removed (previous Llama 2 model card content):

||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)|
|---|---|---|---|
|Llama 2 7B|184320|400|31.22|
|Llama 2 13B|368640|400|62.44|
|Llama 2 70B|1720320|400|291.42|
|Total|3311616||539.00|

**CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.

## Training Data

**Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.

**Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.

## Evaluation Results

In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.

|Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
|---|---|---|---|---|---|---|---|---|---|
|Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
|Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
|Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
|Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
|Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
|Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
|Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|

**Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonSenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *MATH:* We report the average of the GSM8K (8 shot) and MATH (4 shot) benchmarks at top 1.

|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama 1|7B|27.42|23.00|
|Llama 1|13B|41.74|23.08|
|Llama 1|33B|44.19|22.57|
|Llama 1|65B|48.71|21.77|
|Llama 2|7B|33.29|**21.25**|
|Llama 2|13B|41.86|26.10|
|Llama 2|70B|**50.18**|24.60|

**Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).

|||TruthfulQA|Toxigen|
|---|---|---|---|
|Llama-2-Chat|7B|57.04|**0.00**|
|Llama-2-Chat|13B|62.18|**0.00**|
|Llama-2-Chat|70B|**64.14**|0.01|

**Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.

## Ethical Considerations and Limitations

Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English, and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased or other objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.

Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide)

## Reporting Issues

Please report any software “bug,” or other problems with the models through one of the following means:

- Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
- Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
- Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)

## Llama Model Index

|Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
|---|---|---|---|---|
|7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
|13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf)|
|70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf)|

Added (new README content):

# Custom handler for HF Inference Endpoint for LLMLingua

## LLMLingua

https://github.com/microsoft/LLMLingua

https://llmlingua.com/

> To speed up LLMs' inference and enhance the LLM's perception of key information, LLMLingua compresses the prompt and KV-cache, achieving up to 20x compression with minimal performance loss.

## Model: NousResearch/Llama-2-7b-hf

https://huggingface.co/NousResearch/Llama-2-7b-hf
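
LLMLingua uses a small causal LM, such as this one, to score and prune tokens in the prompt. For orientation, here is a minimal sketch of calling the library directly, assuming the `llmlingua` package and its `PromptCompressor` API; the parameter values are only illustrative.

```python
# Hypothetical local usage of LLMLingua outside the endpoint (sketch, not the deployed handler).
from llmlingua import PromptCompressor

# NousResearch/Llama-2-7b-hf is the small model used to estimate token importance.
compressor = PromptCompressor(model_name="NousResearch/Llama-2-7b-hf")

result = compressor.compress_prompt(
    "A long prompt to optimize for the LLM",  # context to compress
    target_token=200,                         # rough token budget for the compressed prompt
)
print(result["compressed_prompt"])
```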

## Inference Endpoint Configuration

- Task: Custom
- Container Type: Default
- Instance Type: GPU NVIDIA A10G 24 GB
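
Because the task is Custom, the repository is expected to ship a `handler.py` that Inference Endpoints load at startup. The exact file is not shown on this page; the snippet below is only a hedged sketch of what an `EndpointHandler` wrapping LLMLingua might look like, mapping the request payload (see Usage below) onto `compress_prompt` arguments.

```python
# handler.py -- illustrative sketch only; the actual handler in this repo may differ.
from typing import Any, Dict

from llmlingua import PromptCompressor


class EndpointHandler:
    def __init__(self, path: str = ""):
        # Load the compression model once when the endpoint starts.
        self.compressor = PromptCompressor(model_name="NousResearch/Llama-2-7b-hf")

    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
        # Inference Endpoints pass the JSON body as {"inputs": ..., "parameters": {...}}.
        prompt = data["inputs"]
        params = data.get("parameters", {}) or {}
        return self.compressor.compress_prompt(
            prompt,
            instruction=params.get("instruction", ""),
            question=params.get("question", ""),
            target_token=params.get("target_token", 200),
            context_budget=params.get("context_budget", "*1.5"),
            iterative_size=params.get("iterative_size", 100),
        )
```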

## Usage

### Sample payload

```json
{
  "inputs": "A long prompt to optimize for the LLM",
  "parameters": {
    "instruction": "",
    "question": "",
    "target_token": 200,
    "context_budget": "*1.5",
    "iterative_size": 100
  }
}
```

Prompt sample text:
https://raw.githubusercontent.com/FranxYao/chain-of-thought-hub/main/gsm8k/lib_prompt/prompt_hardest.txt
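
A request against the deployed endpoint might look like the following sketch. The endpoint URL, token, and local file name are placeholders, and `requests` is just one way to send the payload.

```python
# Illustrative client call; ENDPOINT_URL and HF_TOKEN are placeholders for your own endpoint.
import requests

ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"
HF_TOKEN = "hf_..."

payload = {
    "inputs": open("prompt_hardest.txt").read(),  # e.g. the sample prompt linked above
    "parameters": {
        "instruction": "",
        "question": "",
        "target_token": 200,
        "context_budget": "*1.5",
        "iterative_size": 100,
    },
}

response = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {HF_TOKEN}", "Content-Type": "application/json"},
    json=payload,
)
print(response.json())
```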

### Expected output

```json
{
  "compressed_prompt": "Question: Sam bought a dozen boxes, each with 30 highlighter pens inside, for $10 each. He reanged five of boxes into packages of sixlters each and sold them $3 per. He sold the rest theters separately at the of three pens $2. How much did make in total, dollars?\nLets think step step\nSam bought 1 boxes x00 oflters.\nHe bought 12 00ters in total\nSam then took5 boxes 6ters0ters\nHe sold these boxes for 5 *5\nAfterelling these boxes there were 30330ters remaining\nese form 330 /30 of three\n sold each for2 each, so made * =0 from\n total, he0 $15\nSince his original1 he earned $120 = $115 in profit.\nThe answer is 115",
  "origin_tokens": 2365,
  "compressed_tokens": 174,
  "ratio": "13.6x",
  "saving": ", Saving $0.1 in GPT-4."
}
```
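
For reference, the reported ratio is consistent with origin_tokens / compressed_tokens: 2365 / 174 ≈ 13.6x.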