dongx1x committed on
Commit
3488d15
1 Parent(s): c1e83cd

add encrypted models


Signed-off-by: Xiaocheng Dong <xiaocheng.dong@intel.com>

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.aes filter=lfs diff=lfs merge=lfs -text
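The new rule routes `*.aes` blobs through Git LFS, so a plain checkout holds only small pointer stubs (the `version` / `oid` / `size` triples shown in the hunks below) until `git lfs pull` fetches the real content. As a minimal illustrative sketch (the helper name and size threshold are ours, not part of this repo), unfetched stubs can be detected like so:

```python
# Sketch: detect Git LFS pointer stubs among the *.aes files.
# Pointer stubs are tiny text files whose first line names the LFS spec;
# the fetched encrypted shards themselves are ~2 GB binaries.
from pathlib import Path

def is_lfs_pointer(path: Path, max_bytes: int = 512) -> bool:
    if path.stat().st_size > max_bytes:
        return False
    return path.read_bytes().startswith(
        b"version https://git-lfs.github.com/spec/v1"
    )

for p in sorted(Path(".").glob("*.aes")):
    print(p.name, "pointer stub" if is_lfs_pointer(p) else "real content")
```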
README.md ADDED
@@ -0,0 +1,145 @@
+ ---
+ language:
+ - en
+ pipeline_tag: text-generation
+ inference: false
+ tags:
+ - facebook
+ - meta
+ - pytorch
+ - llama
+ - llama-2
+ - sharded
+ ---
+ # **llama-2-chat-7b-hf (sharded)**
+ This is a sharded version of Meta's Llama 2 7B chat model, specifically the Hugging Face version.
+
+ All details below are copied from the original repo.
+
+ Colab notebook for sharding: https://colab.research.google.com/drive/1f1q9qc56wzB_7-bjgNyLlO6f28ui1esQ
+
+ Colab notebook for inference: https://colab.research.google.com/drive/1zxwaTSvd6PSHbtyaoa7tfedAS31j_N6m
+
+ ## Inference with Google Colab and HuggingFace 🤗
+
+ Get started by saving your own copy of this [fLlama_Inference notebook](https://colab.research.google.com/drive/1Ow5cQ0JNv-vXsT-apCceH6Na3b4L7JyW?usp=sharing).
+
+ You will be able to run inference using a free Colab notebook if you select a GPU runtime. See the notebook for more details.
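As a local alternative to the Colab notebook, here is a minimal sketch of loading a sharded checkpoint with Transformers. The repo id is a placeholder, since the shards in this particular repo are AES-encrypted and would need to be decrypted first:

```python
# Minimal sketch: loading a sharded Llama 2 chat checkpoint locally.
# "your-org/Llama-2-7b-chat-hf-sharded-bf16" is a placeholder id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/Llama-2-7b-chat-hf-sharded-bf16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Shards are loaded one at a time, so peak host memory stays near the
# size of a single ~2 GB shard rather than the full model.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires `accelerate` to be installed
)
```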
+
+ ~
+
+ # **Llama 2**
+ Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted to the Hugging Face Transformers format. Links to other models can be found in the index at the bottom.
+
+ ## Model Details
+ *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
+
+ Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, are on par with some popular closed-source models like ChatGPT and PaLM.
+
+ **Model Developers** Meta
+
+ **Variations** Llama 2 comes in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations.
+
+ **Input** Models input text only.
+
+ **Output** Models generate text only.
+
+ **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
+
+
+ ||Training Data|Params|Context Length|GQA|Tokens|LR|
+ |---|---|---|---|---|---|---|
+ |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
+ |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
+ |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>|
+
+ *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The bigger model (70B) uses Grouped-Query Attention (GQA) for improved inference scalability.
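To illustrate what GQA changes, here is a small self-contained sketch of the head-sharing idea; the shapes and variable names are ours, not from the Llama 2 code:

```python
# Sketch of grouped-query attention: several query heads share each
# key/value head, shrinking the KV cache by the group factor.
import torch

batch, seq, n_q_heads, n_kv_heads, head_dim = 1, 16, 8, 2, 64
group = n_q_heads // n_kv_heads  # 4 query heads per shared KV head

q = torch.randn(batch, n_q_heads, seq, head_dim)
k = torch.randn(batch, n_kv_heads, seq, head_dim)
v = torch.randn(batch, n_kv_heads, seq, head_dim)

# Expand KV heads to match the query heads by repeating each group.
k = k.repeat_interleave(group, dim=1)
v = v.repeat_interleave(group, dim=1)

attn = torch.softmax(q @ k.transpose(-2, -1) / head_dim**0.5, dim=-1)
out = attn @ v  # (batch, n_q_heads, seq, head_dim)
```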
+
+ **Model Dates** Llama 2 was trained between January 2023 and July 2023.
+
+ **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
+
+ **License** A custom commercial license is available at: [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/)
+
+ **Research Paper** ["Llama 2: Open Foundation and Fine-Tuned Chat Models"](https://arxiv.org/abs/2307.09288)
+
+ ## Intended Use
+ **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
+
+ To get the expected features and performance for the chat versions, a specific format needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212).
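Concretely, a single-turn prompt assembled per the referenced `chat_completion` code looks like this (the example strings are ours):

```python
# Llama-2-Chat single-turn prompt template, following the referenced
# reference code: the system prompt is wrapped in <<SYS>> tags inside the
# first [INST] block. BOS/EOS are added by the tokenizer/generation loop.
system = "You are a helpful, respectful and honest assistant."
user = " Explain Git LFS in one sentence. ".strip()  # strip() avoids double spaces

prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"
print(prompt)
```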
+
+ **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
+
+ ## Hardware and Software
+ **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
+
+ **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta's sustainability program.
+
+ ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)|
+ |---|---|---|---|
+ |Llama 2 7B|184320|400|31.22|
+ |Llama 2 13B|368640|400|62.44|
+ |Llama 2 70B|1720320|400|291.42|
+ |Total|3311616||539.00|
+
+ **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
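Two notes on the table. First, the three rows shown sum to 385.08 tCO2eq; the 539.00 total matches the paper's full table, which also includes the unreleased 34B variant. Second, the per-row figures are consistent with a grid carbon intensity of roughly 0.42 kgCO2eq/kWh, a value implied by (not stated in) the table, as this arithmetic check shows:

```python
# Sanity-check the 7B row: GPU-hours x per-GPU power -> energy, then an
# assumed grid intensity (back-solved from the table, not stated in it).
gpu_hours = 184_320                        # Llama 2 7B
power_kw = 0.400                           # 400 W per A100-80GB
energy_kwh = gpu_hours * power_kw          # 73,728 kWh
intensity = 0.4235                         # kgCO2eq/kWh, assumption
print(round(energy_kwh * intensity / 1000, 2))  # -> 31.22 tCO2eq, as tabulated
```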
+
+ ## Training Data
+ **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
+
+ **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
+
+ ## Evaluation Results
+
+ In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
+
+ |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
+ |---|---|---|---|---|---|---|---|---|---|
+ |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
+ |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
+ |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
+ |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
+ |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
+ |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
+ |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
+
+ **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonsenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *Math:* We report the average of the GSM8K (8-shot) and MATH (4-shot) benchmarks at top 1.
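For reference, pass@k is usually estimated with the unbiased estimator from the HumanEval paper; a minimal sketch (not Meta's internal evaluation library):

```python
# Unbiased pass@k estimator (Chen et al., 2021): given n samples per task
# of which c pass the unit tests, pass@k = 1 - C(n-c, k) / C(n, k).
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:  # every size-k draw must contain a correct sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With a single sample per task, pass@1 reduces to the fraction solved.
print(pass_at_k(n=1, c=1, k=1))  # -> 1.0
```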
+
+ |||TruthfulQA|ToxiGen|
+ |---|---|---|---|
+ |Llama 1|7B|27.42|23.00|
+ |Llama 1|13B|41.74|23.08|
+ |Llama 1|33B|44.19|22.57|
+ |Llama 1|65B|48.71|21.77|
+ |Llama 2|7B|33.29|**21.25**|
+ |Llama 2|13B|41.86|26.10|
+ |Llama 2|70B|**50.18**|24.60|
+
+ **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
+
+
+ |||TruthfulQA|ToxiGen|
+ |---|---|---|---|
+ |Llama-2-Chat|7B|57.04|**0.00**|
+ |Llama-2-Chat|13B|62.18|**0.00**|
+ |Llama-2-Chat|70B|**64.14**|0.01|
+
+ **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
+
+ ## Ethical Considerations and Limitations
+ Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
+
+ Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/).
+
+ ## Reporting Issues
+ Please report any software "bug" or other problems with the models through one of the following means:
+ - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
+ - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
+ - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
+
+ ## Llama Model Index
+ |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
+ |---|---|---|---|---|
+ |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
+ |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)|
+ |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)|
config.json.aes ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:51fecd5be226dd402600202e73d6d46446a9efcc35b8eb3356a2c1f2387f23f4
+ size 698
encryption-config.json ADDED
@@ -0,0 +1 @@
+ {"kbs": "", "kbs_url": "", "key_id": "", "files": ["../Llama-2-7b-chat-hf-sharded-bf16-aes/pytorch_model-00005-of-00007.bin.aes", "../Llama-2-7b-chat-hf-sharded-bf16-aes/pytorch_model-00003-of-00007.bin.aes", "../Llama-2-7b-chat-hf-sharded-bf16-aes/pytorch_model-00001-of-00007.bin.aes", "../Llama-2-7b-chat-hf-sharded-bf16-aes/pytorch_model-00006-of-00007.bin.aes", "../Llama-2-7b-chat-hf-sharded-bf16-aes/pytorch_model.bin.index.json.aes", "../Llama-2-7b-chat-hf-sharded-bf16-aes/tokenizer.model.aes", "../Llama-2-7b-chat-hf-sharded-bf16-aes/pytorch_model-00002-of-00007.bin.aes", "../Llama-2-7b-chat-hf-sharded-bf16-aes/pytorch_model-00007-of-00007.bin.aes", "../Llama-2-7b-chat-hf-sharded-bf16-aes/tokenizer_config.json.aes", "../Llama-2-7b-chat-hf-sharded-bf16-aes/generation_config.json.aes", "../Llama-2-7b-chat-hf-sharded-bf16-aes/special_tokens_map.json.aes", "../Llama-2-7b-chat-hf-sharded-bf16-aes/tokenizer.json.aes", "../Llama-2-7b-chat-hf-sharded-bf16-aes/pytorch_model-00004-of-00007.bin.aes", "../Llama-2-7b-chat-hf-sharded-bf16-aes/config.json.aes"]}
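The config above names a key broker service (KBS, fields left blank here) and lists the encrypted files. The commit does not document the cipher, nonce layout, or key-retrieval protocol, so the following is purely a hypothetical sketch: AES-256-GCM, a 12-byte nonce prepended to each file, and a `/get_key` endpoint are all assumptions, not this repo's actual scheme.

```python
# Hypothetical decryption sketch. The cipher (AES-256-GCM), the nonce
# layout (first 12 bytes of each file), and the KBS endpoint shape are
# assumptions for illustration only -- not documented by this commit.
import json
import requests
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

with open("encryption-config.json") as f:
    cfg = json.load(f)

# Assumed KBS call; real key brokers typically also require attestation.
resp = requests.get(f"{cfg['kbs_url']}/get_key", params={"key_id": cfg["key_id"]})
resp.raise_for_status()
key = resp.content  # assume a raw 32-byte AES key

for path in cfg["files"]:
    with open(path, "rb") as f:
        blob = f.read()
    nonce, ciphertext = blob[:12], blob[12:]  # assumed layout
    plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
    with open(path.removesuffix(".aes"), "wb") as f:
        f.write(plaintext)
```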
generation_config.json.aes ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7224c9cf1afaf08c7e7ca327053c5802919811491bb2057e9f8870bf4d9baec8
+ size 237
pytorch_model-00001-of-00007.bin.aes ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3bda4ab3e37edb06cc26eba1278870f1fe5cb9426354effa9d1527e5f3abbadf
+ size 1981889935
pytorch_model-00002-of-00007.bin.aes ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2aefebfc0adcc4ccb537135071072e7e8d6a5daccacc996b07da9086b9b71b91
+ size 1990296873
pytorch_model-00003-of-00007.bin.aes ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:795aea532ec2bf18a4ed05760a83361392b5f2f7bc395297dcf8c87ffc9b7ef3
+ size 1990296937
pytorch_model-00004-of-00007.bin.aes ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d335bc7e84cdca95e01baaeb1c8cd0c2af5d16842486f2523e8132cbf240ae7d
+ size 1990296937
pytorch_model-00005-of-00007.bin.aes ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa01855267988e569ae298fde11bba59200c93e8d01dbdbeba1bfcc410124185
+ size 1933656773
pytorch_model-00006-of-00007.bin.aes ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:91dde16e03c6e23815c2f900e2ecb1f01860a940f08c9e1a8b20c238ecb691e1
+ size 1933673833
pytorch_model-00007-of-00007.bin.aes ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9f5ac21c4f0b1c5bf49f2e2d4dbfecbc1b19bfb4ffcac9fc4fa9c86409449d25
+ size 1656836607
pytorch_model.bin.index.json.aes ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a7f0896e84070f8ead92646072e33deedcf9f2bd146ce3614ea5e31550e84f79
+ size 26828
special_tokens_map.json.aes ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0d8b616578915505c65664b160b995fdabe25bcc18719e23f34e85f234c49fde
+ size 454
tokenizer.json.aes ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:58e758d0bddd0a2942164b62a0de813c8124dd4cf8aa79bbf9698d41ff309ce9
+ size 1842807
tokenizer.model.aes ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:00161536b939bd8b8ab4c08e14d9c9ca8f5249c9ed90c25083f0ce082f0d410e
+ size 499763
tokenizer_config.json.aes ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:38bc82d26e6bf1db26c973fee8d48aee7991a2734b422537cd42d5548cb60a6b
+ size 759