jncraton committed
Commit 0449f89
1 Parent(s): 52fe942

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,117 @@
+ ---
+ license: apache-2.0
+ ---
+
+
+
+
+ <p style="font-size:20px;" align="center">
+ 🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
+ <p align="center">
+ 🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> • 🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">GitHub Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
+ </p>
+ <p align="center">
+ 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
+ </p>
+
+
+
+ ## News 🔥🔥🔥 [2024/04/15]
+
+ We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
+ which have improved performance on complex chat, multilingual, reasoning, and agent tasks.
+ The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.
+
+ - WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models
+ and consistently outperforms all existing state-of-the-art open-source models.
+ - WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice among models of its size. Its weights will be available in the coming days.
+ - WizardLM-2 7B is the fastest and achieves performance comparable to leading open-source models that are 10x larger.
+
+ For more details on WizardLM-2, please read our [release blog post](https://wizardlm.github.io/WizardLM2) and upcoming paper.
+
+
+ ## Model Details
+
+ * **Model name**: WizardLM-2 7B
+ * **Developed by**: WizardLM@Microsoft AI
+ * **Base model**: [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
+ * **Parameters**: 7B
+ * **Language(s)**: Multilingual
+ * **Blog**: [Introducing WizardLM-2](https://wizardlm.github.io/WizardLM2)
+ * **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
+ * **Paper**: WizardLM-2 (Upcoming)
+ * **License**: Apache 2.0
+
+
+
+ ## Model Capabilities
+
+
+ **MT-Bench**
+
+ We adopt the automatic MT-Bench evaluation framework based on GPT-4, proposed by LMSYS, to assess model performance.
+ WizardLM-2 8x22B demonstrates highly competitive performance even against the most advanced proprietary models.
+ Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the leading baselines at the 7B to 70B model scales.
+
+ <p align="center" width="100%">
+ <a><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
+ </p>
+
+
+ **Human Preferences Evaluation**
+
+ We carefully collected a complex and challenging set of real-world instructions covering the main categories of human requests, such as writing, coding, math, reasoning, agent, and multilingual tasks.
+ We report the win:loss rates without ties:
+
+ - WizardLM-2 8x22B falls just slightly behind GPT-4-1106-preview, and is significantly stronger than Command R Plus and GPT4-0314.
+ - WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
+ - WizardLM-2 7B is comparable to Qwen1.5-32B-Chat, and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
+
+ <p align="center" width="100%">
+ <a><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
+ </p>
+
+
+
+
+
+ ## Method Overview
+ We built a **fully AI-powered synthetic training system** to train the WizardLM-2 models; please refer to our [blog](https://wizardlm.github.io/WizardLM2) for more details of this system.
+
+ <p align="center" width="100%">
+ <a><img src="https://raw.githubusercontent.com/WizardLM/WizardLM2/main/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
+ </p>
+
+
+
+
+
+ ## Usage
+
+ ❗<b>Note on system prompt usage:</b>
+
+
+ <b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:
+
+ ```
+ A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
+ detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
+ USER: Who are you? ASSISTANT: I am WizardLM.</s>......
+ ```
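+
+ For programmatic use, the snippet below sketches one way to assemble this multi-turn prompt in Python. It is a minimal illustration of the format above; the `format_prompt` helper and its argument names are hypothetical, not part of the official release.
+
+ ```python
+ # Minimal sketch of the Vicuna-style prompt format shown above.
+ # `format_prompt` is a hypothetical helper, not part of the official release.
+
+ SYSTEM = (
+     "A chat between a curious user and an artificial intelligence assistant. "
+     "The assistant gives helpful, detailed, and polite answers to the user's questions."
+ )
+
+ def format_prompt(turns):
+     """Build a multi-turn prompt from (user, assistant) pairs.
+
+     Pass None as the assistant reply of the final turn to leave the prompt
+     open for the model to complete. Completed assistant turns are terminated
+     with the </s> end-of-sequence token, per the format above.
+     """
+     prompt = SYSTEM
+     for user, assistant in turns:
+         prompt += f" USER: {user} ASSISTANT:"
+         if assistant is not None:
+             prompt += f" {assistant}</s>"
+     return prompt
+
+ print(format_prompt([("Hi", "Hello."), ("Who are you?", None)]))
+ ```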
+
+ <b>Inference WizardLM-2 Demo Script</b>
+
+ We provide WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our GitHub.
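+
+ The `model.bin` and `vocabulary.json` files in this commit suggest these weights are a CTranslate2 conversion rather than standard `transformers` weights. As a rough, unofficial sketch (assuming the converted model files sit in the current directory and that the base model's tokenizer, listed under Model Details, matches this vocabulary), generation might look like this:
+
+ ```python
+ import ctranslate2
+ from transformers import AutoTokenizer
+
+ # Assumptions: "." holds the CTranslate2 model files from this commit, and
+ # the base model's tokenizer is compatible with its vocabulary.
+ generator = ctranslate2.Generator(".", device="cpu")
+ tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
+
+ prompt = (
+     "A chat between a curious user and an artificial intelligence assistant. "
+     "The assistant gives helpful, detailed, and polite answers to the user's "
+     "questions. USER: Who are you? ASSISTANT:"
+ )
+
+ # CTranslate2 consumes string tokens rather than ids, so convert first.
+ tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
+ results = generator.generate_batch(
+     [tokens],
+     max_length=256,
+     sampling_temperature=0.7,
+     include_prompt_in_result=False,
+ )
+ print(tokenizer.decode(results[0].sequences_ids[0]))
+ ```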
+
+
+
+
+
+
+
+
+
+
+
+
+
config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "bos_token": "<s>",
+   "eos_token": "</s>",
+   "layer_norm_epsilon": 1e-05,
+   "multi_query_attention": true,
+   "unk_token": "<unk>"
+ }
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "transformers_version": "4.36.2"
+ }
model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd319b1a25d8fb357d942156c1eee46b47b9824be7b9168fd0049da7724761a3
+ size 7247791615
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<unk>",
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer_config.json ADDED
@@ -0,0 +1,43 @@
+ {
+   "add_bos_token": true,
+   "add_eos_token": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [],
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "</s>",
+   "legacy": true,
+   "model_max_length": 8192,
+   "pad_token": "<unk>",
+   "padding_side": "right",
+   "sp_model_kwargs": {},
+   "spaces_between_special_tokens": false,
+   "tokenizer_class": "LlamaTokenizer",
+   "unk_token": "<unk>",
+   "use_default_system_prompt": false
+ }
vocabulary.json ADDED
The diff for this file is too large to render. See raw diff