DBMe committed
Commit 639ca63 · verified · 1 Parent(s): 41d89d0

Update README.md

Files changed (1): README.md +9 -109
README.md CHANGED
@@ -97,120 +97,20 @@ model-index:
  name: Open LLM Leaderboard
  ---

- <p style="font-size:20px;" align="center">
- 🏠 <a href="https://wizardlm.github.io/WizardLM2" target="_blank">WizardLM-2 Release Blog</a> </p>
- <p align="center">
- 🤗 <a href="https://huggingface.co/collections/microsoft/wizardlm-2-661d403f71e6c8257dbd598a" target="_blank">HF Repo</a> • 🐱 <a href="https://github.com/victorsungo/WizardLM/tree/main/WizardLM-2" target="_blank">Github Repo</a> • 🐦 <a href="https://twitter.com/WizardLM_AI" target="_blank">Twitter</a> • 📃 <a href="https://arxiv.org/abs/2304.12244" target="_blank">[WizardLM]</a> • 📃 <a href="https://arxiv.org/abs/2306.08568" target="_blank">[WizardCoder]</a> • 📃 <a href="https://arxiv.org/abs/2308.09583" target="_blank">[WizardMath]</a> <br>
- </p>
- <p align="center">
- 👋 Join our <a href="https://discord.gg/VZjjHtWrKs" target="_blank">Discord</a>
- </p>

- ## See [here](https://huggingface.co/lucyknada/microsoft_WizardLM-2-7B) for the WizardLM-2-7B re-upload.

- ## News 🔥🔥🔥 [2024/04/15]

- We introduce and open-source WizardLM-2, our next-generation state-of-the-art large language models,
- which have improved performance on complex chat, multilingual, reasoning, and agent tasks.
- The new family includes three cutting-edge models: WizardLM-2 8x22B, WizardLM-2 70B, and WizardLM-2 7B.

- - WizardLM-2 8x22B is our most advanced model; it demonstrates highly competitive performance compared to leading proprietary models
- and consistently outperforms all existing state-of-the-art open-source models.
- - WizardLM-2 70B reaches top-tier reasoning capabilities and is the first choice among models of its size.
- - WizardLM-2 7B is the fastest and achieves performance comparable to existing leading open-source models 10x its size.
-
- For more details on WizardLM-2, please read our [release blog post](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/) and upcoming paper.
-
- ## Model Details
-
- * **Model name**: WizardLM-2 8x22B
- * **Developed by**: WizardLM@Microsoft AI
- * **Model type**: Mixture of Experts (MoE)
- * **Base model**: [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1)
- * **Parameters**: 141B
- * **Language(s)**: Multilingual
- * **Blog**: [Introducing WizardLM-2](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/)
- * **Repository**: [https://github.com/nlpxucan/WizardLM](https://github.com/nlpxucan/WizardLM)
- * **Paper**: WizardLM-2 (Upcoming)
- * **License**: Apache 2.0
-
- ## Model Capabilities
-
- **MT-Bench**
-
- We also adopt the automatic MT-Bench evaluation framework, based on GPT-4 and proposed by lmsys, to assess model performance.
- WizardLM-2 8x22B demonstrates highly competitive performance even compared to the most advanced proprietary models.
- Meanwhile, WizardLM-2 7B and WizardLM-2 70B are the top-performing models among the other leading baselines at the 7B and 70B scales.
-
- <p align="center" width="100%">
- <a ><img src="https://web.archive.org/web/20240415175608im_/https://wizardlm.github.io/WizardLM2/static/images/mtbench.png" alt="MTBench" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
- </p>
-
- **Human Preferences Evaluation**
-
- We carefully collected a complex and challenging evaluation set of real-world instructions covering the main categories of human requests, such as writing, coding, math, reasoning, agent, and multilingual tasks.
- We report the win:loss rate, excluding ties:
-
- - WizardLM-2 8x22B falls only slightly behind GPT-4-1106-preview and is significantly stronger than Command R Plus and GPT4-0314.
- - WizardLM-2 70B is better than GPT4-0613, Mistral-Large, and Qwen1.5-72B-Chat.
- - WizardLM-2 7B is comparable with Qwen1.5-32B-Chat and surpasses Qwen1.5-14B-Chat and Starling-LM-7B-beta.
-
- <p align="center" width="100%">
- <a ><img src="https://web.archive.org/web/20240415163303im_/https://wizardlm.github.io/WizardLM2/static/images/winall.png" alt="Win" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
- </p>
-
- ## Method Overview
- We built a **fully AI-powered synthetic training system** to train the WizardLM-2 models; please refer to our [blog](https://web.archive.org/web/20240415221214/https://wizardlm.github.io/WizardLM2/) for more details on this system.
-
- <p align="center" width="100%">
- <a ><img src="https://web.archive.org/web/20240415163303im_/https://wizardlm.github.io/WizardLM2/static/images/exp_1.png" alt="Method" style="width: 96%; min-width: 300px; display: block; margin: auto;"></a>
- </p>
-
- ## Usage
-
- ❗<b>Note on model system prompt usage:</b>
-
- <b>WizardLM-2</b> adopts the prompt format from <b>Vicuna</b> and supports **multi-turn** conversation. The prompt should be as follows:
-
- ```
- A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful,
- detailed, and polite answers to the user's questions. USER: Hi ASSISTANT: Hello.</s>
- USER: Who are you? ASSISTANT: I am WizardLM.</s>......
- ```
-
- <b>Inference WizardLM-2 Demo Script</b>
-
- We provide a WizardLM-2 inference demo [code](https://github.com/nlpxucan/WizardLM/tree/main/demo) on our GitHub.
-
- # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
- Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_alpindale__WizardLM-2-8x22B)

- | Metric |Value|
- |-------------------|----:|
- |Avg. |32.61|
- |IFEval (0-Shot) |52.72|
- |BBH (3-Shot) |48.58|
- |MATH Lvl 5 (4-Shot)|22.28|
- |GPQA (0-shot) |17.56|
- |MuSR (0-shot) |14.54|
- |MMLU-PRO (5-shot) |39.96|
+ Quantized model => https://huggingface.co/alpindale/WizardLM-2-8x22B

+ **Quantization Details:**
+ Quantization is done using turboderp's ExLlamaV2 v0.2.2.

+ I use the default calibration datasets and arguments. The repo also includes a "measurement.json" file, which was used during the quantization process.
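
+ For reference, a conversion along these lines looks roughly like the following (the paths and the 4.0 BPW target are placeholders; adjust them to your model and target size):

+ ```
+ # Illustrative ExLlamaV2 conversion:
+ #   -i  = source FP16 model directory
+ #   -o  = scratch/working directory
+ #   -cf = output directory for the compiled quantized model
+ #   -b  = target bits per weight
+ #   -m  = reuse the bundled measurement.json instead of redoing the measurement pass
+ python convert.py -i /models/WizardLM-2-8x22B -o /tmp/exl2-work \
+     -cf /models/WizardLM-2-8x22B-exl2 -b 4.0 -m measurement.json
+ ```
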
+ For models with bits per weight (BPW) over 6.0, I default to quantizing the `lm_head` layer at 8 bits instead of the standard 6 bits.
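
+ In practice that just means adding ExLlamaV2's head-bits flag (`-hb 8`) on top of the usual conversion arguments, e.g.:

+ ```
+ # Example only: higher-BPW quant with an 8-bit lm_head
+ python convert.py -i /models/WizardLM-2-8x22B -o /tmp/exl2-work \
+     -cf /models/WizardLM-2-8x22B-exl2 -b 6.5 -hb 8 -m measurement.json
+ ```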

+ ---

+ **Who are you? What's with these weird BPWs on [insert model here]?**
+ I specialize in optimized EXL2 quantization for models in the 70B to 100B+ range, specifically tailored for 48GB VRAM setups. My rig is built using 2 x 3090s with a Ryzen APU (the APU handles desktop output only, so no VRAM is wasted on the 3090s). I use TabbyAPI for inference, targeting context sizes between 32K and 64K.

+ Every model I upload includes a `config.yml` file with my ideal TabbyAPI settings. If you're using my config, don’t forget to set `PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync` to save some VRAM.
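
+ For example, set it in the environment before launching TabbyAPI (this assumes the usual `main.py` + `config.yml` layout; adjust if you use a start script):

+ ```
+ # Illustrative launch; adapt to however you normally start TabbyAPI
+ export PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync
+ python main.py
+ ```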