---
base_model: Xwin-LM/Xwin-LM-13B-V0.2
inference: false
license: llama2
model_creator: Xwin-LM
model_name: Xwin LM 13B v0.2
model_type: llama
prompt_template: 'A chat between a curious user and an artificial intelligence assistant.
  The assistant gives helpful, detailed, and polite answers to the user''s questions.
  USER: {prompt} ASSISTANT:

  '
quantized_by: TheBloke
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Xwin LM 13B v0.2 - AWQ
- Model creator: [Xwin-LM](https://huggingface.co/Xwin-LM)
- Original model: [Xwin LM 13B v0.2](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2)

<!-- description start -->
## Description

This repo contains AWQ model files for [Xwin-LM's Xwin LM 13B v0.2](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2).

### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.

It is also now supported by the continuous-batching server [vLLM](https://github.com/vllm-project/vllm), allowing the use of Llama AWQ models for high-throughput concurrent inference in multi-user server scenarios.

As of September 25th 2023, preliminary Llama-only AWQ support has also been added to [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference).

Note that, at the time of writing, overall throughput is still lower than running vLLM or TGI with unquantised models. However, AWQ lets you use much smaller GPUs, which can make deployment easier and cheaper. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
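
As a rough back-of-envelope illustration of where that saving comes from (weights only; activations, KV cache and framework overhead all add to this, so treat it as a sketch, not a sizing guide):

```python
# Weight memory ~= parameter count x bytes per parameter.
# fp16 uses 2 bytes per weight; 4-bit AWQ uses ~0.5 bytes per weight
# (ignoring the small extra cost of per-group scales and zero points).
params = 70e9  # a 70B model

fp16_gb = params * 2 / 1024**3
awq_gb = params * 0.5 / 1024**3

print(f"fp16 weights:  ~{fp16_gb:.0f} GB")  # ~130 GB -> needs 2 x 80GB GPUs
print(f"4-bit weights: ~{awq_gb:.0f} GB")   # ~33 GB  -> fits on 1 x 48GB GPU
```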
<!-- description end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Xwin-LM-13B-v0.2-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Xwin-LM-13B-v0.2-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Xwin-LM-13B-v0.2-GGUF)
* [Xwin-LM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Vicuna

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:

```

<!-- prompt-template end -->

<!-- README_AWQ.md-provided-files start -->
## Provided files, and AWQ parameters

For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/Xwin-LM-13B-v0.2-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.25 GB |

<!-- README_AWQ.md-provided-files end -->
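
To double-check the quantisation parameters of a downloaded copy, you can read the config that AutoAWQ stores alongside the weights. A minimal sketch, assuming the file is named `quant_config.json` as AutoAWQ 0.1.x writes it:

```python
import json

from huggingface_hub import hf_hub_download

# Fetch just the quantisation config, not the full model.
config_path = hf_hub_download("TheBloke/Xwin-LM-13B-v0.2-AWQ", "quant_config.json")

with open(config_path) as f:
    quant_config = json.load(f)

# Expect values matching the table above, e.g. {"w_bit": 4, "q_group_size": 128, ...}
print(quant_config)
```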

<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

Note: at the time of writing, vLLM has not yet done a new release with AWQ support.

If you try the vLLM examples below and get an error about `quantization` being unrecognised, or other AWQ-related issues, please install vLLM from the GitHub source.

When using vLLM as a server, pass the `--quantization awq` parameter, for example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/Xwin-LM-13B-v0.2-AWQ --quantization awq --dtype half
```
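
Once the server is running you can POST prompts to it. A minimal sketch, assuming the `/generate` route and default port 8000 that `vllm.entrypoints.api_server` exposed at the time of writing:

```python
import requests

payload = {
    "prompt": "A chat between a curious user and an artificial intelligence assistant. "
              "The assistant gives helpful, detailed, and polite answers to the user's questions. "
              "USER: Tell me about AI ASSISTANT:",
    "max_tokens": 128,
    "temperature": 0.7,
}

response = requests.post("http://localhost:8000/generate", json=payload)
print(response.json()["text"])  # list of completions, prompt included
```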

When using vLLM from Python code, pass the `quantization=awq` parameter, for example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/Xwin-LM-13B-v0.2-AWQ", quantization="awq", dtype="half")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)

Use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`

Example Docker parameters:

```shell
--model-id TheBloke/Xwin-LM-13B-v0.2-AWQ --port 3000 --quantize awq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
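
For context, those parameters are appended to a normal TGI `docker run` invocation. A sketch only; the volume path and port mapping below are illustrative assumptions, not tested values:

```shell
docker run --gpus all --shm-size 1g -p 3000:3000 \
    -v /path/to/data:/data \
    ghcr.io/huggingface/text-generation-inference:1.1.0 \
    --model-id TheBloke/Xwin-LM-13B-v0.2-AWQ --port 3000 --quantize awq \
    --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```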

Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:

'''

client = InferenceClient(endpoint_url)
response = client.text_generation(prompt_template,
                                  max_new_tokens=128,
                                  do_sample=True,
                                  temperature=0.7,
                                  top_p=0.95,
                                  top_k=40,
                                  repetition_penalty=1.1)

print(f"Model output: {response}")
```
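
If you would rather not depend on `huggingface-hub`, TGI also accepts plain HTTP requests. A sketch using `requests` against TGI's standard `/generate` route (port 3000 matches the Docker parameters above):

```python
import requests

prompt = "Tell me about AI"
prompt_template = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    f"USER: {prompt} ASSISTANT:"
)

# TGI's REST API takes "inputs" plus a "parameters" dict.
response = requests.post(
    "http://localhost:3000/generate",
    json={
        "inputs": prompt_template,
        "parameters": {"max_new_tokens": 128, "temperature": 0.7, "top_p": 0.95},
    },
)
print(response.json()["generated_text"])
```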
<!-- README_AWQ.md-use-from-tgi end -->

<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code

### Install the necessary packages

Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.1.1 or later

```shell
pip3 install autoawq
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### You can then try the following example code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/Xwin-LM-13B-v0.2-AWQ"

# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
                                          trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)

prompt = "Tell me about AI"
prompt_template = f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:

'''

print("\n\n*** Generate:")

tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

# Generate output
generation_output = model.generate(
    tokens,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    max_new_tokens=512
)

print("Output: ", tokenizer.decode(generation_output[0]))

"""
# Inference should be possible with transformers pipeline as well in future
# But currently this is not yet supported by AutoAWQ (correct as of September 25th 2023)
from transformers import pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
"""
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with:

- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ)
- [vLLM](https://github.com/vllm-project/vllm)
- [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)

TGI merged AWQ support on September 25th, 2023: [TGI PR #1054](https://github.com/huggingface/text-generation-inference/pull/1054). Use the `:latest` Docker container until the next TGI release is made.

<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: Xwin-LM's Xwin LM 13B v0.2

<h3 align="center">
Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment
</h3>

<p align="center">
<a href="https://github.com/Xwin-LM/Xwin-LM"><img src="https://img.shields.io/badge/GitHub-yellow.svg?style=social&logo=github"></a><a href="https://huggingface.co/Xwin-LM"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue"></a>
</p>

**Step up your LLM alignment with Xwin-LM!**

Xwin-LM aims to develop and open-source alignment technologies for large language models, including supervised fine-tuning (SFT), reward models (RM), rejection sampling, reinforcement learning from human feedback (RLHF), etc. Our first release, built upon the Llama2 base models, ranked **TOP-1** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Notably, it was **the first to surpass GPT-4** on this benchmark. The project will be continuously updated.

## News

- 💥 [Oct 12, 2023] [Xwin-LM-7B-V0.2](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.2) and [Xwin-LM-13B-V0.2](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2) have been released, with improved comparison data and RL training (i.e., PPO). Their win-rates vs. GPT-4 have increased significantly, reaching **59.83%** (7B model) and **70.36%** (13B model) respectively. The 70B model will be released soon.
- 💥 [Sep, 2023] We released [Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1), which has achieved a win-rate of **95.57%** against Davinci-003 on the [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/) benchmark, ranking **TOP-1** on AlpacaEval. **It was the FIRST model to surpass GPT-4** on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/). Also note that its win-rate vs. GPT-4 is **60.61%**.
- 🔍 [Sep, 2023] RLHF plays a crucial role in the strong performance of the Xwin-LM-V0.1 release!
- 💥 [Sep, 2023] We released [Xwin-LM-13B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1), which has achieved a **91.76%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking top-1 among all 13B models.
- 💥 [Sep, 2023] We released [Xwin-LM-7B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1), which has achieved an **87.82%** win-rate on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), ranking top-1 among all 7B models.

## Model Card
| Model | Checkpoint | Report | License |
|------------|------------|-------------|------------------|
|Xwin-LM-7B-V0.2| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.2" target="_blank">HF Link</a> | 📃**Coming soon (Stay tuned)** | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a> |
|Xwin-LM-13B-V0.2| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.2" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a> |
|Xwin-LM-7B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-7B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a> |
|Xwin-LM-13B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-13B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a> |
|Xwin-LM-70B-V0.1| 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1" target="_blank">HF Link</a> | | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a> |

## Benchmarks

### Xwin-LM performance on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/)

The table below displays the performance of Xwin-LM on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/), which evaluates its win-rate against Text-Davinci-003 across 805 questions. To provide a comprehensive evaluation, we present, for the first time, the win-rate against ChatGPT and GPT-4 as well. Our Xwin-LM model family establishes a new state-of-the-art performance across all metrics. Notably, Xwin-LM-70B-V0.1 has eclipsed GPT-4 for the first time, achieving an impressive win-rate of **95.57%** against Text-Davinci-003 and **60.61%** against GPT-4.

| **Model** | **AlpacaEval win-rate (%) vs. Text-Davinci-003** | **AlpacaEval win-rate (%) vs. ChatGPT** | **AlpacaEval win-rate (%) vs. GPT-4** |
|----------------------------------|------------|----------|-------------|
| **Xwin-LM-70B-V0.1** | **95.57** | **87.50** | **60.61** |
| GPT-4 | 95.28 | 84.66 | 50.00 |
| **Xwin-LM-13B-V0.2** | **93.22** | **87.14** | **70.36** |
| WizardLM-70B-V1.0 | 92.91 | 80.19 | 46.70 |
| Llama-2-70B-Chat | 92.66 | 81.97 | 51.19 |
| **Xwin-LM-13B-V0.1** | **91.76** | **81.79** | **55.30** |
| ChatGPT | 89.37 | 50.00 | 16.60 |
| **Xwin-LM-7B-V0.2** | **89.31** | **79.60** | **59.83** |
| WizardLM-13B-V1.2 | 89.17 | 75.75 | 41.91 |
| **Xwin-LM-7B-V0.1** | **87.35** | **76.40** | **47.57** |
| Llama-2-13B-Chat | 81.09 | 64.22 | 30.92 |

### Case study on [AlpacaEval](https://tatsu-lab.github.io/alpaca_eval/)

| Prompts | GPT4 response | Xwin-LM response |
|----------------|------------|--------|
|I've put a plate on top of an apple, and take the plate from kitchen to dinning room. Where is the apple now|The apple is now in the dining room, as it was underneath the plate when you moved the plate from the kitchen to the dining room.|The apple is still in the kitchen, as you have only moved the plate. The apple is under the plate, so if you want to bring the apple to the dining room, you should either move the apple directly or put the apple on the plate and then move the plate to the dining room.|
| Answer the following question.<br> How do you say "good evening" in French. | In French, "good evening" is said as "bonsoir." | In French, "good evening" is translated to "bonsoir." This greeting is used in the evening and nighttime hours, typically after 6 p.m. It is a combination of "bon," which means "good," and "soir," which means "evening." To pronounce it, say "bone-swahr." |

### Xwin-LM performance on NLP foundation tasks

The following table provides a comparison of Xwin-LM with other LLMs on NLP foundation tasks from the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

| Model | MMLU 5-shot | ARC 25-shot | TruthfulQA 0-shot | HellaSwag 10-shot | Average |
|------------------|-------------|-------------|-------------------|-------------------|------------|
| Text-davinci-003 | 56.9 | **85.2** | 59.3 | 82.2 | 70.9 |
| Vicuna-13b 1.1 | 51.3 | 53.0 | 51.8 | 80.1 | 59.1 |
| Guanaco 30B | 57.6 | 63.7 | 50.7 | 85.1 | 64.3 |
| WizardLM-7B 1.0 | 42.7 | 51.6 | 44.7 | 77.7 | 54.2 |
| WizardLM-13B 1.0 | 52.3 | 57.2 | 50.5 | 81.0 | 60.2 |
| WizardLM-30B 1.0 | 58.8 | 62.5 | 52.4 | 83.3 | 64.2 |
| Llama-2-7B-Chat | 48.3 | 52.9 | 45.6 | 78.6 | 56.4 |
| Llama-2-13B-Chat | 54.6 | 59.0 | 44.1 | 81.9 | 59.9 |
| Llama-2-70B-Chat | 63.9 | 64.6 | 52.8 | 85.9 | 66.8 |
| **Xwin-LM-7B-V0.1** | 49.7 | 56.2 | 48.1 | 79.5 | 58.4 |
| **Xwin-LM-13B-V0.1** | 56.6 | 62.4 | 45.5 | 83.0 | 61.9 |
| **Xwin-LM-70B-V0.1** | **69.6** | 70.5 | **60.1** | **87.1** | **71.8** |
| **Xwin-LM-7B-V0.2** | 50.0 | 56.4 | 49.5 | 78.9 | 58.7 |
| **Xwin-LM-13B-V0.2** | 56.6 | 61.5 | 43.8 | 82.9 | 61.2 |

## Inference

### Conversation Template
To obtain the desired results, please strictly follow the conversation template when using our model for inference. Our model adopts the prompt format established by [Vicuna](https://github.com/lm-sys/FastChat) and supports **multi-turn** conversations.
```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: Hi! ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT: I am Xwin-LM.</s>......
```
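
To make the multi-turn format concrete, here is a small illustrative helper (a sketch, not part of the Xwin-LM codebase) that assembles a prompt from completed turns using the `</s>` separator shown above:

```python
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
)

def build_prompt(turns, next_user_message):
    """Assemble a multi-turn Vicuna-style prompt.

    turns: list of (user_message, assistant_reply) pairs already completed.
    The model then generates the assistant's reply to next_user_message.
    """
    prompt = SYSTEM
    for user_msg, assistant_msg in turns:
        prompt += f"USER: {user_msg} ASSISTANT: {assistant_msg}</s>"
    prompt += f"USER: {next_user_message} ASSISTANT:"
    return prompt

print(build_prompt([("Hi!", "Hello.")], "Who are you?"))
# ... USER: Hi! ASSISTANT: Hello.</s>USER: Who are you? ASSISTANT:
```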

### HuggingFace Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1")
tokenizer = AutoTokenizer.from_pretrained("Xwin-LM/Xwin-LM-7B-V0.1")
(
    prompt := "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Hello, can you help me? "
    "ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt")
samples = model.generate(**inputs, max_new_tokens=4096, temperature=0.7)
output = tokenizer.decode(samples[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(output)
# Of course! I'm here to help. Please feel free to ask your question or describe the issue you're having, and I'll do my best to assist you.
```

### vLLM Example
Because Xwin-LM is based on Llama2, it also supports rapid inference using [vLLM](https://github.com/vllm-project/vllm). Please refer to [vLLM](https://github.com/vllm-project/vllm) for detailed installation instructions.
```python
from vllm import LLM, SamplingParams
(
    prompt := "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: Hello, can you help me? "
    "ASSISTANT:"
)
sampling_params = SamplingParams(temperature=0.7, max_tokens=4096)
llm = LLM(model="Xwin-LM/Xwin-LM-7B-V0.1")
outputs = llm.generate([prompt], sampling_params)

for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(generated_text)
```

## TODO

- [ ] Release the source code
- [ ] Release more capabilities, such as math and reasoning

## Citation
Please consider citing our work if you use the data or code in this repo.
```
@software{xwin-lm,
  title = {Xwin-LM},
  author = {Xwin-LM Team},
  url = {https://github.com/Xwin-LM/Xwin-LM},
  version = {pre-release},
  year = {2023},
  month = {9},
}
```

## Acknowledgements

Thanks to [Llama 2](https://ai.meta.com/llama/), [FastChat](https://github.com/lm-sys/FastChat), [AlpacaFarm](https://github.com/tatsu-lab/alpaca_farm), and [vLLM](https://github.com/vllm-project/vllm).