TheBloke committed
Commit 90af6b4
Parent: 2ccc0cb

Upload README.md

Files changed (1): README.md (+305, -0)
---
base_model: https://huggingface.co/TokenBender/llama2-7b-chat-hf-codeCherryPop-qLoRA-merged
inference: false
license: llama2
model_creator: TokenBender
model_name: Llama-2-7B-Chat Code Cherry Pop
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
  that appropriately completes the request.


  ### Instruction:

  {prompt}


  ### Response:

  '
quantized_by: TheBloke
---

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Llama-2-7B-Chat Code Cherry Pop - AWQ
- Model creator: [TokenBender](https://huggingface.co/TokenBender)
- Original model: [Llama-2-7B-Chat Code Cherry Pop](https://huggingface.co/TokenBender/llama2-7b-chat-hf-codeCherryPop-qLoRA-merged)

<!-- description start -->
## Description

This repo contains AWQ model files for [TokenBender's Llama-2-7B-Chat Code Cherry Pop](https://huggingface.co/TokenBender/llama2-7b-chat-hf-codeCherryPop-qLoRA-merged).


### About AWQ

AWQ is an efficient, accurate and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. Compared to GPTQ, it offers faster Transformers-based inference.

It is also now supported by the continuous-batching server [vLLM](https://github.com/vllm-project/vllm), allowing AWQ models to be used for high-throughput concurrent inference in multi-user server scenarios. Note that, at the time of writing, overall throughput is still lower than running vLLM with unquantised models; however, AWQ allows much smaller GPUs to be used, which can lead to easier deployment and overall cost savings. For example, a 70B model can be run on 1 x 48GB GPU instead of 2 x 80GB.
<!-- description end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/llama2-7b-chat-codeCherryPop-qLoRA-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/llama2-7b-chat-codeCherryPop-qLoRA-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/llama2-7b-chat-codeCherryPop-qLoRA-GGUF)
* [TokenBender's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TokenBender/llama2-7b-chat-hf-codeCherryPop-qLoRA-merged)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

<!-- prompt-template end -->
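
Whichever client you use, the `{prompt}` placeholder above is simply replaced with your instruction before the text is sent to the model. As a minimal illustration in plain Python (the example instruction is arbitrary):

```python
# Illustrative only: fill the Alpaca-style template with a user instruction.
template = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

full_prompt = template.format(prompt="Write a Python function that reverses a string.")
print(full_prompt)
```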


<!-- README_AWQ.md-provided-files start -->
## Provided files and AWQ parameters

For my first release of AWQ models, I am releasing 128g models only. I will consider adding 32g as well if there is interest, and once I have done perplexity and evaluation comparisons, but at this time 32g models are still not fully tested with AutoAWQ and vLLM.

Models are released as sharded safetensors files.

| Branch | Bits | GS | AWQ Dataset | Seq Len | Size |
| ------ | ---- | -- | ----------- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/llama2-7b-chat-codeCherryPop-qLoRA-AWQ/tree/main) | 4 | 128 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.89 GB |
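
If you want to download these files directly (for example, to point a local server at a local path), a minimal sketch using `huggingface_hub` is shown below, assuming a reasonably recent version of the library; the `local_dir` value is just an example destination:

```python
# Sketch: download the AWQ files from the "main" branch listed above.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/llama2-7b-chat-codeCherryPop-qLoRA-AWQ",
    revision="main",                                      # branch from the table above
    local_dir="llama2-7b-chat-codeCherryPop-qLoRA-AWQ",   # example destination folder
)
```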

<!-- README_AWQ.md-provided-files end -->

<!-- README_AWQ.md-use-from-vllm start -->
## Serving this model from vLLM

Documentation on installing and using vLLM [can be found here](https://vllm.readthedocs.io/en/latest/).

When using vLLM as a server, pass the `--quantization awq` parameter, for example:

```shell
python3 -m vllm.entrypoints.api_server --model TheBloke/llama2-7b-chat-codeCherryPop-qLoRA-AWQ --quantization awq
```
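
Once the server is running you can send it HTTP requests. The sketch below is illustrative: it assumes the demo API server's `/generate` endpoint on the default port 8000, and reuses the Alpaca prompt template from above; adjust the host, port and sampling parameters to your setup:

```python
# Sketch: query a locally running vLLM api_server instance over HTTP.
import requests

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that checks whether a number is prime.\n\n"
    "### Response:\n"
)

response = requests.post(
    "http://localhost:8000/generate",
    json={"prompt": prompt, "max_tokens": 256, "temperature": 0.7},
)
print(response.json())
```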

When using vLLM from Python code, pass the `quantization=awq` parameter, for example:

```python
from vllm import LLM, SamplingParams

prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

llm = LLM(model="TheBloke/llama2-7b-chat-codeCherryPop-qLoRA-AWQ", quantization="awq")

outputs = llm.generate(prompts, sampling_params)

# Print the outputs.
for output in outputs:
    prompt = output.prompt
    generated_text = output.outputs[0].text
    print(f"Prompt: {prompt!r}, Generated text: {generated_text!r}")
```
<!-- README_AWQ.md-use-from-vllm end -->

<!-- README_AWQ.md-use-from-python start -->
## How to use this AWQ model from Python code

### Install the necessary packages

Requires: [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) 0.0.2 or later

```shell
pip3 install autoawq
```

If you have problems installing [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) using the pre-built wheels, install it from source instead:

```shell
pip3 uninstall -y autoawq
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip3 install .
```

### You can then try the following example code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/llama2-7b-chat-codeCherryPop-qLoRA-AWQ"

# Load model
model = AutoAWQForCausalLM.from_quantized(model_name_or_path, fuse_layers=True,
                                          trust_remote_code=False, safetensors=True)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=False)

prompt = "Tell me about AI"
prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

'''

print("\n\n*** Generate:")

tokens = tokenizer(
    prompt_template,
    return_tensors='pt'
).input_ids.cuda()

# Generate output
generation_output = model.generate(
    tokens,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    max_new_tokens=512
)

print("Output: ", tokenizer.decode(generation_output[0]))

# Inference can also be done using transformers' pipeline
from transformers import pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_AWQ.md-use-from-python end -->

<!-- README_AWQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) and [vLLM](https://github.com/vllm-project/vllm).

[Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is not yet compatible with AWQ, but a PR is open which should bring support soon: [TGI PR #781](https://github.com/huggingface/text-generation-inference/issues/781).
<!-- README_AWQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov


Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: TokenBender's Llama-2-7B-Chat Code Cherry Pop

---

### Overview:
description:

This is a llama2 7B HF chat model fine-tuned on 122k code instructions. In my early experiments it seems to be doing very well.

additional_info:

It's a bottom of the barrel model 😂 but after quantization it can be
valuable for sure. It definitely proves that a 7B can be useful for boilerplate
code stuff though.

### Plans:
next_steps: "I've a few things in mind and after that this will be more valuable."

tasks:

- name: "I'll quantize these"
  timeline: "Possibly tonight or tomorrow in the day"
  result: "Then it can be run locally with 4G ram."
- name: "I've used alpaca style instruction tuning"
  improvement: |
    I'll switch to llama2 style [INST]<<SYS>> style and see if
    it improves anything. (See the illustrative sketch after this section.)
- name: "HumanEval report and checking for any training data leaks"
- attempt: "I'll try 8k context via RoPE enhancement"
  hypothesis: "Let's see if that degrades performance or not."
commercial_use: |
  So far I think this can be used commercially but this is an adapter on Meta's llama2 with
  some gating issues so that is there.
contact_info: "If you find any issues or want to just holler at me, you can reach out to me - https://twitter.com/4evaBehindSOTA"
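
For reference, the llama2-style `[INST] <<SYS>>` format mentioned in the plans above is the standard Llama-2 chat prompt layout. A minimal illustrative sketch follows; the system and user messages are arbitrary examples, and this is not how the current Code Cherry Pop model was trained:

```python
# Illustrative only: the standard Llama-2 chat prompt layout.
system_message = "You are a helpful coding assistant."             # example system prompt
user_message = "Write a Python function that reverses a string."  # example instruction

llama2_chat_prompt = (
    f"<s>[INST] <<SYS>>\n{system_message}\n<</SYS>>\n\n"
    f"{user_message} [/INST]"
)
print(llama2_chat_prompt)
```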

### Library:
name: "peft"

### Training procedure:
quantization_config:
  load_in_8bit: False
  load_in_4bit: True
  llm_int8_threshold: 6.0
  llm_int8_skip_modules: None
  llm_int8_enable_fp32_cpu_offload: False
  llm_int8_has_fp16_weight: False
  bnb_4bit_quant_type: "nf4"
  bnb_4bit_use_double_quant: False
  bnb_4bit_compute_dtype: "float16"
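
The keys above correspond to a bitsandbytes quantization config. As a minimal sketch of how the same values would be expressed with `transformers.BitsAndBytesConfig` (a reconstruction for illustration, not the author's original training code):

```python
# Sketch: the equivalent BitsAndBytesConfig for the values listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```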
303
+
304
+ ### Framework versions:
305
+ PEFT: "0.5.0.dev0"