TheBloke committed
Commit 04272da
1 Parent(s): 8547510

Upload README.md

Files changed (1): README.md +204 -106

README.md CHANGED
@@ -1,10 +1,33 @@
---
inference: false
language:
- en
license: other
model_type: llama
pipeline_tag: text-generation
tags:
- upstage
- llama
@@ -29,156 +52,201 @@ tags:
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

- # Upstage's Llama 30B Instruct 2048 GPTQ

- These files are GPTQ model files for [Upstage's Llama 30B Instruct 2048](https://huggingface.co/upstage/llama-30b-instruct-2048).

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware used to make and upload these files!

## Repositories available

* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GPTQ)
- * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GGML)
- * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/upstage/llama-30b-instruct-2048)

## Prompt template: Orca-Hashes

```
### System:
- {System}

### User:
{prompt}

### Assistant:
```

- ## Provided files

Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

Each separate quant is in a different branch. See below for instructions on fetching from different branches.
- | Branch | Bits | Group Size | Act Order (desc_act) | File Size | ExLlama Compatible? | Made With | Description |
- | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
- | main | 4 | None | True | 16.94 GB | True | AutoGPTQ | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
- | gptq-4bit-32g-actorder_True | 4 | 32 | True | 19.44 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
- | gptq-4bit-64g-actorder_True | 4 | 64 | True | 18.18 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
- | gptq-4bit-128g-actorder_True | 4 | 128 | True | 17.55 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
- | gptq-8bit--1g-actorder_True | 8 | None | True | 32.99 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
- | gptq-8bit-128g-actorder_False | 8 | 128 | False | 33.73 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
- | gptq-3bit-128g-actorder_False | 3 | 128 | False | 13.51 GB | False | AutoGPTQ | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
- | gptq-3bit-128g-actorder_True | 3 | 128 | True | 13.51 GB | False | AutoGPTQ | 3-bit, with group size 128g and act-order. Higher quality than 128g-False but poor AutoGPTQ CUDA speed. |

## How to download from branches

- - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/upstage-llama-30b-instruct-2048-GPTQ:gptq-4bit-32g-actorder_True`
- With Git, you can clone a branch with:
```
- git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below.
-

## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

- It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/upstage-llama-30b-instruct-2048-GPTQ`.
- - To download from a specific branch, enter for example `TheBloke/upstage-llama-30b-instruct-2048-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
- 4. The model will start downloading. Once it's finished it will say "Done"
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `upstage-llama-30b-instruct-2048-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

## How to use this GPTQ model from Python code

- First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

- `GITHUB_ACTIONS=true pip install auto-gptq`

- Then try the following example code:

```python
- from transformers import AutoTokenizer, pipeline, logging
- from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/upstage-llama-30b-instruct-2048-GPTQ"
- model_basename = "model"
-
- use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

- model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
-         model_basename=model_basename,
-         use_safetensors=True,
-         trust_remote_code=False,
-         device="cuda:0",
-         use_triton=use_triton,
-         quantize_config=None)
-
- """
- To download from a specific branch, use the revision parameter, as in this example:
-
- model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
-         revision="gptq-4bit-32g-actorder_True",
-         model_basename=model_basename,
-         use_safetensors=True,
-         trust_remote_code=False,
-         device="cuda:0",
-         quantize_config=None)
- """
-
prompt = "Tell me about AI"
- system = "You are a helpful assistant"
prompt_template=f'''### System:
- {system}

### User:
{prompt}

- ### Assistant:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
- output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

- # Prevent printing spurious transformers error when using pipeline with AutoGPTQ
- logging.set_verbosity(logging.CRITICAL)
-
print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
-     repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```
 
## Compatibility

- The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLaMa (only CUDA has been tested), and Occ4m's GPTQ-for-LLaMa fork.

- ExLlama works with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.

  <!-- footer start -->
  <!-- 200823 -->
@@ -188,10 +256,12 @@ For further support, and discussions on these models and AI in general, join us

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

- ## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
@@ -203,7 +273,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req

**Special thanks to**: Aemon Algiz.

- **Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter

Thank you to all my generous patrons and donaters!
@@ -214,72 +284,102 @@ And thank you again to a16z for their generous grant.

# Original model card: Upstage's Llama 30B Instruct 2048

- ## Model Details
-
- ### Model Developers
- - [Upstage](https://en.upstage.ai)
-
- ### Backbone Model
- - [LLaMA](https://github.com/facebookresearch/llama/tree/llama_v1)
-
- ### Variations
- - It has different model parameter sizes and sequence lengths: [30B/1024](https://huggingface.co/upstage/llama-30b-instruct), [30B/2048](https://huggingface.co/upstage/llama-30b-instruct-2048), [65B/1024](https://huggingface.co/upstage/llama-65b-instruct).
-
- ### Input
- - Models solely process textual input.
-
- ### Output
- - Models solely generate textual output.

- ### License
- - This model is under a **Non-commercial** Bespoke License and governed by the Meta license. You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform), but have either lost your copy of the weights or encountered issues converting them to the Transformers format.

- ### Where to send comments
- - Instructions on how to provide feedback or comments on a model can be found by opening an issue in the [Hugging Face community's model repository](https://huggingface.co/upstage/llama-30b-instruct-2048/discussions).

## Dataset Details

### Used Datasets
- [openbookqa](https://huggingface.co/datasets/openbookqa)
- [sciq](https://huggingface.co/datasets/sciq)
- [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only)
- [GAIR/lima](https://huggingface.co/datasets/GAIR/lima)

- ## Hardware and Software

- ### Hardware
- - We utilized an A100 for training our model.

- ### Training Factors
- - We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace trainer](https://huggingface.co/docs/transformers/main_classes/trainer).

  ## Evaluation Results

### Overview
- We conducted a performance evaluation based on the tasks being evaluated on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
We evaluated our model on four benchmark datasets, which include `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`.
We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463).

### Main Results
- | Model | Average | ARC | HellaSwag | MMLU | TruthfulQA |
- |-----------------------------------------------|---------|-------|-----------|-------|------------|
- | llama-65b-instruct (***Ours***, ***Local Reproduction***) | **69.4** | **67.6** | **86.5** | **64.9** | **58.8** |
- | llama-30b-instruct-2048 (***Ours***, ***Open LLM Leaderboard***) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 |
- | Llama-2-70b-chat-hf | 66.8 | 64.6 | 85.9 | 63.9 | 52.8 |
- | llama-30b-instruct (***Ours***, ***Open LLM Leaderboard***) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 |
- | falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 |
- | llama-65b | 62.1 | 57.6 | 84.3 | 63.4 | 43.0 |
-
- ### Scripts
- Prepare evaluation environments:
```
# clone the repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
-
# change to the repository directory
cd lm-evaluation-harness
-
# check out the specific commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
```
@@ -287,11 +387,9 @@ cd lm-evaluation-harness
## Ethical Issues

### Ethical Considerations
- - There were no ethical issues involved, as we did not include the benchmark test set or the training set in the model's training process.

## Contact Us

### Why Upstage LLM?
- - [Upstage](https://en.upstage.ai)'s LLM research has yielded remarkable results. Our 30B model size **outperforms all models worldwide**, establishing itself as the leading performer. Recognizing the immense potential for private LLM adoption within companies, we invite you to effortlessly implement a private LLM and fine-tune it with your own data. For a seamless and tailored solution, please don't hesitate to reach out to us [(click here to mail)].
-
- [(click here to mail)]: mailto:contact@upstage.ai
 
---
+ base_model: https://huggingface.co/upstage/llama-30b-instruct-2048
+ datasets:
+ - sciq
+ - metaeval/ScienceQA_text_only
+ - GAIR/lima
+ - Open-Orca/OpenOrca
+ - openbookqa
inference: false
language:
- en
license: other
+ model_creator: upstage
+ model_name: Llama 30B Instruct 2048
model_type: llama
pipeline_tag: text-generation
+ prompt_template: '### System:
+
+ {system_message}
+
+
+ ### User:
+
+ {prompt}
+
+
+ ### Assistant:
+
+ '
+ quantized_by: TheBloke
tags:
- upstage
- llama
 
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

+ # Llama 30B Instruct 2048 - GPTQ
+ - Model creator: [upstage](https://huggingface.co/upstage)
+ - Original model: [Llama 30B Instruct 2048](https://huggingface.co/upstage/llama-30b-instruct-2048)
+
+ <!-- description start -->
+ ## Description

+ This repo contains GPTQ model files for [Upstage's Llama 30B Instruct 2048](https://huggingface.co/upstage/llama-30b-instruct-2048).

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

Many thanks to William Beauchamp from [Chai](https://chai-research.com/) for providing the hardware used to make and upload these files!

+ <!-- description end -->
+ <!-- repositories-available start -->
## Repositories available

+ * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GGUF)
+ * [upstage's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/upstage/llama-30b-instruct-2048)
+ <!-- repositories-available end -->

+ <!-- prompt-template start -->
## Prompt template: Orca-Hashes

```
### System:
+ {system_message}

### User:
{prompt}

### Assistant:
+
```

+ <!-- prompt-template end -->
+
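An editorial helper (not part of the original README) showing how the two placeholders above are filled in from Python; the function name is illustrative:

```python
# Fill in the Orca-Hashes template; argument names mirror the placeholders above.
def make_prompt(system_message: str, prompt: str) -> str:
    return f"### System:\n{system_message}\n\n### User:\n{prompt}\n\n### Assistant:\n"

print(make_prompt("You are a helpful assistant", "Tell me about AI"))
```
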
+ <!-- README_GPTQ.md-provided-files start -->
+ ## Provided files and GPTQ parameters

Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

Each separate quant is in a different branch. See below for instructions on fetching from different branches.

+ All recent GPTQ files are made with AutoGPTQ, as are all files in non-main branches. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
+
+ <details>
+ <summary>Explanation of GPTQ parameters</summary>
+
+ - Bits: The bit size of the quantised model.
+ - GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
+ - Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
+ - Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
+ - GPTQ dataset: The dataset used for quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
+ - Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
+ - ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama models in 4-bit.
+
+ </details>
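As an editorial aside, the sketch below shows roughly how these parameters map onto AutoGPTQ's quantisation config when producing files like the ones in the table that follows. It is a hedged sketch only: the calibration text, output path, and exact settings are illustrative assumptions, not the exact commands used for this repo.

```python
# Hedged sketch: illustrates the GPTQ parameters above, not the exact process
# used to make this repo's files.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "upstage/llama-30b-instruct-2048"
tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True)

quantize_config = BaseQuantizeConfig(
    bits=4,             # "Bits" column
    group_size=128,     # "GS" column (-1 means no group size, shown as "None")
    desc_act=True,      # "Act Order" column
    damp_percent=0.01,  # "Damp %" column
)

model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)

# "GPTQ Dataset" / "Seq Len": tokenised calibration samples (illustrative text)
enc = tokenizer("Calibration sample text ...", truncation=True, max_length=2048, return_tensors="pt")
model.quantize([{"input_ids": enc.input_ids, "attention_mask": enc.attention_mask}])

model.save_quantized("./llama-30b-instruct-2048-GPTQ", use_safetensors=True)
```
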
+
+ | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
+ | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
+ | [main](https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GPTQ/tree/main) | 4 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 16.94 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
+ | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 19.44 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
+ | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 18.18 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
+ | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 17.55 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
+ | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 32.99 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
+ | [gptq-8bit-128g-actorder_False](https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GPTQ/tree/gptq-8bit-128g-actorder_False) | 8 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 33.73 GB | No | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
+ | [gptq-3bit-128g-actorder_False](https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GPTQ/tree/gptq-3bit-128g-actorder_False) | 3 | 128 | No | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 13.51 GB | No | 3-bit, with group size 128g but no act-order. Slightly higher VRAM requirements than 3-bit None. |
+ | [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.01 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 2048 | 13.51 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |

+ <!-- README_GPTQ.md-provided-files end -->
+
+ <!-- README_GPTQ.md-download-from-branches start -->
## How to download from branches

+ - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/upstage-llama-30b-instruct-2048-GPTQ:main`
- With Git, you can clone a branch with:
```
+ git clone --single-branch --branch main https://huggingface.co/TheBloke/upstage-llama-30b-instruct-2048-GPTQ
```
- In Python Transformers code, the branch is the `revision` parameter; see below, and the `huggingface_hub` sketch that follows.
+ <!-- README_GPTQ.md-download-from-branches end -->
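As a hedged addition (not from the original README), you can also fetch a single branch from Python with `huggingface_hub`:

```python
# Download one branch of this repo; revision can be any branch name from the
# Provided Files table above.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="TheBloke/upstage-llama-30b-instruct-2048-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
)
print(local_dir)
```
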
+ <!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

+ It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/upstage-llama-30b-instruct-2048-GPTQ`.
+ - To download from a specific branch, enter for example `TheBloke/upstage-llama-30b-instruct-2048-GPTQ:main`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
+ 4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `upstage-llama-30b-instruct-2048-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
+ * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
+ <!-- README_GPTQ.md-text-generation-webui end -->
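As an alternative to steps 2-3 above, text-generation-webui also ships a command-line downloader. This is a hedged sketch: the script name and `--branch` flag assume a recent checkout of the webui repo:

```shell
# Run from the root of the text-generation-webui checkout (illustrative).
python download-model.py TheBloke/upstage-llama-30b-instruct-2048-GPTQ --branch gptq-4bit-32g-actorder_True
```
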

+ <!-- README_GPTQ.md-use-from-python start -->
## How to use this GPTQ model from Python code

+ ### Install the necessary packages
+
+ Requires: Transformers 4.32.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
+
+ ```shell
+ pip3 install 'transformers>=4.32.0' 'optimum>=1.12.0'
+ pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/  # Use cu117 if on CUDA 11.7
+ ```
+
+ If you have problems installing AutoGPTQ using the pre-built wheels, install it from source instead:

+ ```shell
+ pip3 uninstall -y auto-gptq
+ git clone https://github.com/PanQiWei/AutoGPTQ
+ cd AutoGPTQ
+ pip3 install .
+ ```

+ ### For CodeLlama models only: you must use Transformers 4.33.0 or later.
+
+ If 4.33.0 is not yet released when you read this, you will need to install Transformers from source:
+ ```shell
+ pip3 uninstall -y transformers
+ pip3 install git+https://github.com/huggingface/transformers.git
+ ```
+
+ ### You can then use the following code

```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name_or_path = "TheBloke/upstage-llama-30b-instruct-2048-GPTQ"
+ # To use a different branch, change revision
+ # For example: revision="main"
+ model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
+         device_map="auto",
+         trust_remote_code=False,
+         revision="main")

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Tell me about AI"
system_message = "You are a helpful assistant"  # must be defined before the f-string below uses it
prompt_template=f'''### System:
+ {system_message}

### User:
{prompt}

+ ### Assistant:
+
+ '''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
+ output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
+     do_sample=True,
    temperature=0.7,
    top_p=0.95,
+     top_k=40,
+     repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])
```
+ <!-- README_GPTQ.md-use-from-python end -->

+ <!-- README_GPTQ.md-compatibility start -->
## Compatibility

+ The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).

+ [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
+
+ [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
+ <!-- README_GPTQ.md-compatibility end -->
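As an illustrative sketch (not from the original README), serving this repo with TGI might look like the following; the image tag, port, and volume path are assumptions:

```shell
# Launch TGI with GPTQ quantisation (illustrative tag/port/volume).
docker run --gpus all -p 8080:80 -v $PWD/tgi-data:/data \
  ghcr.io/huggingface/text-generation-inference:1.0.3 \
  --model-id TheBloke/upstage-llama-30b-instruct-2048-GPTQ \
  --quantize gptq

# Query the server, using the Orca-Hashes template from above:
curl 127.0.0.1:8080/generate \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"inputs": "### System:\nYou are a helpful assistant\n\n### User:\nTell me about AI\n\n### Assistant:\n", "parameters": {"max_new_tokens": 128}}'
```
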

<!-- footer start -->
  <!-- 200823 -->
 

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

+ ## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

+ Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
+
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

**Special thanks to**: Aemon Algiz.

+ **Patreon special mentions**: Alicia Loh, Stephen Murray, K, Ajan Kanaga, RoA, Magnesian, Deo Leter, Olakabola, Eugene Pentland, zynix, Deep Realms, Raymond Fosdick, Elijah Stavena, Iucharbius, Erik Bjäreholt, Luis Javier Navarrete Lozano, Nicholas, theTransient, John Detwiler, alfie_i, knownsqashed, Mano Prime, Willem Michiel, Enrico Ros, LangChain4j, OG, Michael Dempsey, Pierre Kircher, Pedro Madruga, James Bentley, Thomas Belote, Luke @flexchar, Leonard Tan, Johann-Peter Hartmann, Illia Dulskyi, Fen Risland, Chadd, S_X, Jeff Scroggin, Ken Nordquist, Sean Connelly, Artur Olbinski, Swaroop Kallakuri, Jack West, Ai Maven, David Ziegler, Russ Johnson, transmissions 11, John Villwock, Alps Aficionado, Clay Pascal, Viktor Bowallius, Subspace Studios, Rainer Wilmers, Trenton Dambrowitz, vamX, Michael Levine, 준교 김, Brandon Frisco, Kalila, Trailburnt, Randy H, Talal Aujan, Nathan Dryer, Vadim, 阿明, ReadyPlayerEmma, Tiffany J. Kim, George Stoitzev, Spencer Kim, Jerry Meng, Gabriel Tamborski, Cory Kujawski, Jeffrey Morgan, Spiking Neurons AB, Edmond Seymore, Alexandros Triantafyllidis, Lone Striker, Cap'n Zoog, Nikolai Manek, danny, ya boyyy, Derek Yates, usrbinkat, Mandus, TL, Nathan LeClaire, subjectnull, Imad Khwaja, webtim, Raven Klaugh, Asp the Wyvern, Gabriel Puliatti, Caitlyn Gatomon, Joseph William Delisle, Jonathan Leane, Luke Pendergrass, SuperWojo, Sebastain Graf, Will Dee, Fred von Graf, Andrey, Dan Guido, Daniel P. Andersen, Nitin Borwankar, Elle, Vitor Caleffi, biorpg, jjj, NimbleBox.ai, Pieter, Matthew Berman, terasurfer, Michael Davis, Alex, Stanislav Ovsiannikov

Thank you to all my generous patrons and donaters!

# Original model card: Upstage's Llama 30B Instruct 2048

+ # LLaMa-30b-instruct-2048 model card

+ ## Model Details

+ * **Developed by**: [Upstage](https://en.upstage.ai)
+ * **Backbone Model**: [LLaMA](https://github.com/facebookresearch/llama/tree/llama_v1)
+ * **Variations**: It has different model parameter sizes and sequence lengths: [30B/1024](https://huggingface.co/upstage/llama-30b-instruct), [30B/2048](https://huggingface.co/upstage/llama-30b-instruct-2048), [65B/1024](https://huggingface.co/upstage/llama-65b-instruct)
+ * **Language(s)**: English
+ * **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
+ * **License**: This model is under a **Non-commercial** Bespoke License and governed by the Meta license. You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform), but have either lost your copy of the weights or encountered issues converting them to the Transformers format
+ * **Where to send comments**: Instructions on how to provide feedback or comments on a model can be found by opening an issue in the [Hugging Face community's model repository](https://huggingface.co/upstage/llama-30b-instruct-2048/discussions)
+ * **Contact**: For questions and comments about the model, please email [contact@upstage.ai](mailto:contact@upstage.ai)
 
## Dataset Details

### Used Datasets
+
- [openbookqa](https://huggingface.co/datasets/openbookqa)
- [sciq](https://huggingface.co/datasets/sciq)
- [Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only)
- [GAIR/lima](https://huggingface.co/datasets/GAIR/lima)
+ - No other data was used except for the datasets mentioned above

+ ### Prompt Template
+ ```
+ ### System:
+ {System}

+ ### User:
+ {User}

+ ### Assistant:
+ {Assistant}
+ ```
+
+ ## Usage
+
+ - Tested on A100 80GB
+ - Our model can handle 10k+ input tokens, thanks to the `rope_scaling` option
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
+
+ tokenizer = AutoTokenizer.from_pretrained("upstage/llama-30b-instruct-2048")
+ model = AutoModelForCausalLM.from_pretrained(
+     "upstage/llama-30b-instruct-2048",
+     device_map="auto",
+     torch_dtype=torch.float16,
+     load_in_8bit=True,
+     rope_scaling={"type": "dynamic", "factor": 2}  # allows handling of longer inputs
+ )
+
+ prompt = "### User:\nThomas is healthy, but he has to go to the hospital. What could be the reasons?\n\n### Assistant:\n"
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ del inputs["token_type_ids"]
+ streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
+
+ output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=float('inf'))
+ output_text = tokenizer.decode(output[0], skip_special_tokens=True)
+ ```
+
+ ## Hardware and Software
+
+ * **Hardware**: We utilized an A100x8 * 1 (one node of eight A100 GPUs) for training our model
+ * **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace Trainer](https://huggingface.co/docs/transformers/main_classes/trainer) / [HuggingFace Accelerate](https://huggingface.co/docs/accelerate/index)
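
A hedged sketch of that combination (not Upstage's actual training code; the config path and hyperparameters are illustrative assumptions):

```python
# Minimal Trainer + DeepSpeed wiring; "ds_config.json" is an assumed DeepSpeed
# ZeRO config file, and model/dataset setup is elided.
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,
    num_train_epochs=1,
    bf16=True,
    deepspeed="ds_config.json",  # hands optimiser/partitioning over to DeepSpeed
)
# trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
# trainer.train()
```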
 
## Evaluation Results

### Overview
- We conducted a performance evaluation based on the tasks being evaluated on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
+ We evaluated our model on four benchmark datasets, which include `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`.
+ We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463).
+ - We used [MT-bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge), a set of challenging multi-turn open-ended questions, to evaluate the models
 
### Main Results
+ | Model | H4(Avg) | ARC | HellaSwag | MMLU | TruthfulQA | MT_Bench |
+ |-------|---------|-----|-----------|------|------------|----------|
+ | **[Llama-2-70b-instruct-v2](https://huggingface.co/upstage/Llama-2-70b-instruct-v2)** (Ours, Open LLM Leaderboard) | **73** | **71.1** | **87.9** | **70.6** | **62.2** | **7.44063** |
+ | [Llama-2-70b-instruct](https://huggingface.co/upstage/Llama-2-70b-instruct) (Ours, Open LLM Leaderboard) | 72.3 | 70.9 | 87.5 | 69.8 | 61 | 7.24375 |
+ | [llama-65b-instruct](https://huggingface.co/upstage/llama-65b-instruct) (Ours, Open LLM Leaderboard) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 | |
+ | Llama-2-70b-hf | 67.3 | 67.3 | 87.3 | 69.8 | 44.9 | |
+ | [llama-30b-instruct-2048](https://huggingface.co/upstage/llama-30b-instruct-2048) (***Ours***, ***Open LLM Leaderboard***) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 | |
+ | [llama-30b-instruct](https://huggingface.co/upstage/llama-30b-instruct) (Ours, Open LLM Leaderboard) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 | |
+ | llama-65b | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 | |
+ | falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | |
+
+ ### Scripts for H4 Score Reproduction
- Prepare evaluation environments:
```
# clone the repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to the repository directory
cd lm-evaluation-harness
# check out the specific commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
```
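
From that checkout, a single benchmark run might look like this hedged sketch (the model flag and task name assume the harness API at that commit; 25-shot matches the leaderboard's ARC setting):

```shell
# Illustrative only - evaluate ARC-Challenge with 25 few-shot examples.
python main.py \
    --model hf-causal-experimental \
    --model_args pretrained=upstage/llama-30b-instruct-2048,use_accelerate=True \
    --tasks arc_challenge \
    --num_fewshot 25
```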
 
## Ethical Issues

### Ethical Considerations
+ - There were no ethical issues involved, as we did not include the benchmark test set or the training set in the model's training process

## Contact Us

### Why Upstage LLM?
+ - [Upstage](https://en.upstage.ai)'s LLM research has yielded remarkable results. As of August 1st, our 70B model reached the top spot in the openLLM rankings, making it the current leading performer globally. Recognizing the immense potential of private LLMs for real businesses, we invite you to easily deploy a private LLM and fine-tune it with your own data. For a seamless and tailored solution, please do not hesitate to reach out to us. [Click here to contact us](https://www.upstage.ai/private-llm?utm_source=huggingface&utm_medium=link&utm_campaign=privatellm)