alvarobartt committed
Commit 3217560 • 1 Parent(s): f2a10bb

Update README.md

Files changed (1):
  1. README.md +56 -48

README.md CHANGED
@@ -113,58 +113,16 @@ For more information please refer to the original model card [`meta-llama/Meta-L

  Llama 3.1 405B Instruct has been quantized using [AutoAWQ](https://github.com/casperhansen/AutoAWQ) from FP16 down to INT4 using the GEMM kernels performing zero-point quantization with a group size of 128.

- In order to quantize Llama 3.1 405B Instruct, we had to first install `torch` and `autoawq` as follows:
-
- ```bash
- pip install "torch>=2.2.0,<2.3.0" autoawq --upgrade
- ```
-
- Otherwise the quantization may fail, since the AutoAWQ kernels are built with PyTorch 2.2.1, meaning that those will break with PyTorch 2.3.0.
-
- Then we install the latest version of `transformers` as follows:
-
- ```bash
- pip install "transformers>=4.43.0" --upgrade
- ```
-
- And then we can run the following script, adapted from [`AutoAWQ/examples/quantize.py`](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/quantize.py) as follows:
-
- ```python
- from awq import AutoAWQForCausalLM
- from transformers import AutoTokenizer
-
- model_path = "meta-llama/Meta-Llama-3.1-405B-Instruct"
- quant_path = "hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4"
- quant_config = {
-     "zero_point": True,
-     "q_group_size": 128,
-     "w_bit": 4,
-     "version": "GEMM",
- }
-
- # Load model
- model = AutoAWQForCausalLM.from_pretrained(
-     model_path, **{"low_cpu_mem_usage": True, "use_cache": False}
- )
- tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
-
- # Quantize
- model.quantize(tokenizer, quant_config=quant_config)
-
- # Save quantized model
- model.save_quantized(quant_path)
- tokenizer.save_pretrained(quant_path)
-
- print(f'Model is quantized and saved at "{quant_path}"')
- ```
-
  ## Quantized Model Usage

- In order to use the current quantized model, we offer support for different alternatives:
+ > [!NOTE]
+ > In order to run inference with Llama 3.1 405B Instruct AWQ in INT4, around 203 GiB of VRAM is needed just to load the model checkpoint, not including the KV cache or the CUDA graphs, so a bit more than that should be available.
+
+ In order to use the current quantized model, support is offered for the following solutions:

  ### 🤗 transformers

- To run the inference on top of Llama 3.1 405B Instruct AWQ in INT4 precision, we can instantiate the AWQ model as any other causal language modeling model via `AutoModelForCausalLM` and run the inference normally.
+ To run inference with Llama 3.1 405B Instruct AWQ in INT4 precision, the AWQ model can be instantiated like any other causal language model via `AutoModelForCausalLM`, and inference can be run as usual.

  ```python
  import torch
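
The hunk above cuts the README's 🤗 transformers snippet off right after `import torch`. For reference, a minimal sketch of the usage that snippet covers is shown below; the model ID is taken from this repository, while the chat prompt and generation parameters are illustrative assumptions rather than the README's exact code, and `autoawq` plus `accelerate` are assumed to be installed alongside `transformers>=4.43.0`.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # non-quantized ops run in FP16 alongside the INT4 AWQ weights
    device_map="auto",          # shard the checkpoint across all visible GPUs (requires accelerate)
)

# Build a prompt with the Llama 3.1 chat template
messages = [{"role": "user", "content": "What is the capital of France?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

With `device_map="auto"`, the roughly 203 GiB of INT4 weights mentioned in the note above are spread across the available GPUs before generation starts.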
@@ -235,4 +193,54 @@ The AutoAWQ script has been adapted from [AutoAWQ/examples/generate.py](https://

  ### 🤗 Text Generation Inference (TGI)

- Coming soon!
+ Coming soon!
+
+ ## Quantization Reproduction
+
+ > [!NOTE]
+ > In order to quantize Llama 3.1 405B Instruct using AutoAWQ, you will need an instance with at least enough CPU RAM to fit the whole model, i.e. ~800 GiB, and an NVIDIA GPU with 80 GiB of VRAM to quantize it.
+
+ In order to quantize Llama 3.1 405B Instruct, first install `torch` and `autoawq` as follows:
+
+ ```bash
+ pip install "torch>=2.2.0,<2.3.0" autoawq --upgrade
+ ```
+
+ Otherwise the quantization may fail, since the AutoAWQ kernels are built with PyTorch 2.2.1, meaning that they will break with PyTorch 2.3.0.
+
+ Then install the latest version of `transformers` as follows:
+
+ ```bash
+ pip install "transformers>=4.43.0" --upgrade
+ ```
+
+ Finally, run the following script, adapted from [`AutoAWQ/examples/quantize.py`](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/quantize.py):
+
+ ```python
+ from awq import AutoAWQForCausalLM
+ from transformers import AutoTokenizer
+
+ model_path = "meta-llama/Meta-Llama-3.1-405B-Instruct"
+ quant_path = "hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4"
+ quant_config = {
+     "zero_point": True,
+     "q_group_size": 128,
+     "w_bit": 4,
+     "version": "GEMM",
+ }
+
+ # Load the model with low_cpu_mem_usage enabled and the KV cache disabled
+ model = AutoAWQForCausalLM.from_pretrained(
+     model_path, **{"low_cpu_mem_usage": True, "use_cache": False}
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
+
+ # Quantize the weights to INT4 with zero-point quantization and group size 128
+ model.quantize(tokenizer, quant_config=quant_config)
+
+ # Save the quantized model and the tokenizer
+ model.save_quantized(quant_path)
+ tokenizer.save_pretrained(quant_path)
+
+ print(f'Model is quantized and saved at "{quant_path}"')
+ ```
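
As a quick sanity check once the quantization script finishes (not part of this commit), the saved checkpoint can be reloaded with AutoAWQ, following the same pattern as [`AutoAWQ/examples/generate.py`](https://github.com/casper-hansen/AutoAWQ/blob/main/examples/generate.py); the prompt and generation settings below are illustrative assumptions, and enough GPU memory to hold the INT4 weights is assumed.

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

quant_path = "hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4"

# Reload the freshly saved quantized checkpoint (fused layers speed up inference)
model = AutoAWQForCausalLM.from_quantized(quant_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(quant_path)

# Run a short generation to confirm the INT4 weights load and produce text
tokens = tokenizer("What is 2 + 2?", return_tensors="pt").input_ids.to("cuda")
output = model.generate(tokens, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

If the reload succeeds and a short completion is produced, the files written to `quant_path` are complete and ready to be uploaded.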