iproskurina committed
Commit 3070dff
1 Parent(s): f713e7b

Update README.md

Files changed (1)
  1. README.md +16 -4
README.md CHANGED
@@ -38,13 +38,25 @@ The grouping size used for quantization is equal to 128.
 
  ### Install the necessary packages
 
+ Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
+
+ ```shell
+ pip3 install --upgrade transformers optimum
+ # If using PyTorch 2.1 + CUDA 12.x:
+ pip3 install --upgrade auto-gptq
+ # or, if using PyTorch 2.1 + CUDA 11.x:
+ pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
+ ```
+
+ If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
+
  ```shell
- pip install accelerate==0.26.1 datasets==2.16.1 dill==0.3.7 gekko==1.0.6 multiprocess==0.70.15 peft==0.7.1 rouge==1.0.1 sentencepiece==0.1.99
- git clone https://github.com/upunaprosk/AutoGPTQ
+ pip3 uninstall -y auto-gptq
+ git clone https://github.com/PanQiWei/AutoGPTQ
  cd AutoGPTQ
- pip install -v .
+ git checkout v0.5.1
+ pip3 install .
  ```
- Recommended transformers version: 4.35.2.
 
  ### You can then use the following code
 
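The hunk ends just before the usage snippet that the final heading introduces. As a rough sketch only, not taken from this commit, loading a GPTQ-quantized checkpoint with the package versions installed above typically looks like the following; the model id is a placeholder you would replace with the actual quantized repository.

```python
# Hedged sketch of the usage step referenced by "### You can then use the following code";
# the real snippet lives outside this hunk. Assumes transformers>=4.33, optimum, and
# auto-gptq are installed as described in the diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<quantized-model-repo>"  # placeholder, not from this commit

tokenizer = AutoTokenizer.from_pretrained(model_id)
# With auto-gptq and optimum installed, transformers loads the GPTQ weights directly;
# device_map="auto" places them on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Quantization with a group size of 128", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```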