TheBloke committed on
Commit 5d91b2e
1 Parent(s): 80b2bea

Update README.md

Files changed (1): README.md (+8, −23)
README.md CHANGED
@@ -60,6 +60,7 @@ pip3 install git+https://github.com/huggingface/transformers
 ## Repositories available
 
 * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ)
+* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference.](https://huggingface.co/TheBloke/Llama-2-70B-chat-GGML)
 * [Original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Llama-2-70B-chat-fp16)
 
 ## Prompt template: Llama-2-Chat
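
For reference while reading the hunks below — the template body itself sits outside the changed lines — Llama-2-Chat prompts follow Meta's `[INST]` / `<<SYS>>` convention, roughly as shown here. The exact system message in this README is not part of this commit, so the wording below is an assumption:

```
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible.
<</SYS>>
{prompt} [/INST]
```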
@@ -104,7 +105,7 @@ Each separate quant is in a different branch. See below for instructions on fet
 - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/Llama-2-70B-chat-GPTQ:gptq-4bit-32g-actorder_True`
 - With Git, you can clone a branch with:
 ```
-git clone --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ`
+git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/Llama-2-70B-chat-GPTQ
 ```
 - In Python Transformers code, the branch is the `revision` parameter; see below.
 
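The `revision` parameter mentioned above also lets you fetch a quant branch from Python instead of Git. A minimal sketch (not part of this commit) using `huggingface_hub`'s `snapshot_download`, with the repo and branch names taken from the example above:

```python
from huggingface_hub import snapshot_download

# Download a single quant branch; on the Hub, a branch name is a valid revision.
local_dir = snapshot_download(
    repo_id="TheBloke/Llama-2-70B-chat-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
)
print(local_dir)  # path to the cached snapshot
```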
@@ -116,23 +117,7 @@ It is strongly recommended to use the text-generation-webui one-click-installers
 
 ### Use ExLlama (4-bit models only) - recommended option if you have enough VRAM for 4-bit
 
-ExLlama has now been updated to support Llama 2 70B, but you will need to update ExLlama to the latest version.
-
-By default text-generation-webui installs a pre-compiled wheel for ExLlama. Until text-generation-webui updates to reflect the ExLlama changes - which hopefully won't be long - you must uninstall that and then update ExLlama in the `text-generation-webui/repositories` directory. ExLlama will then compile its kernel on model load.
-
-Note that this requires that your system is capable of compiling CUDA extensions, which may be an issue on Windows.
-
-Instructions for Linux One Click Installer:
-
-1. Change directory into the text-generation-webui main folder: `cd /path/to/text-generation-webui`
-2. Activate the conda env of text-generation-webui:
-```
-source "installer_files/conda/etc/profile.d/conda.sh"
-conda activate installer_files/env
-```
-3. Run: `pip3 uninstall exllama`
-4. Run: `cd repositories/exllama` followed by `git pull` to update exllama.
-5. Now launch text-generation-webui and follow the instructions below for downloading and running the model. ExLlama should build its kernel when the model first loads.
+ExLlama has now been updated to support Llama 2 70B. Make sure you're using the latest version of ExLlama, and of text-generation-webui if you're using that.
 
 ### Downloading and running the model in text-generation-webui
 
@@ -152,16 +137,16 @@ conda activate installer_files/env
 
 ## How to use this GPTQ model from Python code
 
-First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
+First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed, version 0.3.1 or later:
 
 ```
-GITHUB_ACTIONS=true pip3 install auto-gptq
+pip3 install auto-gptq
 ```
 
 You also need the latest Transformers code from Github:
 
 ```
-pip3 install git+https://github.com/huggingface/transformers
+pip3 install "transformers>=4.31.0"
 ```
 
 You must set `inject_fused_attention=False` as shown below.
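
The full loading example this sentence points to is outside the hunks shown here. As a rough sketch of how the pieces fit together — AutoGPTQ's `from_quantized` does accept `use_safetensors`, `device`, `inject_fused_attention`, and a `revision` for quant branches; the prompt and generation settings below are illustrative, not from this commit:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name = "TheBloke/Llama-2-70B-chat-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    model_name,
    revision="main",               # or a quant branch, e.g. "gptq-4bit-32g-actorder_True"
    use_safetensors=True,
    device="cuda:0",
    inject_fused_attention=False,  # required for Llama 2 70B, per the README
)

prompt = "Tell me about AI"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda:0")
output = model.generate(input_ids=input_ids, do_sample=True, temperature=0.7, max_new_tokens=128)
print(tokenizer.decode(output[0]))
```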
@@ -241,7 +226,7 @@ The files provided will work with AutoGPTQ (CUDA and Triton modes), GPTQ-for-LLa
 
 ExLlama is now compatible with Llama 2 70B models, as of [this commit](https://github.com/turboderp/exllama/commit/b3aea521859b83cfd889c4c00c05a323313b7fee).
 
-Please see the Provided Files table above for per-file compatibility.
+Please see the Provided Files table above for per-file compatibility.
 
 <!-- footer start -->
 ## Discord
@@ -265,7 +250,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 
 **Special thanks to**: Luke from CarbonQuill, Aemon Algiz.
 
-**Patreon special mentions**: Space Cruiser, Nikolai Manek, Sam, Chris McCloskey, Rishabh Srivastava, Kalila, Spiking Neurons AB, Khalefa Al-Ahmad, WelcomeToTheClub, Chadd, Lone Striker, Viktor Bowallius, Edmond Seymore, Ai Maven, Chris Smitley, Dave, Alexandros Triantafyllidis, Luke @flexchar, Elle, ya boyyy, Talal Aujan, Alex , Jonathan Leane, Deep Realms, Randy H, subjectnull, Preetika Verma, Joseph William Delisle, Michael Levine, chris gileta, K, Oscar Rangel, LangChain4j, Trenton Dambrowitz, Eugene Pentland, Johann-Peter Hartmann, Femi Adebogun, Illia Dulskyi, senxiiz, Daniel P. Andersen, Sean Connelly, Artur Olbinski, RoA, Mano Prime, Derek Yates, Raven Klaugh, David Flickinger, Willem Michiel, Pieter, Willian Hasse, vamX, Luke Pendergrass, webtim, Ghost , Rainer Wilmers, Nathan LeClaire, Will Dee, Cory Kujawski, John Detwiler, Fred von Graf, biorpg, Iucharbius , Imad Khwaja, Pierre Kircher, terasurfer , Asp the Wyvern, John Villwock, theTransient, zynix , Gabriel Tamborski, Fen Risland, Gabriel Puliatti, Matthew Berman, Pyrater, SuperWojo, Stephen Murray, Karl Bernard, Ajan Kanaga, Greatston Gnanesh, Junyu Yang.
+**Patreon special mentions**: Willem Michiel, Ajan Kanaga, Cory Kujawski, Alps Aficionado, Nikolai Manek, Jonathan Leane, Stanislav Ovsiannikov, Michael Levine, Luke Pendergrass, Sid, K, Gabriel Tamborski, Clay Pascal, Kalila, William Sang, Will Dee, Pieter, Nathan LeClaire, ya boyyy, David Flickinger, vamX, Derek Yates, Fen Risland, Jeffrey Morgan, webtim, Daniel P. Andersen, Chadd, Edmond Seymore, Pyrater, Olusegun Samson, Lone Striker, biorpg, alfie_i, Mano Prime, Chris Smitley, Dave, zynix, Trenton Dambrowitz, Johann-Peter Hartmann, Magnesian, Spencer Kim, John Detwiler, Iucharbius, Gabriel Puliatti, LangChain4j, Luke @flexchar, Vadim, Rishabh Srivastava, Preetika Verma, Ai Maven, Femi Adebogun, WelcomeToTheClub, Leonard Tan, Imad Khwaja, Steven Wood, Stefan Sabev, Sebastain Graf, usrbinkat, Dan Guido, Sam, Eugene Pentland, Mandus, transmissions 11, Slarti, Karl Bernard, Spiking Neurons AB, Artur Olbinski, Joseph William Delisle, ReadyPlayerEmma, Olakabola, Asp the Wyvern, Space Cruiser, Matthew Berman, Randy H, subjectnull, danny, John Villwock, Illia Dulskyi, Rainer Wilmers, theTransient, Pierre Kircher, Alexandros Triantafyllidis, Viktor Bowallius, terasurfer, Deep Realms, SuperWojo, senxiiz, Oscar Rangel, Alex, Stephen Murray, Talal Aujan, Raven Klaugh, Sean Connelly, Raymond Fosdick, Fred von Graf, chris gileta, Junyu Yang, Elle
 
 Thank you to all my generous patrons and donaters!
 