TheBloke committed
Commit 2966f1f
Parent: 61a6c1c

Upload README.md

Files changed (1): README.md (+18 -18)
README.md CHANGED
@@ -45,9 +45,9 @@ Multiple GPTQ parameter permutations are provided; see Provided Files below for
  <!-- repositories-available start -->
  ## Repositories available
 
- * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CodeLlama-13B-OASST-SFT-v10-GPTQ)
- * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CodeLlama-13B-OASST-SFT-v10-GGUF)
- * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/CodeLlama-13B-OASST-SFT-v10-GGML)
+ * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGUF)
+ * [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference (deprecated)](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GGML)
  * [OpenAssistant's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/OpenAssistant/codellama-13b-oasst-sft-v10)
  <!-- repositories-available end -->
 
@@ -72,7 +72,7 @@ Multiple quantisation parameters are provided, to allow you to choose the best o
 
  Each separate quant is in a different branch. See below for instructions on fetching from different branches.
 
- All GPTQ files are made with AutoGPTQ.
+ All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.
 
  <details>
  <summary>Explanation of GPTQ parameters</summary>
@@ -89,22 +89,22 @@ All GPTQ files are made with AutoGPTQ.
 
  | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
  | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
- | main | 4 | 128 | No | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 7.26 GB | Yes | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
- | gptq-4bit-32g-actorder_True | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
- | gptq-4bit-64g-actorder_True | 4 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
- | gptq-4bit-128g-actorder_True | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
- | gptq-8bit--1g-actorder_True | 8 | None | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
- | gptq-8bit-128g-actorder_True | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
+ | [main](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 7.26 GB | Yes | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+ | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+ | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
+ | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Evol Instruct Code](https://huggingface.co/datasets/nickrosh/Evol-Instruct-Code-80k-v1) | 8192 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
 
  <!-- README_GPTQ.md-provided-files end -->
 
  <!-- README_GPTQ.md-download-from-branches start -->
  ## How to download from branches
 
- - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/CodeLlama-13B-OASST-SFT-v10-GPTQ:gptq-4bit-32g-actorder_True`
+ - In text-generation-webui, you can add `:branch` to the end of the download name, eg `TheBloke/CodeLlama-13B-oasst-sft-v10-GPTQ:gptq-4bit-32g-actorder_True`
  - With Git, you can clone a branch with:
  ```
- git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/CodeLlama-13B-OASST-SFT-v10-GPTQ
+ git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/CodeLlama-13B-oasst-sft-v10-GPTQ
  ```
  - In Python Transformers code, the branch is the `revision` parameter; see below.
  <!-- README_GPTQ.md-download-from-branches end -->
@@ -116,16 +116,16 @@ Please make sure you're using the latest version of [text-generation-webui](http
  It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
 
  1. Click the **Model tab**.
- 2. Under **Download custom model or LoRA**, enter `TheBloke/CodeLlama-13B-OASST-SFT-v10-GPTQ`.
- - To download from a specific branch, enter for example `TheBloke/CodeLlama-13B-OASST-SFT-v10-GPTQ:gptq-4bit-32g-actorder_True`
+ 2. Under **Download custom model or LoRA**, enter `TheBloke/CodeLlama-13B-oasst-sft-v10-GPTQ`.
+ - To download from a specific branch, enter for example `TheBloke/CodeLlama-13B-oasst-sft-v10-GPTQ:gptq-4bit-32g-actorder_True`
  - see Provided Files above for the list of branches for each option.
  3. Click **Download**.
  4. The model will start downloading. Once it's finished it will say "Done".
  5. In the top left, click the refresh icon next to **Model**.
- 6. In the **Model** dropdown, choose the model you just downloaded: `CodeLlama-13B-OASST-SFT-v10-GPTQ`
+ 6. In the **Model** dropdown, choose the model you just downloaded: `CodeLlama-13B-oasst-sft-v10-GPTQ`
  7. The model will automatically load, and is now ready for use!
  8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
+ * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
  9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
  <!-- README_GPTQ.md-text-generation-webui end -->
 
@@ -163,7 +163,7 @@ pip3 install git+https://github.com/huggingface/transformers.git
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
 
- model_name_or_path = "TheBloke/CodeLlama-13B-OASST-SFT-v10-GPTQ"
+ model_name_or_path = "TheBloke/CodeLlama-13B-oasst-sft-v10-GPTQ"
  # To use a different branch, change revision
  # For example: revision="gptq-4bit-32g-actorder_True"
  model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
@@ -238,7 +238,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
 
  **Special thanks to**: Aemon Algiz.
 
- **Patreon special mentions**: Kacper Wikieł, knownsqashed, Leonard Tan, Asp the Wyvern, Daniel P. Andersen, Luke Pendergrass, Stanislav Ovsiannikov, RoA, Dave, Ai Maven, Kalila, Will Dee, Imad Khwaja, Nitin Borwankar, Joseph William Delisle, Tony Hughes, Cory Kujawski, Rishabh Srivastava, Russ Johnson, Stephen Murray, Lone Striker, Johann-Peter Hartmann, Elle, J, Deep Realms, SuperWojo, Raven Klaugh, Sebastain Graf, ReadyPlayerEmma, Alps Aficionado, Mano Prime, Derek Yates, Gabriel Puliatti, Mesiah Bishop, Magnesian, Sean Connelly, biorpg, Iucharbius, Olakabola, Fen Risland, Space Cruiser, theTransient, Illia Dulskyi, Thomas Belote, Spencer Kim, Pieter, John Detwiler, Fred von Graf, Michael Davis, Swaroop Kallakuri, subjectnull, Clay Pascal, Subspace Studios, Chris Smitley, Enrico Ros, usrbinkat, Steven Wood, alfie_i, David Ziegler, Willem Michiel, Matthew Berman, Andrey, Pyrater, Jeffrey Morgan, vamX, LangChain4j, Luke @flexchar, Trenton Dambrowitz, Pierre Kircher, Alex, Sam, James Bentley, Edmond Seymore, Eugene Pentland, Pedro Madruga, Rainer Wilmers, Dan Guido, Nathan LeClaire, Spiking Neurons AB, Talal Aujan, zynix, Artur Olbinski, Michael Levine, 阿明, K, John Villwock, Nikolai Manek, Femi Adebogun, senxiiz, Deo Leter, NimbleBox.ai, Viktor Bowallius, Geoffrey Montalvo, Mandus, Ajan Kanaga, ya boyyy, Jonathan Leane, webtim, Brandon Frisco, danny, Alexandros Triantafyllidis, Gabriel Tamborski, Randy H, terasurfer, Vadim, Junyu Yang, Vitor Caleffi, Chadd, transmissions 11
+ **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
 
 
  Thank you to all my generous patrons and donaters!
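The hunk at README line 163 touches the start of the README's Python loading snippet, and the "How to download from branches" hunk notes that a branch is selected via the `revision` parameter. A minimal sketch of how such a call is typically completed is shown below; it assumes a recent `transformers` with `optimum` and `auto-gptq` installed (as the surrounding README instructs), and the keyword arguments and prompt are illustrative, not part of this commit.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo name as updated by this commit; any branch from the Provided Files table can be used.
model_name_or_path = "TheBloke/CodeLlama-13B-oasst-sft-v10-GPTQ"

model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    device_map="auto",                       # illustrative: place layers on available GPU(s)
    revision="gptq-4bit-32g-actorder_True",  # the branch name doubles as the revision
)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Omitting `revision` falls back to the `main` branch, i.e. the 4-bit, group size 128, no-Act-Order files listed in the table above.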