TheBloke committed on
Commit fd078f5
1 Parent(s): a7d61e3

Upload README.md

Files changed (1)
  1. README.md +10 -6
README.md CHANGED
@@ -1,6 +1,10 @@
  ---
  datasets:
  - PygmalionAI/PIPPA
+ - Open-Orca/OpenOrca
+ - Norquinal/claude_multiround_chat_30k
+ - jondurbin/airoboros-gpt4-1.4.1
+ - databricks/databricks-dolly-15k
  inference: false
  language:
  - en
@@ -98,12 +102,12 @@ All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches

  | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
  | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
- | main | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
- | gptq-4bit-32g-actorder_True | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
- | gptq-4bit-64g-actorder_True | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
- | gptq-4bit-128g-actorder_True | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
- | gptq-8bit--1g-actorder_True | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
- | gptq-8bit-128g-actorder_True | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |
+ | [main](https://huggingface.co/TheBloke/Pygmalion-2-7B-GPTQ/tree/main) | 4 | 128 | No | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
+ | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Pygmalion-2-7B-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
+ | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Pygmalion-2-7B-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Pygmalion-2-7B-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+ | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Pygmalion-2-7B-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
+ | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Pygmalion-2-7B-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [wikitext](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-v1/test) | 4096 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. Poor AutoGPTQ CUDA speed. |

  <!-- README_GPTQ.md-provided-files end -->
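For context on what the change above links to: each quantisation in the table lives on its own git branch of the TheBloke/Pygmalion-2-7B-GPTQ repo, and the new branch links point at those branches. A minimal sketch of selecting one of the listed branches with the standard `transformers` `revision` argument (assuming `transformers` plus the `optimum` and `auto-gptq` packages are installed; the branch name is taken from the table):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Pygmalion-2-7B-GPTQ"
# Any branch name from the table works here; "main" is the most compatible.
revision = "gptq-4bit-32g-actorder_True"

tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision=revision,   # downloads the files from that GPTQ branch
    device_map="auto",   # place layers on the available GPU(s)
)
```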