TheBloke committed
Commit 29d9dac
1 Parent(s): 092c32a

Upload README.md

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -122,12 +122,12 @@ Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with T
 
  | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
  | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
- | [main](https://huggingface.co/TheBloke/Synatra-RP-Orca-2-7B-v0.1-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [Korean Alpaca](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a/viewer/) | 4096 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
- | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Synatra-RP-Orca-2-7B-v0.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [Korean Alpaca](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a/viewer/) | 4096 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
- | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Synatra-RP-Orca-2-7B-v0.1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [Korean Alpaca](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a/viewer/) | 4096 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
- | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Synatra-RP-Orca-2-7B-v0.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [Korean Alpaca](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a/viewer/) | 4096 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
- | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Synatra-RP-Orca-2-7B-v0.1-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [Korean Alpaca](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a/viewer/) | 4096 | 7.62 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
- | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Synatra-RP-Orca-2-7B-v0.1-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [Korean Alpaca](https://huggingface.co/datasets/beomi/KoAlpaca-v1.1a/viewer/) | 4096 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
+ | [main](https://huggingface.co/TheBloke/Synatra-RP-Orca-2-7B-v0.1-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMWare Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 3.90 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
+ | [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/Synatra-RP-Orca-2-7B-v0.1-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMWare Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.28 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
+ | [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Synatra-RP-Orca-2-7B-v0.1-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMWare Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.01 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
+ | [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Synatra-RP-Orca-2-7B-v0.1-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMWare Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.16 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
+ | [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/Synatra-RP-Orca-2-7B-v0.1-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMWare Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.62 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
+ | [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/Synatra-RP-Orca-2-7B-v0.1-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMWare Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.02 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
 
  <!-- README_GPTQ.md-provided-files end -->
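For context, each row in the table above is a separate branch of the same repository, and the branch name is what selects the quantisation variant at load time. A minimal sketch of how one of these branches might be loaded (not part of this commit, and assuming a Transformers version with GPTQ support plus the optimum and auto-gptq packages installed):

```python
# Sketch only: load one branch from the table above via the `revision` argument.
# Assumes transformers with GPTQ support, plus optimum and auto-gptq, are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Synatra-RP-Orca-2-7B-v0.1-GPTQ"
branch = "gptq-4bit-32g-actorder_True"  # any value from the "Branch" column; "main" is the default

tokenizer = AutoTokenizer.from_pretrained(model_id, revision=branch)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    revision=branch,    # selects the quantisation variant listed in the table
    device_map="auto",  # place layers on the available GPU(s)
)

prompt = "Tell me about AI"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```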