Upload README.md

README.md CHANGED
@@ -113,10 +113,12 @@ Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.

 | Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
 | ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
-| [main](https://huggingface.co/TheBloke/Rogue-Rose-103b-v0.2-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 |
-| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Rogue-Rose-103b-v0.2-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 |
+| [main](https://huggingface.co/TheBloke/Rogue-Rose-103b-v0.2-GPTQ/tree/main) | 4 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 48.99 GB | Yes | 4-bit, with Act Order. No group size, to lower VRAM requirements. |
+| [gptq-4bit-128g-actorder_True](https://huggingface.co/TheBloke/Rogue-Rose-103b-v0.2-GPTQ/tree/gptq-4bit-128g-actorder_True) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 48.91 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
 | [gptq-3bit--1g-actorder_True](https://huggingface.co/TheBloke/Rogue-Rose-103b-v0.2-GPTQ/tree/gptq-3bit--1g-actorder_True) | 3 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 39.64 GB | No | 3-bit, with Act Order and no group size. Lowest possible VRAM requirements. May be lower quality than 3-bit 128g. |
-| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Rogue-Rose-103b-v0.2-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 41.52 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
+| [gptq-3bit-128g-actorder_True](https://huggingface.co/TheBloke/Rogue-Rose-103b-v0.2-GPTQ/tree/gptq-3bit-128g-actorder_True) | 3 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 41.52 GB | No | 3-bit, with group size 128g and act-order. Higher quality than 128g-False. |
+| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/Rogue-Rose-103b-v0.2-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 48.87 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
+| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/Rogue-Rose-103b-v0.2-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 48.87 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |


 <!-- README_GPTQ.md-provided-files end -->
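Editor's note: the branch names in the table above double as git revisions, so a specific quantisation can be loaded directly. A minimal sketch, assuming the standard `transformers` API (the branch chosen here is illustrative, and loading GPTQ checkpoints this way additionally requires the `optimum` and `auto-gptq` packages):

```python
# Minimal sketch: loading one quantisation branch from the table above.
# `revision` selects the git branch that holds the chosen GPTQ build.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "TheBloke/Rogue-Rose-103b-v0.2-GPTQ"
branch = "gptq-3bit--1g-actorder_True"  # lowest-VRAM build per the table

model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    device_map="auto",  # spread the quantised weights across available devices
    revision=branch,
)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
```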
@@ -294,7 +296,8 @@ model = AutoModelForCausalLM.from_pretrained(model_name_or_path,

 tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

-prompt = "
+prompt = "Write a story about llamas"
+system_message = "You are a story writing assistant"
 prompt_template=f'''You are a helpful AI assistant.

 USER: {prompt}
@@ -331,7 +334,7 @@ print(pipe(prompt_template)[0]['generated_text'])

 The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.

-[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama
+[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.

 For a list of clients/servers, please see "Known compatible clients / servers", above.
 <!-- README_GPTQ.md-compatibility end -->
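Editor's note: a hedged sketch of the direct AutoGPTQ path mentioned in this hunk, using the library's standard `from_quantized` entry point (argument values are illustrative, not taken from this README):

```python
# Hedged sketch: loading the GPTQ files with auto-gptq directly rather than
# through Transformers. Argument values are illustrative.
from auto_gptq import AutoGPTQForCausalLM
from transformers import AutoTokenizer

model_name_or_path = "TheBloke/Rogue-Rose-103b-v0.2-GPTQ"

model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    use_safetensors=True,
    device="cuda:0",
)
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
```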