Initial merged FP16 model commit
README.md CHANGED
@@ -17,105 +17,17 @@ license: other
</div>
<!-- header end -->

-# Eric Hartford's Wizard Vicuna 30B Uncensored merged with Kaio Ken's SuperHOT 8K

-These files are

-It is the result of

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Superhot-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/none)
-* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/
-
-## How to easily download and use this model in text-generation-webui
-
-Please make sure you're using the latest version of text-generation-webui.
-
-1. Click the **Model tab**.
-2. Under **Download custom model or LoRA**, enter `TheBloke/Wizard-Vicuna-30B-Superhot-8K-GPTQ`.
-3. Click **Download**.
-4. The model will start downloading. Once it's finished it will say "Done".
-5. In the top left, click the refresh icon next to **Model**.
-6. In the **Model** dropdown, choose the model you just downloaded: `Wizard-Vicuna-30B-Superhot-8K-GPTQ`
-7. The model will automatically load, and is now ready for use!
-8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
-   * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
-9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!
-
-## How to use this GPTQ model from Python code
-
-First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:
-
-`pip install auto-gptq`
-
-Then try the following example code:
-
-```python
-from transformers import AutoTokenizer, pipeline, logging
-from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
-import argparse
-
-model_name_or_path = "TheBloke/Wizard-Vicuna-30B-Superhot-8K-GPTQ"
-model_basename = "wizard-vicuna-30b-superhot-8k-GPTQ-4bit--1g.act.order"
-
-use_triton = False
-
-tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
-
-model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
-        model_basename=model_basename,
-        use_safetensors=True,
-        trust_remote_code=False,
-        device="cuda:0",
-        use_triton=use_triton,
-        quantize_config=None)
-
-# Note: check the prompt template is correct for this model.
-prompt = "Tell me about AI"
-prompt_template=f'''USER: {prompt}
-ASSISTANT:'''
-
-print("\n\n*** Generate:")
-
-input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
-output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
-print(tokenizer.decode(output[0]))
-
-# Inference can also be done using transformers' pipeline
-
-# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
-logging.set_verbosity(logging.CRITICAL)
-
-print("*** Pipeline:")
-pipe = pipeline(
-    "text-generation",
-    model=model,
-    tokenizer=tokenizer,
-    max_new_tokens=512,
-    temperature=0.7,
-    top_p=0.95,
-    repetition_penalty=1.15
-)
-
-print(pipe(prompt_template)[0]['generated_text'])
-```
-
-## Provided files
-
-**wizard-vicuna-30b-superhot-8k-GPTQ-4bit--1g.act.order.safetensors**
-
-This will work with AutoGPTQ, ExLlama, and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with the Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.
-
-It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible. A minimal quantize-config sketch follows the file list below.
-
-* `wizard-vicuna-30b-superhot-8k-GPTQ-4bit--1g.act.order.safetensors`
-  * Works with AutoGPTQ in CUDA or Triton modes.
-  * LLaMa models also work with [ExLlama](https://github.com/turboderp/exllama), which usually provides much higher performance, and uses less VRAM, than AutoGPTQ.
-  * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
-  * Works with text-generation-webui, including one-click-installers.
-  * Parameters: Groupsize = -1. Act Order / desc_act = True.
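
For reference, a minimal sketch of what those parameters look like as an AutoGPTQ quantize config. This is illustrative only: the quantisation script itself is not part of this repo, and only the `bits`, `group_size` and `desc_act` values are taken from the list above.

```python
from auto_gptq import BaseQuantizeConfig

# Illustrative sketch only: mirrors "Groupsize = -1, Act Order / desc_act = True" above
quantize_config = BaseQuantizeConfig(
    bits=4,         # 4-bit GPTQ
    group_size=-1,  # no grouping, to keep VRAM requirements down
    desc_act=True,  # act-order, to improve quantisation accuracy
)
```

When loading, these values are read from the repo's `quantize_config.json`, which is why they do not need to be set manually in text-generation-webui.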

<!-- footer start -->
## Discord
</div>
<!-- header end -->

+# Eric Hartford's Wizard Vicuna 30B Uncensored merged with Kaio Ken's SuperHOT 8K fp16

+These files are pytorch format fp16 model files for [Eric Hartford's Wizard Vicuna 30B Uncensored merged with Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test).

+It is the result of merging and/or converting the source repository to float16.
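
As a rough sketch of that step (not the exact script used for this commit; the repo ids below are assumptions based on the model names and links in this README), merging the SuperHOT LoRA into the base model and saving plain float16 pytorch files with transformers and peft looks roughly like this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Assumed repo ids, for illustration only
base_repo = "ehartford/Wizard-Vicuna-30B-Uncensored"
lora_repo = "kaiokendev/superhot-30b-8k-no-rlhf-test"
out_dir = "wizard-vicuna-30b-superhot-8k-fp16"

# Load the base model in float16, apply the SuperHOT LoRA, then fold the
# adapter weights into the base weights and save ordinary fp16 pytorch files
base = AutoModelForCausalLM.from_pretrained(base_repo, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, lora_repo).merge_and_unload()

merged.save_pretrained(out_dir)
AutoTokenizer.from_pretrained(base_repo).save_pretrained(out_dir)
```

Note that SuperHOT's extended 8K context also relies on its scaled RoPE change at inference time; the merge above only combines the weights.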

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Superhot-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/none)
+* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Superhot-8K-GPTQ)
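
A minimal sketch of GPU inference from the unquantised fp16 files with plain transformers (the repo id below is a placeholder for this fp16 repo, and the USER/ASSISTANT prompt template is assumed from the rest of this README):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: substitute the actual id of this fp16 repo
model_id = "TheBloke/Wizard-Vicuna-30B-Superhot-8K-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # keep the weights in fp16
    device_map="auto",          # requires accelerate; spreads the 30B model across available GPUs
)

prompt = "Tell me about AI"
prompt_template = f"USER: {prompt}\nASSISTANT:"

input_ids = tokenizer(prompt_template, return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids, do_sample=True, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

A 30B model in fp16 needs on the order of 60 GB of VRAM, which is why the quantised GPTQ and GGML repos above are usually the more practical choice for inference.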

<!-- footer start -->
## Discord