TheBloke committed
Commit
6d91f11
1 Parent(s): 807d862

Initial GPTQ model commit

Files changed (1):
  1. README.md +173 -0
README.md ADDED
---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# Henk717's Chronoboros 33B GPTQ

These files are GPTQ 4bit model files for [Henk717's Chronoboros 33B](https://huggingface.co/Henk717/chronoboros-33B).

It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Chronoboros-33B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/Chronoboros-33B-GGML)
* [Unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/Henk717/chronoboros-33B)

## Prompt template: Vicuna

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.

USER: PROMPT
ASSISTANT:
```

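When building prompts programmatically, replace `PROMPT` with the actual user message. A minimal Python sketch (the `user_message` value is just an example input):

```python
# Fill the Vicuna template with the user's message before tokenising.
user_message = "Tell me about AI"  # example input

prompt = f"""A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.

USER: {user_message}
ASSISTANT:"""
```
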
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/Chronoboros-33B-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `Chronoboros-33B-GPTQ`.
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
    * Note that you do not need to set GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

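If you'd rather script the download than use the web UI, here is a minimal sketch using the `huggingface_hub` library (an assumption on my part, not part of the original instructions; install it with `pip install huggingface_hub`, and treat `local_dir` as a placeholder path):

```python
# Download every file in the repo to a local directory.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/Chronoboros-33B-GPTQ",
    local_dir="models/Chronoboros-33B-GPTQ"  # placeholder path, choose your own
)
```
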
## How to use this GPTQ model from Python code

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

`GITHUB_ACTIONS=true pip install auto-gptq`

Then try the following example code:

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_name_or_path = "TheBloke/Chronoboros-33B-GPTQ"
model_basename = "chronoboros-33b-GPTQ-4bit--1g.act.order"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

# quantize_config=None: the parameters are read from quantize_config.json in the repo
model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

# Build the Vicuna-format prompt; {prompt} is interpolated into the template
prompt = "Tell me about AI"
prompt_template=f'''A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.

USER: {prompt}
ASSISTANT:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```

## Provided files

**chronoboros-33b-GPTQ-4bit--1g.act.order.safetensors**

This will work with AutoGPTQ and CUDA versions of GPTQ-for-LLaMa. There are reports of issues with the Triton mode of recent GPTQ-for-LLaMa. If you have issues, please use AutoGPTQ instead.

As this is a Llama model, it is also supported by ExLlama, which will provide a 2x speedup over AutoGPTQ and GPTQ-for-LLaMa.

It was created without group_size to lower VRAM requirements, and with --act-order (desc_act) to boost inference accuracy as much as possible. (See the configuration sketch after the list below.)

* `chronoboros-33b-GPTQ-4bit--1g.act.order.safetensors`
  * Works with AutoGPTQ in CUDA or Triton modes.
  * [ExLlama](https://github.com/turboderp/exllama) supports Llama 4-bit GPTQs, and will provide a 2x speedup over AutoGPTQ and GPTQ-for-LLaMa.
  * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
  * Works with text-generation-webui, including one-click-installers.
  * Parameters: Groupsize = -1. Act Order / desc_act = True.

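For reference, here is a rough sketch of how those parameters map onto AutoGPTQ's `BaseQuantizeConfig` (imported in the Python example above). You should not normally need to construct this yourself, since the values are read from `quantize_config.json`; other fields are left at their defaults here:

```python
from auto_gptq import BaseQuantizeConfig

# Rough equivalent of this repo's quantisation settings:
# 4-bit, no grouping (group_size = -1), act-order (desc_act) enabled.
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=-1,
    desc_act=True
)
```
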
<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz.

**Patreon special mentions**: RoA, Lone Striker, Gabriel Puliatti, Derek Yates, Randy H, Jonathan Leane, Eugene Pentland, Karl Bernard, Viktor Bowallius, senxiiz, Daniel P. Andersen, Pierre Kircher, Deep Realms, Cory Kujawski, Oscar Rangel, Fen Risland, Ajan Kanaga, LangChain4j, webtim, Nikolai Manek, Trenton Dambrowitz, Raven Klaugh, Kalila, Khalefa Al-Ahmad, Chris McCloskey, Luke @flexchar, Ai Maven, Dave, Asp the Wyvern, Sean Connelly, Imad Khwaja, Space Cruiser, Rainer Wilmers, subjectnull, Alps Aficionado, Willian Hasse, Fred von Graf, Artur Olbinski, Johann-Peter Hartmann, WelcomeToTheClub, Willem Michiel, Michael Levine, Iucharbius, Spiking Neurons AB, K, biorpg, John Villwock, Pyrater, Greatston Gnanesh, Mano Prime, Junyu Yang, Stephen Murray, John Detwiler, Luke Pendergrass, terasurfer, Pieter, zynix, Edmond Seymore, theTransient, Nathan LeClaire, vamX, Kevin Schuppel, Preetika Verma, ya boyyy, Alex, SuperWojo, Ghost, Joseph William Delisle, Matthew Berman, Talal Aujan, chris gileta, Illia Dulskyi.

Thank you to all my generous patrons and donators!

<!-- footer end -->

# Original model card: Henk717's Chronoboros 33B

This model was the result of a 50/50 average weight merge between Airoboros-33B-1.4 and Chronos-33B.

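For illustration, a 50/50 average weight merge can be done by averaging every parameter tensor of the two source models. The sketch below shows the general technique only; it is not Henk717's actual merge script, and the paths are placeholders:

```python
import torch
from transformers import AutoModelForCausalLM

# Load both source models (placeholder paths; both must share the same architecture).
model_a = AutoModelForCausalLM.from_pretrained("path/to/Airoboros-33B-1.4", torch_dtype=torch.float16)
model_b = AutoModelForCausalLM.from_pretrained("path/to/Chronos-33B", torch_dtype=torch.float16)

state_a = model_a.state_dict()
state_b = model_b.state_dict()

# 50/50 average of every parameter tensor
merged = {name: (state_a[name] + state_b[name]) / 2 for name in state_a}

model_a.load_state_dict(merged)
model_a.save_pretrained("path/to/chronoboros-33B")  # placeholder output path
```
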
The license is inherited from all merged models, which includes the LLaMA license requiring you to own a license to use the LLaMA models.

If you have such a license grant from Facebook, you can request access to this model.