divinit committed on
Commit bfa026a
1 Parent(s): 7fa822b

Upload 9 files

README.md ADDED
@@ -0,0 +1,211 @@
---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
    <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p><a href="https://discord.gg/Jq4vkcDakD">Chat & support: my new Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<!-- header end -->

# CalderaAI's 30B Lazarus GPTQ

These files are 3-bit GPTQ model files for [CalderaAI's 30B Lazarus](https://huggingface.co/CalderaAI/30B-Lazarus).

They are the result of quantising the model to 3-bit using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).

It was quantised to 3-bit so that it can hopefully be loaded on smaller GPUs, such as a 3060 with 12GB of VRAM.

## Repositories available

* [CalderaAI's 4-bit GPTQ model for GPU inference](https://huggingface.co/CalderaAI/30B-Lazarus-GPTQ4bit)
* [3-bit GPTQ model for GPU inference](https://huggingface.co/TheBloke/30B-Lazarus-3bit-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/30B-Lazarus-GGML)
* [CalderaAI's unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/CalderaAI/30B-Lazarus)

## Prompt template

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction: prompt

### Response:
```

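As a rough illustration, the template can be filled in from Python with a plain f-string before the text is sent to the model. The `format_prompt` helper below is a hypothetical name used only for this sketch, not part of any library.

```python
# Minimal sketch: wrap a user request in the Alpaca-style template shown above.
# `format_prompt` is a hypothetical helper, not part of transformers or AutoGPTQ.
def format_prompt(instruction: str) -> str:
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction: {instruction}\n\n"
        "### Response:"
    )

print(format_prompt("Summarise the plot of Hamlet in two sentences."))
```
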
## How to easily download and use this model in text-generation-webui

Please make sure you're using the latest version of text-generation-webui.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/30B-Lazarus-3bit-GPTQ`.
3. Click **Download**.
4. The model will start downloading. Once it's finished, it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `30B-Lazarus-3bit-GPTQ`.
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
   * Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation tab** and enter a prompt to get started!

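If you prefer to fetch the files outside the UI, one hedged alternative is to download the repository with `huggingface_hub` and place it in text-generation-webui's models directory yourself. The local path below is only an example.

```python
# Sketch: download the quantised repo with huggingface_hub instead of the webui downloader.
# The target directory is an example; point it at wherever your text-generation-webui models live.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/30B-Lazarus-3bit-GPTQ",
    local_dir="models/30B-Lazarus-3bit-GPTQ",  # example path inside text-generation-webui
)
```
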
## How to use this GPTQ model from Python code

First make sure you have [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) installed:

`pip install auto-gptq`

Then try the following example code:

```python
from transformers import AutoTokenizer, pipeline, logging
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/30B-Lazarus-3bit-GPTQ"
model_basename = "gptq_model-3bit-128g"

use_triton = False

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(model_name_or_path,
        model_basename=model_basename,
        use_safetensors=True,
        trust_remote_code=False,
        device="cuda:0",
        use_triton=use_triton,
        quantize_config=None)

# Alpaca-style prompt template, as described in the Prompt template section above.
prompt = "Tell me about AI"
prompt_template = f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction: {prompt}

### Response:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

# Prevent printing spurious transformers error when using pipeline with AutoGPTQ
logging.set_verbosity(logging.CRITICAL)

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```

## Provided files

**gptq_model-3bit-128g.safetensors**

This will work with AutoGPTQ and the CUDA versions of GPTQ-for-LLaMa. It will not work with ExLlama, which only supports 4-bit models. There are reports of issues with the Triton mode of recent GPTQ-for-LLaMa; if you have issues, please use AutoGPTQ instead. A quick way to sanity-check the downloaded file is sketched below.

* `gptq_model-3bit-128g.safetensors`
  * Works with AutoGPTQ in CUDA or Triton modes.
  * Will not work with ExLlama, as it's a 3-bit model.
  * Works with GPTQ-for-LLaMa in CUDA mode. May have issues with GPTQ-for-LLaMa Triton mode.
  * Works with text-generation-webui, including one-click-installers.
  * Parameters: Groupsize = 128. Act Order / desc_act = False.

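As a quick sanity check after downloading, you can list a few tensors from the safetensors file without loading the whole model. This is only an illustrative sketch using the `safetensors` library; the file path is an assumption and should point at wherever the file was downloaded.

```python
# Sketch: peek inside the quantised checkpoint to confirm it contains GPTQ tensors.
# The path is an example; adjust it to the actual download location.
from safetensors import safe_open

with safe_open("gptq_model-3bit-128g.safetensors", framework="pt", device="cpu") as f:
    names = list(f.keys())

print(f"{len(names)} tensors in file")
# GPTQ checkpoints typically contain qweight / qzeros / scales tensors per layer.
print([n for n in names if "qweight" in n][:3])
```
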
<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/Jq4vkcDakD)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: Mano Prime, Fen Risland, Derek Yates, Preetika Verma, webtim, Sean Connelly, Alps Aficionado, Karl Bernard, Junyu Yang, Nathan LeClaire, Chris McCloskey, Lone Striker, Asp the Wyvern, Eugene Pentland, Imad Khwaja, trip7s trip, WelcomeToTheClub, John Detwiler, Artur Olbinski, Khalefa Al-Ahmad, Trenton Dambrowitz, Talal Aujan, Kevin Schuppel, Luke Pendergrass, Pyrater, Joseph William Delisle, terasurfer, vamX, Gabriel Puliatti, David Flickinger, Jonathan Leane, Iucharbius, Luke, Deep Realms, Cory Kujawski, ya boyyy, Illia Dulskyi, senxiiz, Johann-Peter Hartmann, John Villwock, K, Ghost, Spiking Neurons AB, Nikolai Manek, Rainer Wilmers, Pierre Kircher, biorpg, Space Cruiser, Ai Maven, subjectnull, Willem Michiel, Ajan Kanaga, Kalila, chris gileta, Oscar Rangel.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: CalderaAI's 30B Lazarus


## 30B-Lazarus

## Composition:
[] = applied as LoRA to a composite model | () = combined as composite models

[SuperCOT([gpt4xalpaca(manticorechatpygalpha+vicunaunlocked)]+[StoryV2(kaiokendev-SuperHOT-LoRA-prototype30b-8192)])]

This model is the result of experimentally applying LoRAs to language models and model merges other than the base HuggingFace-format LLaMA model they were intended for.
The desired outcome is to additively apply the desired features without paradoxically watering down a model's effective behavior.

Potential limitations - LoRAs applied on top of each other may compete with one another.

Subjective results - very promising. Further experimental tests and objective tests are required.

Instruct and Setup Suggestions:

Alpaca instruct is primary; the Vicuna instruct format may also work.
If using KoboldAI or Text-Generation-WebUI, we recommend switching between the Godlike and Storywriter presets and adjusting output length plus the instructions kept in memory.
Other presets, as well as custom settings, can yield highly different results, especially Temperature.
If poking it with a stick doesn't work, try poking harder.

## Language Models and LoRAs Used Credits:

manticore-30b-chat-pyg-alpha [Epoch 0.4] by openaccess-ai-collective

https://huggingface.co/openaccess-ai-collective/manticore-30b-chat-pyg-alpha

SuperCOT-LoRA [30B] by kaiokendev

https://huggingface.co/kaiokendev/SuperCOT-LoRA

Storytelling-LLaMa-LoRA [30B, Version 2] by GamerUntouch

https://huggingface.co/GamerUntouch/Storytelling-LLaMa-LoRAs

SuperHOT Prototype [30B, 8k ctx] by kaiokendev

https://huggingface.co/kaiokendev/SuperHOT-LoRA-prototype

ChanSung's GPT4-Alpaca-LoRA
https://huggingface.co/chansung/gpt4-alpaca-lora-30b

Neko-Institute-of-Science's Vicuna Unlocked LoRA (Checkpoint 46080)
https://huggingface.co/Neko-Institute-of-Science/VicUnLocked-30b-LoRA

Also thanks to Meta for LLaMA.

Each model and LoRA was hand-picked and considered for what it could contribute to this ensemble.
Thanks to each and every one of you for your incredible work developing some of the best things
to come out of this community.
config.json ADDED
@@ -0,0 +1,24 @@
{
    "_name_or_path": "30B-Lazarus",
    "architectures": [
        "LlamaForCausalLM"
    ],
    "bos_token_id": 1,
    "eos_token_id": 2,
    "hidden_act": "silu",
    "hidden_size": 6656,
    "initializer_range": 0.02,
    "intermediate_size": 17920,
    "max_position_embeddings": 2048,
    "max_sequence_length": 2048,
    "model_type": "llama",
    "num_attention_heads": 52,
    "num_hidden_layers": 60,
    "pad_token_id": 0,
    "rms_norm_eps": 1e-06,
    "tie_word_embeddings": false,
    "torch_dtype": "float16",
    "transformers_version": "4.28.1",
    "use_cache": true,
    "vocab_size": 32000
}
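For reference, the architecture fields above can be read back with transformers. This is only an illustrative check, assuming the repo id from this model card:

```python
# Sketch: load the model config and print the key architecture fields.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("TheBloke/30B-Lazarus-3bit-GPTQ")
print(config.model_type, config.hidden_size, config.num_hidden_layers, config.num_attention_heads)
# Expected, per config.json above: llama 6656 60 52 (the LLaMA-30B shape).
```
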
generation_config.json ADDED
@@ -0,0 +1,7 @@
{
    "_from_model_config": true,
    "bos_token_id": 1,
    "eos_token_id": 2,
    "pad_token_id": 0,
    "transformers_version": "4.28.1"
}
quantize_config.json ADDED
@@ -0,0 +1,10 @@
{
    "bits": 3,
    "group_size": 128,
    "damp_percent": 0.01,
    "desc_act": false,
    "sym": true,
    "true_sequential": true,
    "model_name_or_path": null,
    "model_file_base_name": null
}
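These are the same settings that would be passed to AutoGPTQ's `BaseQuantizeConfig` when quantising. The sketch below only restates that mapping; the files in this repo were already produced with these values.

```python
# Sketch: the quantisation settings above expressed as an AutoGPTQ BaseQuantizeConfig.
from auto_gptq import BaseQuantizeConfig

quantize_config = BaseQuantizeConfig(
    bits=3,
    group_size=128,
    damp_percent=0.01,
    desc_act=False,
    sym=True,
    true_sequential=True,
)
```
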
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
{
    "bos_token": {
        "content": "<s>",
        "lstrip": false,
        "normalized": true,
        "rstrip": false,
        "single_word": false
    },
    "eos_token": {
        "content": "</s>",
        "lstrip": false,
        "normalized": true,
        "rstrip": false,
        "single_word": false
    },
    "unk_token": {
        "content": "<unk>",
        "lstrip": false,
        "normalized": true,
        "rstrip": false,
        "single_word": false
    }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9e556afd44213b6bd1be2b850ebbbd98f5481437a8021afaf58ee7fb1818d347
size 499723
tokenizer_config.json ADDED
@@ -0,0 +1,33 @@
{
    "add_bos_token": true,
    "add_eos_token": false,
    "bos_token": {
        "__type": "AddedToken",
        "content": "<s>",
        "lstrip": false,
        "normalized": true,
        "rstrip": false,
        "single_word": false
    },
    "clean_up_tokenization_spaces": false,
    "eos_token": {
        "__type": "AddedToken",
        "content": "</s>",
        "lstrip": false,
        "normalized": true,
        "rstrip": false,
        "single_word": false
    },
    "model_max_length": 2048,
    "pad_token": null,
    "sp_model_kwargs": {},
    "tokenizer_class": "LlamaTokenizer",
    "unk_token": {
        "__type": "AddedToken",
        "content": "<unk>",
        "lstrip": false,
        "normalized": true,
        "rstrip": false,
        "single_word": false
    }
}
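As a quick check that the tokenizer files above load as expected, the sketch below pulls the tokenizer from this repo (the repo id is the one given in this model card) and inspects its special tokens and context length.

```python
# Sketch: load the tokenizer from this repo and inspect special tokens and context length.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("TheBloke/30B-Lazarus-3bit-GPTQ", use_fast=True)
print(tokenizer.bos_token, tokenizer.eos_token, tokenizer.unk_token)  # <s> </s> <unk>
print(tokenizer.model_max_length)  # 2048, per tokenizer_config.json above
```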