TheBloke committed on
Commit 19e4bab
1 Parent(s): 367d501

Initial merged FP16 model commit

Files changed (1)
README.md ADDED (+201 lines)
---
inference: false
license: other
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/theblokeai">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# LmSys' Vicuna 33B 1.3 (final) fp16

These files are fp16 pytorch format model files for [LmSys' Vicuna 33B 1.3 (final)](https://huggingface.co/lmsys/vicuna-33b-v1.3) merged with [Kaio Ken's SuperHOT 8K](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test).

[Kaio Ken's SuperHOT 30b LoRA](https://huggingface.co/kaiokendev/superhot-30b-8k-no-rlhf-test) is merged onto the base model, and 8K context can then be achieved during inference by loading with `trust_remote_code=True`.

Note that `config.json` has been set to a sequence length of 8192. This can be changed to 4096 if you want to try a smaller sequence length.
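
If you just want to inspect or change the configured length without editing `config.json` on disk, you can override it from Python. A minimal sketch (the override only takes effect once the config object is passed to `from_pretrained`, as in the full example further down):

```python
from transformers import AutoConfig

# Load the shipped config (trust_remote_code pulls in the custom modelling/config code)
config = AutoConfig.from_pretrained("TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16", trust_remote_code=True)
print(config.max_position_embeddings)   # 8192 as shipped
config.max_position_embeddings = 4096   # try a smaller sequence length
```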

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Vicuna-33B-1-3-SuperHOT-8K-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/Vicuna-33B-1-3-SuperHOT-8K-GGML)
* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16)
* [Unquantised base fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/lmsys/vicuna-33b-v1.3)

## How to use this model from Python code

First make sure you have Einops installed:

```
pip3 install einops
```

Then run the following code. `config.json` defaults to a sequence length of 8192, but you can also configure this in your Python code.

The provided modelling code, activated with `trust_remote_code=True`, will automatically set the `scale` parameter from the configured `max_position_embeddings`. E.g. for 8192, `scale` is set to `4` (8192 divided by the base model's original 2048-token context).

```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM, pipeline

model_name_or_path = "TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

config = AutoConfig.from_pretrained(model_name_or_path, trust_remote_code=True)
# Change this to the sequence length you want
config.max_position_embeddings = 8192

model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
        config=config,
        trust_remote_code=True,
        device_map='auto')

# Note: check that this prompt template is correct for this model!
prompt = "Tell me about AI"
prompt_template=f'''USER: {prompt}
ASSISTANT:'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.95,
    repetition_penalty=1.15
)

print(pipe(prompt_template)[0]['generated_text'])
```
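
As a quick way to see the extended context in action, you can continue from the example above with a prompt well past the base model's original 2048-token limit. This is only an illustrative sketch; the filler text and token counts are arbitrary, and it reuses `model`, `tokenizer` and `config` from the code above.

```python
# Continuing from the example above: build a deliberately long prompt (> 2048 tokens)
# and confirm it still fits inside the configured 8192-token window before generating.
long_context = "The quick brown fox jumps over the lazy dog. " * 400
long_prompt = f'''USER: {long_context}

Summarise the text above in one sentence.
ASSISTANT:'''

input_ids = tokenizer(long_prompt, return_tensors='pt').input_ids.cuda()
print(f"Prompt is {input_ids.shape[1]} tokens (configured limit: {config.max_position_embeddings})")

output = model.generate(inputs=input_ids, temperature=0.7, max_new_tokens=256)
# Print only the newly generated tokens, not the long prompt
print(tokenizer.decode(output[0][input_ids.shape[1]:]))
```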

## Using other UIs: monkey patch

Provided in the repo is `llama_rope_scaled_monkey_patch.py`, written by @kaiokendev.

It can theoretically be added to any Python UI or custom code to achieve the same result as `trust_remote_code=True`. I have not tested this, and it should be superseded by using `trust_remote_code=True`, but I include it for completeness and for interest.
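
For those who want to try it, usage would look roughly like the sketch below. This is untested; the helper name is an assumption (check `llama_rope_scaled_monkey_patch.py` for the function it actually exposes), and the patch must be applied before the model is loaded, in which case `trust_remote_code=True` is not used:

```python
# Untested sketch: apply the RoPE-scaling monkey patch BEFORE loading the model.
# The helper name below is an assumption -- check llama_rope_scaled_monkey_patch.py
# for the function it actually exposes.
from llama_rope_scaled_monkey_patch import replace_llama_rope_with_scaled_rope
from transformers import AutoTokenizer, AutoModelForCausalLM

replace_llama_rope_with_scaled_rope()  # patches transformers' LLaMA RoPE in place

model_name_or_path = "TheBloke/Vicuna-33B-1-3-SuperHOT-8K-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map='auto')
```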

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute.

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Luke from CarbonQuill, Aemon Algiz, Dmitriy Samsonov.

**Patreon special mentions**: zynix, ya boyyy, Trenton Dambrowitz, Imad Khwaja, Alps Aficionado, chris gileta, John Detwiler, Willem Michiel, RoA, Mano Prime, Rainer Wilmers, Fred von Graf, Matthew Berman, Ghost, Nathan LeClaire, Iucharbius, Ai Maven, Illia Dulskyi, Joseph William Delisle, Space Cruiser, Lone Striker, Karl Bernard, Eugene Pentland, Greatston Gnanesh, Jonathan Leane, Randy H, Pierre Kircher, Willian Hasse, Stephen Murray, Alex, terasurfer, Edmond Seymore, Oscar Rangel, Luke Pendergrass, Asp the Wyvern, Junyu Yang, David Flickinger, Luke, Spiking Neurons AB, subjectnull, Pyrater, Nikolai Manek, senxiiz, Ajan Kanaga, Johann-Peter Hartmann, Artur Olbinski, Kevin Schuppel, Derek Yates, Kalila, K, Talal Aujan, Khalefa Al-Ahmad, Gabriel Puliatti, John Villwock, WelcomeToTheClub, Daniel P. Andersen, Preetika Verma, Deep Realms, Fen Risland, trip7s trip, webtim, Sean Connelly, Michael Levine, Chris McCloskey, biorpg, vamX, Viktor Bowallius, Cory Kujawski.

Thank you to all my generous patrons and donaters!

<!-- footer end -->

# Original model card: Kaio Ken's SuperHOT 8K

### SuperHOT Prototype 2 w/ 8K Context

This is a second prototype of SuperHOT, this time 30B with 8K context and no RLHF, using the same technique described in [the github blog](https://kaiokendev.github.io/til#extending-context-to-8k).
Tests have shown that the model does indeed leverage the extended context at 8K.

You will need to **use either the monkeypatch** or, if you are already using the monkeypatch, **change the scaling factor to 0.25 and the maximum sequence length to 8192**.

#### Looking for Merged & Quantized Models?
- 30B 4-bit CUDA: [tmpupload/superhot-30b-8k-4bit-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-safetensors)
- 30B 4-bit CUDA 128g: [tmpupload/superhot-30b-8k-4bit-128g-safetensors](https://huggingface.co/tmpupload/superhot-30b-8k-4bit-128g-safetensors)

#### Training Details
I trained the LoRA with the following configuration (a PEFT-style sketch of this setup follows the list):
- 1200 samples (~400 samples over 2048 sequence length)
- learning rate of 3e-4
- 3 epochs
- The exported modules are:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - no bias
- Rank = 4
- Alpha = 8
- no dropout
- weight decay of 0.1
- AdamW beta1 of 0.9 and beta2 0.99, epsilon of 1e-5
- Trained on 4-bit base model
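
For reference, here is a minimal sketch of the configuration listed above expressed as a Hugging Face PEFT `LoraConfig`. This is an illustration only; kaiokendev's actual training code is not included here and may differ in details such as the data pipeline and optimiser setup.

```python
# Minimal sketch of the LoRA configuration listed above, using the PEFT library.
# Illustrative only -- the original training code may differ.
from peft import LoraConfig

lora_config = LoraConfig(
    r=4,                       # Rank = 4
    lora_alpha=8,              # Alpha = 8
    lora_dropout=0.0,          # no dropout
    bias="none",               # no bias
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
# The remaining items (lr 3e-4, 3 epochs, weight decay 0.1, AdamW betas 0.9/0.99,
# eps 1e-5, 4-bit base model) belong to the trainer/optimiser setup, not to LoraConfig.
```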

# Original model card: LmSys' Vicuna 33B 1.3 (final)


# Vicuna Model Card

## Model Details

Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.

- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).

### Model Sources

- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/

## Uses

The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## How to Get Started with the Model

Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.

## Training Details

Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 140K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).

## Evaluation

Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf).

## Difference between different versions of Vicuna

See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)