---
base_model: KoboldAI/LLaMA2-13B-Estopia
inference: false
license: cc-by-nc-4.0
model_creator: KoboldAI
model_name: Llama2 13B Estopia
model_type: llama
prompt_template: 'Below is an instruction that describes a task. Write a response
  that appropriately completes the request.


  ### Instruction:

  {prompt}


  ### Response:

  '
quantized_by: TheBloke
tags:
- mergekit
- merge
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# Llama2 13B Estopia - GPTQ
- Model creator: [KoboldAI](https://huggingface.co/KoboldAI)
- Original model: [Llama2 13B Estopia](https://huggingface.co/KoboldAI/LLaMA2-13B-Estopia)

<!-- description start -->
# Description

This repo contains GPTQ model files for [KoboldAI's Llama2 13B Estopia](https://huggingface.co/KoboldAI/LLaMA2-13B-Estopia).

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

<!-- description end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GGUF)
* [KoboldAI's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/KoboldAI/LLaMA2-13B-Estopia)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

<!-- prompt-template end -->
<!-- licensing start -->
## Licensing

The creator of the source model has listed its license as `cc-by-nc-4.0`, and this quantization has therefore used that same license.

As this model is based on Llama 2, it is also subject to the Meta Llama 2 license terms, and the license files for that are additionally included. It should therefore be considered to be licensed under both licenses. I contacted Hugging Face for clarification on dual licensing, but they do not yet have an official position. Should this change, or should Meta provide any feedback on this situation, I will update this section accordingly.

In the meantime, any questions regarding licensing, and in particular how these two licenses might interact, should be directed to the original model repository: [KoboldAI's Llama2 13B Estopia](https://huggingface.co/KoboldAI/LLaMA2-13B-Estopia).
<!-- licensing end -->

<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers

GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.

These GPTQ models are known to work in the following inference servers/webuis.

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)

This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->

<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters

Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.

Each separate quant is in a different branch. See below for instructions on fetching from different branches.

Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.

<details>
<summary>Explanation of GPTQ parameters</summary>

- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.

</details>
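
Each branch records the parameters it was quantised with in a `quantize_config.json` at the root of that branch. As a minimal sketch of how to inspect it with `huggingface_hub` (the exact field names, such as `bits`, `group_size`, `desc_act` and `damp_percent`, are typical AutoGPTQ output but can vary between versions):

```python
import json

from huggingface_hub import hf_hub_download

# Fetch quantize_config.json from the branch of interest and print it.
# Typical fields include bits, group_size, desc_act and damp_percent,
# corresponding to the Bits, GS, Act Order and Damp % columns below.
config_path = hf_hub_download(
    repo_id="TheBloke/LLaMA2-13B-Estopia-GPTQ",
    filename="quantize_config.json",
    revision="main",  # or e.g. "gptq-4bit-32g-actorder_True"
)
with open(config_path) as f:
    print(json.dumps(json.load(f), indent=2))
```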

| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.26 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.00 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 13.36 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 13.65 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 14.54 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.51 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |

<!-- README_GPTQ.md-provided-files end -->

<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches

### In text-generation-webui

To download from the `main` branch, enter `TheBloke/LLaMA2-13B-Estopia-GPTQ` in the "Download model" box.

To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/LLaMA2-13B-Estopia-GPTQ:gptq-4bit-32g-actorder_True`

### From the command line

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `LLaMA2-13B-Estopia-GPTQ`:

```shell
mkdir LLaMA2-13B-Estopia-GPTQ
huggingface-cli download TheBloke/LLaMA2-13B-Estopia-GPTQ --local-dir LLaMA2-13B-Estopia-GPTQ --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir LLaMA2-13B-Estopia-GPTQ
huggingface-cli download TheBloke/LLaMA2-13B-Estopia-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir LLaMA2-13B-Estopia-GPTQ --local-dir-use-symlinks False
```
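
If you would rather stay in Python, `huggingface_hub`'s `snapshot_download` mirrors the commands above (a minimal sketch; drop `revision` to fetch the `main` branch):

```python
from huggingface_hub import snapshot_download

# Download one branch of the repo to a local folder, equivalent to the
# huggingface-cli commands above.
snapshot_download(
    repo_id="TheBloke/LLaMA2-13B-Estopia-GPTQ",
    revision="gptq-4bit-32g-actorder_True",
    local_dir="LLaMA2-13B-Estopia-GPTQ",
    local_dir_use_symlinks=False,
)
```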

<details>
<summary>More advanced huggingface-cli download usage</summary>

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list this as the default option, is that the files are then hidden away in a cache folder, making it harder to know where your disk space is being used and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
mkdir LLaMA2-13B-Estopia-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/LLaMA2-13B-Estopia-GPTQ --local-dir LLaMA2-13B-Estopia-GPTQ --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

### With `git` (**not** recommended)

To clone a specific branch with `git`, use a command like this:

```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/LLaMA2-13B-Estopia-GPTQ
```

Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space, as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob).

<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/LLaMA2-13B-Estopia-GPTQ`.

    - To download from a specific branch, enter for example `TheBloke/LLaMA2-13B-Estopia-GPTQ:gptq-4bit-32g-actorder_True`
    - See Provided Files above for the list of branches for each option.

3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `LLaMA2-13B-Estopia-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.

    - Note that you do not need to, and should not, set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.

9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!

<!-- README_GPTQ.md-text-generation-webui end -->
231
+
232
+ <!-- README_GPTQ.md-use-from-tgi start -->
233
+ ## Serving this model from Text Generation Inference (TGI)
234
+
235
+ It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
236
+
237
+ Example Docker parameters:
238
+
239
+ ```shell
240
+ --model-id TheBloke/LLaMA2-13B-Estopia-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
241
+ ```
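
Those parameters are arguments to TGI itself, so a complete `docker run` invocation might look like the following (the volume path and published port here are illustrative, not prescriptive):

```shell
docker run --gpus all --shm-size 1g -p 3000:3000 -v $PWD/tgi-data:/data \
  ghcr.io/huggingface/text-generation-inference:1.1.0 \
  --model-id TheBloke/LLaMA2-13B-Estopia-GPTQ --port 3000 --quantize gptq \
  --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```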

Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):

```shell
pip3 install huggingface-hub
```

```python
from huggingface_hub import InferenceClient

endpoint_url = "https://your-endpoint-url-here"

prompt = "Tell me about AI"
prompt_template = f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
'''

client = InferenceClient(endpoint_url)
response = client.text_generation(
    prompt_template,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model

### Install the necessary packages

Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```

If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:

```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
301
+
302
+ ### Example Python code
303
+
304
+ ```python
305
+ from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
306
+
307
+ model_name_or_path = "TheBloke/LLaMA2-13B-Estopia-GPTQ"
308
+ # To use a different branch, change revision
309
+ # For example: revision="gptq-4bit-32g-actorder_True"
310
+ model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
311
+ device_map="auto",
312
+ trust_remote_code=False,
313
+ revision="main")
314
+
315
+ tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
316
+
317
+ prompt = "Write a story about llamas"
318
+ system_message = "You are a story writing assistant"
319
+ prompt_template=f'''Below is an instruction that describes a task. Write a response that appropriately completes the request.
320
+
321
+ ### Instruction:
322
+ {prompt}
323
+
324
+ ### Response:
325
+ '''
326
+
327
+ print("\n\n*** Generate:")
328
+
329
+ input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
330
+ output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
331
+ print(tokenizer.decode(output[0]))
332
+
333
+ # Inference can also be done using transformers' pipeline
334
+
335
+ print("*** Pipeline:")
336
+ pipe = pipeline(
337
+ "text-generation",
338
+ model=model,
339
+ tokenizer=tokenizer,
340
+ max_new_tokens=512,
341
+ do_sample=True,
342
+ temperature=0.7,
343
+ top_p=0.95,
344
+ top_k=40,
345
+ repetition_penalty=1.1
346
+ )
347
+
348
+ print(pipe(prompt_template)[0]['generated_text'])
349
+ ```
350
+ <!-- README_GPTQ.md-use-from-python end -->

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.

[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.

For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donaters!

And thank you again to a16z for their generous grant.

<!-- footer end -->

# Original model card: KoboldAI's Llama2 13B Estopia

# Introduction
- Estopia is a model focused on improving the dialogue and prose returned when using the instruct format. As a side benefit, character cards and similar also seem to have improved, with details remembered well in many cases.
- It focuses on "guided narratives" - using instructions to guide or explore fictional stories, where you act as a guide for the AI to narrate and fill in the details.
- It has primarily been tested around prose, using instructions to guide narrative, detail retention and "neutrality" - in particular with regards to plot armour. Unless you define different rules for your adventure / narrative with instructions, it should be realistic in the responses provided.
- It has been tested using different modes, such as instruct, chat, adventure and story modes - and should be able to do them all to a degree, with its strengths being instruct and adventure, and story a close second.

# Usage
- The Estopia model has been tested primarily using the Alpaca format, but given the range of models included it likely has some understanding of other formats too. Some examples of tested formats are below:
  - ```\n### Instruction:\nWhat colour is the sky?\n### Response:\nThe sky is...```
  - ```<Story text>\n***\nWrite a summary of the text above\n***\nThe story starts by...```
    - Using the Kobold Lite AI adventure mode
  - ```User:Hello there!\nAssistant:Good morning...\n```
- For settings, the following are recommended for general use (see the sketch after this list for how these map onto common generation APIs):
  - Temperature: 0.8-1.2
  - Min P: 0.05-0.1
  - Max P: 0.92, or 1 if using a Min P greater than 0
  - Top K: 0
  - Response length: Most likely higher than your usual amount - for example, a commonly selected value is 512.
    - Note: Responses are not guaranteed to always be this length. On occasion they may be shorter, if they convey the response entirely; other times they could run to upwards of this value. It depends mostly on the character card, instructions, etc.
  - Rep Pen: 1.1
  - Rep Pen Range: 2 or 3x your response length
  - Stopping tokens (not needed, but can help if the AI is writing too much):
    - ```##||$||---||$||ASSISTANT:||$||[End||$||</s>``` - a single string for Kobold Lite combining the ones below
    - ```##```
    - ```---```
    - ```ASSISTANT:```
    - ```[End```
    - ```</s>```
- The settings above should provide a generally good experience, balancing instruction following and creativity. Generally, the higher you set the temperature, the greater the creativity and the higher the chance of logical errors in the AI's responses.
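
As a rough sketch of how those recommendations map onto `transformers` generation arguments (an illustrative mapping, not part of the original recommendations: it assumes `model`, `tokenizer` and `prompt` are defined as in the GPTQ Python example earlier in this README, and `min_p` is only available in recent transformers releases):

```python
# Illustrative mapping of the recommended settings onto transformers' generate().
# Assumes `model`, `tokenizer` and `prompt` are defined as in the earlier example.
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda()
output = model.generate(
    inputs=input_ids,
    do_sample=True,
    temperature=1.0,         # recommended range 0.8-1.2
    min_p=0.05,              # recommended range 0.05-0.1 (recent transformers only)
    top_p=1.0,               # "Max P": 1 when Min P is greater than 0
    top_k=0,                 # 0 disables top-k, matching the recommendation
    repetition_penalty=1.1,  # "Rep Pen"; "Rep Pen Range" has no direct equivalent here
    max_new_tokens=512,      # the suggested response length
)
print(tokenizer.decode(output[0]))
```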

# Recipe
This model was made in three stages, along with many experimental stages which will be skipped for brevity. The first was internally referred to as EstopiaV9, which had a high degree of instruction following and creativity in its responses; those responses were generally shorter and a little more restricted in scope, but conveyed nuance better.
```yaml
merge_method: task_arithmetic
base_model: TheBloke/Llama-2-13B-fp16
models:
  - model: TheBloke/Llama-2-13B-fp16
  - model: Undi95/UtopiaXL-13B
    parameters:
      weight: 1.0
  - model: Doctor-Shotgun/cat-v1.0-13b
    parameters:
      weight: 0.02
  - model: PygmalionAI/mythalion-13b
    parameters:
      weight: 0.10
  - model: Undi95/Emerhyst-13B
    parameters:
      weight: 0.05
  - model: CalderaAI/13B-Thorns-l2
    parameters:
      weight: 0.05
  - model: KoboldAI/LLaMA2-13B-Tiefighter
    parameters:
      weight: 0.20
dtype: float16
```
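
Conceptually, a `task_arithmetic` merge like the one above adds each model's weighted parameter delta from the base model back onto the base. A toy sketch of the idea (not mergekit's actual implementation):

```python
import torch

# Toy illustration of task arithmetic: merged = base + sum_i(w_i * (model_i - base)).
# Real merges apply this per tensor across full checkpoints; this uses one toy tensor.
base = {"w": torch.randn(4, 4)}
finetunes = [{"w": base["w"] + 0.1 * torch.randn(4, 4)} for _ in range(2)]
weights = [1.0, 0.2]  # analogous to the per-model `weight` values in the YAML

merged = {
    name: base[name]
    + sum(w * (ft[name] - base[name]) for ft, w in zip(finetunes, weights))
    for name in base
}
```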

The second part of the merge was known as EstopiaV13. This produced responses which were long, but it tended to write beyond good stopping points for further instructions to be added, as it leant heavily on novel-style prose. It did, however, benefit from a greater degree of the neutrality described above, and retained many of the detail-tracking abilities of V9.
```yaml
merge_method: task_arithmetic
base_model: TheBloke/Llama-2-13B-fp16
models:
  - model: TheBloke/Llama-2-13B-fp16
  - model: Undi95/UtopiaXL-13B
    parameters:
      weight: 1.0
  - model: Doctor-Shotgun/cat-v1.0-13b
    parameters:
      weight: 0.01
  - model: chargoddard/rpguild-chatml-13b
    parameters:
      weight: 0.02
  - model: PygmalionAI/mythalion-13b
    parameters:
      weight: 0.08
  - model: CalderaAI/13B-Thorns-l2
    parameters:
      weight: 0.02
  - model: KoboldAI/LLaMA2-13B-Tiefighter
    parameters:
      weight: 0.20
dtype: float16
```

The third step was a merge between the two to retain the benefits of both as much as possible. This was performed using the DARE merging technique.
```yaml
# task-arithmetic style
models:
  - model: EstopiaV9
    parameters:
      weight: 1
      density: 1
  - model: EstopiaV13
    parameters:
      weight: 0.05
      density: 0.30
merge_method: dare_ties
base_model: TheBloke/Llama-2-13B-fp16
parameters:
  int8_mask: true
dtype: bfloat16
```
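
DARE operates on the same per-model deltas, but randomly drops a fraction `1 - density` of each delta's entries and rescales the survivors by `1 / density` before combining them (`dare_ties` additionally applies TIES-style sign election). A toy sketch of the drop-and-rescale step, again not mergekit's actual code:

```python
import torch

def dare_delta(delta: torch.Tensor, density: float) -> torch.Tensor:
    # Keep each entry with probability `density`, rescaling by 1/density so the
    # expected value of the delta is preserved.
    mask = (torch.rand_like(delta) < density).to(delta.dtype)
    return delta * mask / density

base = torch.randn(4, 4)
v9 = base + 0.1 * torch.randn(4, 4)
v13 = base + 0.1 * torch.randn(4, 4)
# Mirrors the weights/densities in the YAML above; density 1 keeps V9's delta intact.
merged = base + 1.0 * dare_delta(v9 - base, 1.0) + 0.05 * dare_delta(v13 - base, 0.30)
```

Configs like these are typically executed with mergekit's `mergekit-yaml` command, for example `mergekit-yaml config.yml ./output-model-directory` (assuming mergekit is installed).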

# Model selection
- Undi95/UtopiaXL-13B
  - A solid all-around base model, with the ability to write longer responses and generally good retention of detail.
- Doctor-Shotgun/cat-v1.0-13b
  - A medical-focused model, added to focus a little more on human responses, such as for psychology.
- PygmalionAI/mythalion-13b
  - A roleplay- and instruct-focused model, which improves attentiveness to character card details and the variety of responses.
- Undi95/Emerhyst-13B
  - A roleplay but also longer-form response model. It can be quite variable, but helps add to the depth and the range of options the AI can respond with during narratives.
- CalderaAI/13B-Thorns-l2
  - A neutral and very attentive model. It is good at chat and following instructions, which benefits those modes.
- KoboldAI/LLaMA2-13B-Tiefighter
  - A solid all-around model, focusing on story writing and adventure modes. It provides all-around benefits to creativity and prose, along with adventure mode support.
- chargoddard/rpguild-chatml-13b
  - A roleplay model, which introduces new data and also improves detail retention in longer narratives.

# Notes
- Because of the differing models inside, this model will not have perfect end-of-sequence tokens - a problem many merges share. While attempts have been made to minimise this, you may occasionally get oddly behaving tokens; a quick manual edit should resolve this once, and the model should then pick up on the correction.
- Chat is one of the least tested areas for this model. It works fairly well, but it can be quite character card dependent.
- This is a narrative- and prose-focused model. As a result, it can and will talk for you if guided to do so (such as by asking it to act as a co-author or narrator) within instructions or other contexts. This can mostly be mitigated by adding instructions to limit it, or by using chat mode instead.

# Future areas
- Llava
  - Some success has been had with merging the LLaVA LoRA onto this model. While no in-depth testing has been performed, more narrative responses based on images could be obtained - though there were drawbacks in the form of degraded performance in other areas, and hallucinations due to the fictional focus of this model.
- Stheno
  - A merge from Sao with similar promise. Some merge attempts have been made between the two and were promising, but not entirely consistent at the moment. With some refinement, this could produce an even stronger model.
- DynamicFactor
  - All the models used in this merge are based on Llama 2, but a DARE merge with DynamicFactor (an attempted refinement of Llama 2) showed a beneficial improvement to the instruction abilities of the model, along with lengthier responses. It lost a little of the variety of responses, so perhaps if a balance could be found, the instruction abilities and reasoning could be improved even further.