---
base_model: LLM360/AmberChat
datasets:
- WizardLM/WizardLM_evol_instruct_V2_196k
- icybee/share_gpt_90k_v1
inference: false
language:
- en
library_name: transformers
license: apache-2.0
model_creator: LLM360
model_name: AmberChat
model_type: amber
pipeline_tag: text-generation
prompt_template: 'A chat between a curious user and an artificial intelligence assistant.
  The assistant gives helpful, detailed, and polite answers to the user''s questions.
  USER: {prompt} ASSISTANT:

  '
quantized_by: TheBloke
tags:
- nlp
- llm
widget:
- example_title: example 1
  output:
    text: "Mounting a TV to drywall safely is crucial to ensure the longevity of your\
      \ TV and the wall. Here's a step-by-step guide on how to do it safely:\n 1.\
      \ Choose the right mount: Choose a mount that is compatible with your TV and\
      \ the wall. Make sure the mount is rated for drywall and not just wood or concrete.\n\
      \ 2. Prepare the wall: Before mounting the TV, make sure the wall is clean and\
      \ free from any debris or paint. Use a stud finder to locate the studs in the\
      \ wall.\n 3. Install the mount: Follow the manufacturer's instructions for mounting\
      \ the TV to the wall. Use appropriate hardware and tools to secure the mount\
      \ to the wall.\n 4. Level the TV: Level the TV on the mount using a spirit level\
      \ or a leveling kit provided by the mount manufacturer.\n 5. Attach the TV to\
      \ the mount: Attach the TV to the mount using the appropriate hardware and tools.\
      \ Tighten the bolts and screws to ensure the TV is securely attached.\n 6. Connect\
      \ the cables: Connect the TV cables to the appropriate ports on the back of\
      \ the TV and the mount.\n 7. Test the mount: Test the mount to ensure it's secure\
      \ and stable. Adjust the mount as needed to ensure the TV is level and secure.\n\
      \ Mounting a TV to drywall safely is crucial to avoid damaging the wall or the\
      \ TV. Follow these steps carefully and use appropriate tools and hardware to\
      \ ensure a secure and stable installation."
  text: How do I mount a tv to drywall safely?
- example_title: example 2
  output:
    text: The adjective that can be used to describe the opposite of calm is "anxious"
      or "stressed." So, from happy to sad, we can say that happy is to sad as calm
      is to anxious or stressed.
  text: Happy is to sad as calm is to _.
---
<!-- markdownlint-disable MD041 -->

<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
    <div style="display: flex; flex-direction: column; align-items: flex-start;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
    </div>
    <div style="display: flex; flex-direction: column; align-items: flex-end;">
        <p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
    </div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->

# AmberChat - GGUF
- Model creator: [LLM360](https://huggingface.co/LLM360)
- Original model: [AmberChat](https://huggingface.co/LLM360/AmberChat)

<!-- description start -->
## Description

This repo contains GGUF format model files for [LLM360's AmberChat](https://huggingface.co/LLM360/AmberChat).

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF

GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.

Here is an incomplete list of clients and libraries that are known to support GGUF:

* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source locally running GUI, supporting Windows, Linux and macOS with full GPU acceleration.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. A Linux version is available, in beta as of 27/11/2023.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server. Note: as of the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.

<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available

* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/AmberChat-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/AmberChat-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/AmberChat-GGUF)
* [LLM360's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/LLM360/AmberChat)
<!-- repositories-available end -->

<!-- prompt-template start -->
## Prompt template: Vicuna

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:

```

<!-- prompt-template end -->
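
For example, filling the template from Python is a simple `str.format()` call. This is just an illustration; the `build_prompt` helper below is not part of any library:

```python
# Minimal sketch of filling the Vicuna-style template; build_prompt is a
# hypothetical helper, shown only for illustration.
VICUNA_TEMPLATE = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: {prompt} ASSISTANT:"
)

def build_prompt(prompt: str) -> str:
    return VICUNA_TEMPLATE.format(prompt=prompt)

print(build_prompt("How do I mount a tv to drywall safely?"))
```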


<!-- compatibility_gguf start -->
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th 2023 onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)

They are also compatible with many third party UIs and libraries - please see the list at the top of this README.

## Explanation of quantisation methods

<details>
  <summary>Click to see details</summary>

The new methods available are:

* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K, resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw

Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->

<!-- README_GGUF.md-provided-files start -->
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [amberchat.Q2_K.gguf](https://huggingface.co/TheBloke/AmberChat-GGUF/blob/main/amberchat.Q2_K.gguf) | Q2_K | 2 | 2.83 GB| 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
| [amberchat.Q3_K_S.gguf](https://huggingface.co/TheBloke/AmberChat-GGUF/blob/main/amberchat.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| 5.45 GB | very small, high quality loss |
| [amberchat.Q3_K_M.gguf](https://huggingface.co/TheBloke/AmberChat-GGUF/blob/main/amberchat.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| 5.80 GB | very small, high quality loss |
| [amberchat.Q3_K_L.gguf](https://huggingface.co/TheBloke/AmberChat-GGUF/blob/main/amberchat.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| 6.10 GB | small, substantial quality loss |
| [amberchat.Q4_0.gguf](https://huggingface.co/TheBloke/AmberChat-GGUF/blob/main/amberchat.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [amberchat.Q4_K_S.gguf](https://huggingface.co/TheBloke/AmberChat-GGUF/blob/main/amberchat.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| 6.36 GB | small, greater quality loss |
| [amberchat.Q4_K_M.gguf](https://huggingface.co/TheBloke/AmberChat-GGUF/blob/main/amberchat.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| 6.58 GB | medium, balanced quality - recommended |
| [amberchat.Q5_0.gguf](https://huggingface.co/TheBloke/AmberChat-GGUF/blob/main/amberchat.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [amberchat.Q5_K_S.gguf](https://huggingface.co/TheBloke/AmberChat-GGUF/blob/main/amberchat.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| 7.15 GB | large, low quality loss - recommended |
| [amberchat.Q5_K_M.gguf](https://huggingface.co/TheBloke/AmberChat-GGUF/blob/main/amberchat.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| 7.28 GB | large, very low quality loss - recommended |
| [amberchat.Q6_K.gguf](https://huggingface.co/TheBloke/AmberChat-GGUF/blob/main/amberchat.Q6_K.gguf) | Q6_K | 6 | 5.53 GB| 8.03 GB | very large, extremely low quality loss |
| [amberchat.Q8_0.gguf](https://huggingface.co/TheBloke/AmberChat-GGUF/blob/main/amberchat.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| 9.66 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.

<!-- README_GGUF.md-provided-files end -->

<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files

**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.

The following clients/libraries will automatically download models for you, providing a list of available models to choose from:

* LM Studio
* LoLLMS Web UI
* Faraday.dev

### In `text-generation-webui`

Under Download Model, you can enter the model repo: TheBloke/AmberChat-GGUF and below it, a specific filename to download, such as: amberchat.Q4_K_M.gguf.

Then click Download.

### On the command line, including multiple files at once

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

Then you can download any individual model file to the current directory, at high speed, with a command like this:

```shell
huggingface-cli download TheBloke/AmberChat-GGUF amberchat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
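
If you prefer to stay in Python, the same download can be done with the standard `huggingface_hub` API. A minimal sketch:

```python
# Sketch: download a single GGUF file from Python using huggingface_hub
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="TheBloke/AmberChat-GGUF",
    filename="amberchat.Q4_K_M.gguf",
    local_dir=".",                 # save into the current directory
    local_dir_use_symlinks=False,  # store the real file, not a cache symlink
)
print(model_path)
```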

<details>
  <summary>More advanced huggingface-cli download usage (click to read)</summary>

You can also download multiple files at once with a pattern:

```shell
huggingface-cli download TheBloke/AmberChat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/AmberChat-GGUF amberchat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->

<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command

Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.

```shell
./main -ngl 35 -m amberchat.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:"
```

Change `-ngl 35` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.

Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically. Note that longer sequence lengths require much more resources, so you may need to reduce this value.

If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`, as shown below.
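
For example, an interactive chat session might be started like this (a sketch; adjust `-ngl` and `-c` for your hardware):

```shell
# Interactive, instruction-following chat mode instead of a one-shot prompt
./main -ngl 35 -m amberchat.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -i -ins
```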

For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)

## How to run in `text-generation-webui`

Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).

## How to run from Python code

You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries. Note that at the time of writing (Nov 27th 2023), ctransformers has not been updated for some time and is not compatible with some recent models. Therefore I recommend you use llama-cpp-python.

### How to load this model in Python code, using llama-cpp-python

For full documentation, please see: [llama-cpp-python docs](https://abetlen.github.io/llama-cpp-python/).

#### First install the package

Run one of the following commands, according to your system:

```shell
# Base llama-cpp-python with no GPU acceleration
pip install llama-cpp-python
# With NVidia CUDA acceleration
CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
# Or with OpenBLAS acceleration
CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
# Or with CLBLast acceleration
CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
# Or with AMD ROCm GPU acceleration (Linux only)
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
# Or with Metal GPU acceleration for macOS systems only
CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python

# On Windows, to set the CMAKE_ARGS variable in PowerShell, follow this format; e.g. for NVidia CUDA:
$env:CMAKE_ARGS = "-DLLAMA_CUBLAS=on"
pip install llama-cpp-python
```

#### Simple llama-cpp-python example code

```python
from llama_cpp import Llama

# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = Llama(
  model_path="./amberchat.Q4_K_M.gguf",  # Download the model file first
  n_ctx=2048,      # The max sequence length to use - note that longer sequence lengths require much more resources
  n_threads=8,     # The number of CPU threads to use, tailor to your system and the resulting performance
  n_gpu_layers=35  # The number of layers to offload to GPU, if you have GPU acceleration available
)

# Simple inference example
output = llm(
  "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {prompt} ASSISTANT:", # Prompt
  max_tokens=512,  # Generate up to 512 tokens
  stop=["</s>"],   # Example stop token - not necessarily correct for this specific model! Please check before using.
  echo=True        # Whether to echo the prompt
)

# Chat Completion API

llm = Llama(model_path="./amberchat.Q4_K_M.gguf", chat_format="llama-2")  # Set chat_format according to the model you are using
llm.create_chat_completion(
    messages = [
        {"role": "system", "content": "You are a story writing assistant."},
        {
            "role": "user",
            "content": "Write a story about llamas."
        }
    ]
)
```
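
llama-cpp-python can also expose the model over an OpenAI-compatible HTTP API. As a sketch, assuming the `server` extra is installed (host and port use the library defaults):

```shell
# Serve the GGUF file via llama-cpp-python's OpenAI-compatible server
pip install 'llama-cpp-python[server]'
python3 -m llama_cpp.server --model ./amberchat.Q4_K_M.gguf --n_gpu_layers 35 --n_ctx 2048
```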

## How to use with LangChain

Here are guides on using llama-cpp-python and ctransformers with LangChain, followed by a short sketch:

* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
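
As a minimal sketch of the llama-cpp-python route (assuming the `langchain` package of this era; the parameter values are examples, not recommendations):

```python
# Sketch: using the GGUF file through LangChain's llama.cpp wrapper
from langchain.llms import LlamaCpp

llm = LlamaCpp(
    model_path="./amberchat.Q4_K_M.gguf",
    n_ctx=2048,       # context length
    n_gpu_layers=35,  # layers to offload to GPU; set to 0 for CPU-only
    temperature=0.7,
)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: How do I mount a tv to drywall safely? ASSISTANT:"
)
print(llm(prompt))
```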

<!-- README_GGUF.md-how-to-run end -->

<!-- footer start -->
<!-- 200823 -->
## Discord

For further support, and discussions on these models and AI in general, join us at:

[TheBloke AI's Discord server](https://discord.gg/theblokeai)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.

Donators will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Special thanks to**: Aemon Algiz.

**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros

Thank you to all my generous patrons and donators!

And thank you again to a16z for their generous grant.

<!-- footer end -->

<!-- original-model-card start -->
# Original model card: LLM360's AmberChat

# AmberChat

We present AmberChat, an instruction-following model finetuned from [LLM360/Amber](https://huggingface.co/LLM360/Amber).

## Model Description

- **Model type:** Language model with the same architecture as LLaMA-7B
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Resources for more information:**
  - [Metrics](https://github.com/LLM360/Analysis360)
  - [Fully processed Amber pretraining data](https://huggingface.co/datasets/LLM360/AmberDatasets)

# Loading AmberChat

```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("LLM360/AmberChat")
model = LlamaForCausalLM.from_pretrained("LLM360/AmberChat")

# template adapted from fastchat
template = "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.\n### Human: Got any creative ideas for a 10 year old’s birthday?\n### Assistant: Of course! Here are some creative ideas for a 10-year-old's birthday party:\n1. Treasure Hunt: Organize a treasure hunt in your backyard or nearby park. Create clues and riddles for the kids to solve, leading them to hidden treasures and surprises.\n2. Science Party: Plan a science-themed party where kids can engage in fun and interactive experiments. You can set up different stations with activities like making slime, erupting volcanoes, or creating simple chemical reactions.\n3. Outdoor Movie Night: Set up a backyard movie night with a projector and a large screen or white sheet. Create a cozy seating area with blankets and pillows, and serve popcorn and snacks while the kids enjoy a favorite movie under the stars.\n4. DIY Crafts Party: Arrange a craft party where kids can unleash their creativity. Provide a variety of craft supplies like beads, paints, and fabrics, and let them create their own unique masterpieces to take home as party favors.\n5. Sports Olympics: Host a mini Olympics event with various sports and games. Set up different stations for activities like sack races, relay races, basketball shooting, and obstacle courses. Give out medals or certificates to the participants.\n6. Cooking Party: Have a cooking-themed party where the kids can prepare their own mini pizzas, cupcakes, or cookies. Provide toppings, frosting, and decorating supplies, and let them get hands-on in the kitchen.\n7. Superhero Training Camp: Create a superhero-themed party where the kids can engage in fun training activities. Set up an obstacle course, have them design their own superhero capes or masks, and organize superhero-themed games and challenges.\n8. Outdoor Adventure: Plan an outdoor adventure party at a local park or nature reserve. Arrange activities like hiking, nature scavenger hunts, or a picnic with games. Encourage exploration and appreciation for the outdoors.\nRemember to tailor the activities to the birthday child's interests and preferences. Have a great celebration!\n### Human: {prompt}\n### Assistant:"

prompt = "How do I mount a tv to drywall safely?"

input_str = template.format(prompt=prompt)
input_ids = tokenizer(input_str, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=1000)
print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:-1])[0].strip())
```
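
On a GPU machine you may prefer to load the fp16 weights directly onto the GPU. A minimal sketch (assuming `accelerate` is installed so `device_map` is available):

```python
# Sketch: load AmberChat in fp16 across available GPUs (requires `accelerate`)
import torch
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained(
    "LLM360/AmberChat",
    torch_dtype=torch.float16,  # halve memory versus fp32
    device_map="auto",          # let accelerate place layers on available devices
)
```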

Alternatively, you may use [FastChat](https://github.com/lm-sys/FastChat):
```bash
python3 -m fastchat.serve.cli --model-path LLM360/AmberChat
```

# AmberChat Finetuning Details

## DataMix
| Subset | Number of rows | License |
| ----------- | ----------- | ----------- |
| WizardLM/WizardLM_evol_instruct_V2_196k | 143k | |
| icybee/share_gpt_90k_v1 | 90k | cc0-1.0 |
| Total | 233k | |

## Hyperparameters
| Hyperparameter | Value |
| ----------- | ----------- |
| Total Parameters | 6.7B |
| Hidden Size | 4096 |
| Intermediate Size (MLPs) | 11008 |
| Number of Attention Heads | 32 |
| Number of Hidden Layers | 32 |
| RMSNorm ɛ | 1e-6 |
| Max Seq Length | 2048 |
| Vocab Size | 32000 |

| Training Hyperparameter | Value |
| ----------- | ----------- |
| learning_rate | 2e-5 |
| num_train_epochs | 3 |
| per_device_train_batch_size | 2 |
| gradient_accumulation_steps | 16 |
| warmup_ratio | 0.04 |
| model_max_length | 2048 |
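
For reference, these training hyperparameters map directly onto Hugging Face `TrainingArguments`. The sketch below is illustrative only and is not the authors' actual training script; the output directory name is hypothetical:

```python
# Illustrative only: the AmberChat training hyperparameters expressed as
# Hugging Face TrainingArguments (not the authors' actual training code).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="amberchat-finetune",  # hypothetical output directory
    learning_rate=2e-5,
    num_train_epochs=3,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,
    warmup_ratio=0.04,
)
# model_max_length (2048) is a tokenizer setting, not a TrainingArguments field:
# tokenizer.model_max_length = 2048
```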

# Evaluation

| Model | MT-Bench |
|------------------------------------------------------|------------------------------------------------------------|
| LLM360/Amber (checkpoint 359) | 2.48750 |
| **LLM360/AmberChat** | **5.428125** |

# Citation

**BibTeX:**

```bibtex
@article{xxx,
  title={XXX},
  author={XXX},
  journal={XXX},
  year={2023}
}
```

<!-- original-model-card end -->