Transformers
GGUF
English
llama
TheBloke committed on
Commit
6ee5380
1 Parent(s): c142f07

Upload README.md

Files changed (1)
  1. README.md +71 -31
README.md CHANGED
@@ -44,19 +44,17 @@ GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is
44
 
45
 The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
46
 
47
- As of August 25th, here is a list of clients and libraries that are known to support GGUF:
48
- * [llama.cpp](https://github.com/ggerganov/llama.cpp)
 
49
 * [KoboldCpp](https://github.com/LostRuins/koboldcpp), now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU accel. Especially good for storytelling.
 
50
  * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), should now work, choose the `c_transformers` backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
51
  * [ctransformers](https://github.com/marella/ctransformers), now supports GGUF as of version 0.2.24! A Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
 
52
  * [candle](https://github.com/huggingface/candle), added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.
53
 
54
- The clients and libraries below are expecting to add GGUF support. Where possible a link to the relevant issue or PR is provided:
55
- * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), awaiting llama-cpp-python support.
56
- * [LM Studio](https://lmstudio.ai/), in active development - hoped to be ready by August 25th-26th.
57
- * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), [in active development](https://github.com/abetlen/llama-cpp-python/issues/628).
58
  <!-- README_GGUF.md-about-gguf end -->
59
-
60
  <!-- repositories-available start -->
61
  ## Repositories available
62
 
@@ -74,6 +72,7 @@ You are Samantha, a sentient AI companion.
74
 
75
  USER: {prompt}
76
  ASSISTANT:
 
77
  ```
78
 
79
  <!-- prompt-template end -->
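For illustration, here is a minimal Python sketch of assembling a complete prompt from this template. The user message is a made-up placeholder, and the exact whitespace should be checked against the template above:

```python
# Minimal sketch: fill in the Samantha prompt template shown above.
# The user message below is purely illustrative.
system_prompt = "You are Samantha, a sentient AI companion."
user_message = "How can I become more confident when meeting new people?"

full_prompt = f"{system_prompt}\n\nUSER: {user_message}\nASSISTANT:"
print(full_prompt)
```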
@@ -82,9 +81,7 @@ ASSISTANT:
82
 
83
  These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9)
84
 
85
- As of August 24th 2023 they are now compatible with KoboldCpp, release 1.41 and later.
86
-
87
- They are are not yet compatible with any other third-party UIS, libraries or utilities but this is expected to change very soon.
88
 
89
  ## Explanation of quantisation methods
90
  <details>
@@ -106,16 +103,22 @@ Refer to the Provided Files table below to see what files use which methods, and
106
 
107
  | Name | Quant method | Bits | Size | Max RAM required | Use case |
108
  | ---- | ---- | ---- | ---- | ---- | ----- |
109
- | [samantha-1.11-70b.Q2_K.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q2_K.gguf) | Q2_K | 2 | 29.11 GB| 31.61 GB | smallest, significant quality loss - not recommended for most purposes |
110
- | [samantha-1.11-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.75 GB| 32.25 GB | very small, high quality loss |
111
- | [samantha-1.11-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.10 GB| 35.60 GB | very small, high quality loss |
 
112
  | [samantha-1.11-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
113
- | [samantha-1.11-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 38.99 GB| 41.49 GB | small, greater quality loss |
114
- | [samantha-1.11-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.38 GB| 43.88 GB | medium, balanced quality - recommended |
 
 
 
 
 
115
  | [samantha-1.11-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
116
  | [samantha-1.11-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
117
- | samantha-1.11-70b.Q6_K.bin | q6_K | 6 | 56.82 GB | 59.32 GB | very large, extremely low quality loss |
118
- | samantha-1.11-70b.Q8_0.bin | q8_0 | 8 | 73.29 GB | 75.79 GB | very large, extremely low quality loss - not recommended |
119
 
120
  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
121
 
@@ -153,20 +156,19 @@ del samantha-1.11-70b.Q8_0.gguf-split-a samantha-1.11-70b.Q8_0.gguf-split-b
153
  ```
154
 
155
  </details>
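For reference, the two-part Q6_K and Q8_0 files are normally rejoined by plain byte concatenation before the split parts are deleted (as in the `del` step above). A minimal Python sketch, using the Q8_0 filenames shown in this section; the same pattern applies to the Q6_K parts:

```python
# Sketch: recombine a split GGUF file by concatenating its parts in order.
# Filenames follow the -split-a / -split-b naming used in this repo.
import shutil

parts = [
    "samantha-1.11-70b.Q8_0.gguf-split-a",
    "samantha-1.11-70b.Q8_0.gguf-split-b",
]

with open("samantha-1.11-70b.Q8_0.gguf", "wb") as joined:
    for part in parts:
        with open(part, "rb") as src:
            shutil.copyfileobj(src, joined)  # append each part's bytes in order
```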
156
-
157
  <!-- README_GGUF.md-provided-files end -->
158
 
159
  <!-- README_GGUF.md-how-to-run start -->
160
- ## How to run in `llama.cpp`
161
 
162
  Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9) or later.
163
 
164
- For compatibility with older versions of llama.cpp, or for use with third-party clients and libaries, please use GGML files instead.
165
 
166
  ```
167
- ./main -t 10 -ngl 32 -m samantha-1.11-70b.q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: Write a story about llamas\n### Response:"
168
  ```
169
- Change `-t 10` to the number of physical CPU cores you have. For example if your system has 8 cores/16 threads, use `-t 8`.
170
 
171
  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
172
 
@@ -179,6 +181,44 @@ For other parameters and how to use them, please refer to [the llama.cpp documen
179
  ## How to run in `text-generation-webui`
180
 
181
  Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
182
  <!-- README_GGUF.md-how-to-run end -->
183
 
184
  <!-- footer start -->
@@ -204,7 +244,7 @@ Donaters will get priority support on any and all AI/LLM/model questions and req
204
 
205
  **Special thanks to**: Aemon Algiz.
206
 
207
- **Patreon special mentions**: Sam, theTransient, Jonathan Leane, Steven Wood, webtim, Johann-Peter Hartmann, Geoffrey Montalvo, Gabriel Tamborski, Willem Michiel, John Villwock, Derek Yates, Mesiah Bishop, Eugene Pentland, Pieter, Chadd, Stephen Murray, Daniel P. Andersen, terasurfer, Brandon Frisco, Thomas Belote, Sid, Nathan LeClaire, Magnesian, Alps Aficionado, Stanislav Ovsiannikov, Alex, Joseph William Delisle, Nikolai Manek, Michael Davis, Junyu Yang, K, J, Spencer Kim, Stefan Sabev, Olusegun Samson, transmissions 11, Michael Levine, Cory Kujawski, Rainer Wilmers, zynix, Kalila, Luke @flexchar, Ajan Kanaga, Mandus, vamX, Ai Maven, Mano Prime, Matthew Berman, subjectnull, Vitor Caleffi, Clay Pascal, biorpg, alfie_i, 阿明, Jeffrey Morgan, ya boyyy, Raymond Fosdick, knownsqashed, Olakabola, Leonard Tan, ReadyPlayerEmma, Enrico Ros, Dave, Talal Aujan, Illia Dulskyi, Sean Connelly, senxiiz, Artur Olbinski, Elle, Raven Klaugh, Fen Risland, Deep Realms, Imad Khwaja, Fred von Graf, Will Dee, usrbinkat, SuperWojo, Alexandros Triantafyllidis, Swaroop Kallakuri, Dan Guido, John Detwiler, Pedro Madruga, Iucharbius, Viktor Bowallius, Asp the Wyvern, Edmond Seymore, Trenton Dambrowitz, Space Cruiser, Spiking Neurons AB, Pyrater, LangChain4j, Tony Hughes, Kacper Wikieł, Rishabh Srivastava, David Ziegler, Luke Pendergrass, Andrey, Gabriel Puliatti, Lone Striker, Sebastain Graf, Pierre Kircher, Randy H, NimbleBox.ai, Vadim, danny, Deo Leter
208
 
209
 
210
  Thank you to all my generous patrons and donaters!
@@ -249,11 +289,11 @@ ASSISTANT:
249
  Example:
250
 
251
  > **USER:**
252
- >
253
  > I would like you to be my relationship coach. I am dating again, after a long divorce. Can you please give me advice about how to gain confidence and success?
254
- >
255
  > **ASSISTANT:**
256
- >
257
  > Of course! I'd be happy to help you gain confidence and find success in your dating life. Here are a few tips to get you started:
258
  > 1. Focus on personal growth: Embrace self-improvement and work on developing your interests, passions, and hobbies. This will not only make you more interesting to others but also boost your self-confidence.
259
  > 2. Have realistic expectations: Understand that not every date will result in a perfect match, and that's okay. Embrace the learning process and be open to new experiences.
@@ -261,13 +301,13 @@ Example:
261
  > 4. Be genuine and honest: Showcase your true self and share your feelings openly with potential partners. Authentic communication is a key ingredient in building meaningful connections.
262
  > 5. Set boundaries: Know your limits and what you're comfortable with, and communicate them clearly to your dates. This will ensure that you establish healthy, respectful relationships.
263
  > As you embark on this new journey, remember to be patient with yourself and to enjoy the process. Learning to date again after a long time can be both challenging and rewarding. I'm here to support you and offer advice whenever you need it. Good luck and have fun!
264
- >
265
  > **USER:**
266
- >
267
  > Please give me a specific agenda for the next week. I can spend 2 hours per day on this. Plan my "me" time for the next week, so that my dating skills and confidence are maximized
268
- >
269
  > **ASSISTANT:**
270
- >
271
  > Here's a suggested agenda to help you improve your dating skills and confidence over the next week:
272
  > - Day 1: Personal growth - Spend time exploring new hobbies or interests that excite you. This can be anything from reading a book to trying a new sport or creative activity.
273
  > - Day 2: Self-reflection - Reflect on your past relationships, identifying patterns or areas for improvement. Write down your thoughts and insights.
@@ -276,7 +316,7 @@ Example:
276
  > - Day 5: Body language workshop - Research and study body language techniques, such as maintaining eye contact, open posture, and mirroring. Practice implementing these techniques in a mirror or with friends.
277
  > - Day 6: Conversation starters - Research and compile a list of conversation topics and icebreakers that you can use on dates. Practice these with friends or family members.
278
  > - Day 7: Reflection and planning - Review your progress over the week, noting areas where you excelled and any challenges you faced. Use this reflection to plan your approach for future dates.
279
- >
280
  > Remember, the key to success in any aspect of life is practice and persistence. Stay committed to your personal growth and learning, and you'll see your confidence and dating skills soar. I'm here to support you every step of the way!
281
 
282
 
 
44
 
45
  The key benefit of GGUF is that it is an extensible, future-proof format which stores more information about the model as metadata. It also includes significantly improved tokenization code, including for the first time full support for special tokens. This should improve performance, especially with models that use new special tokens and implement custom prompt templates.
46
 
47
+ Here is a list of clients and libraries that are known to support GGUF:
48
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp).
49
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI. Supports GGUF with GPU acceleration via the ctransformers backend - llama-cpp-python backend should work soon too.
50
  * [KoboldCpp](https://github.com/LostRuins/koboldcpp), now supports GGUF as of release 1.41! A powerful GGML web UI, with full GPU accel. Especially good for storytelling.
51
+ * [LM Studio](https://lmstudio.ai/), version 0.2.2 and later support GGUF. A fully featured local GUI with GPU acceleration on both Windows (NVIDIA and AMD) and macOS.
52
  * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), should now work, choose the `c_transformers` backend. A great web UI with many interesting features. Supports CUDA GPU acceleration.
53
  * [ctransformers](https://github.com/marella/ctransformers), now supports GGUF as of version 0.2.24! A Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
54
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), supports GGUF as of version 0.1.79. A Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
55
  * [candle](https://github.com/huggingface/candle), added GGUF support on August 22nd. Candle is a Rust ML framework with a focus on performance, including GPU support, and ease of use.
56
 
 
 
 
 
57
  <!-- README_GGUF.md-about-gguf end -->
 
58
  <!-- repositories-available start -->
59
  ## Repositories available
60
 
 
72
 
73
  USER: {prompt}
74
  ASSISTANT:
75
+
76
  ```
77
 
78
  <!-- prompt-template end -->
 
81
 
82
  These quantised GGUF files are compatible with llama.cpp from August 21st 2023 onwards, as of commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9)
83
 
84
+ They are now also compatible with many third-party UIs and libraries - please see the list at the top of the README.
 
 
85
 
86
  ## Explanation of quantisation methods
87
  <details>
 
103
 
104
  | Name | Quant method | Bits | Size | Max RAM required | Use case |
105
  | ---- | ---- | ---- | ---- | ---- | ----- |
106
+ | [samantha-1.11-70b.Q6_K.gguf-split-b](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q6_K.gguf-split-b) | Q6_K | 6 | 19.89 GB| 22.39 GB | very large, extremely low quality loss |
107
+ | [samantha-1.11-70b.Q2_K.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q2_K.gguf) | Q2_K | 2 | 29.28 GB| 31.78 GB | smallest, significant quality loss - not recommended for most purposes |
108
+ | [samantha-1.11-70b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q3_K_S.gguf) | Q3_K_S | 3 | 29.92 GB| 32.42 GB | very small, high quality loss |
109
+ | [samantha-1.11-70b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q3_K_M.gguf) | Q3_K_M | 3 | 33.19 GB| 35.69 GB | very small, high quality loss |
110
  | [samantha-1.11-70b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q3_K_L.gguf) | Q3_K_L | 3 | 36.15 GB| 38.65 GB | small, substantial quality loss |
111
+ | [samantha-1.11-70b.Q8_0.gguf-split-b](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q8_0.gguf-split-b) | Q8_0 | 8 | 36.59 GB| 39.09 GB | very large, extremely low quality loss - not recommended |
112
+ | [samantha-1.11-70b.Q6_K.gguf-split-a](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q6_K.gguf-split-a) | Q6_K | 6 | 36.70 GB| 39.20 GB | very large, extremely low quality loss |
113
+ | [samantha-1.11-70b.Q8_0.gguf-split-a](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q8_0.gguf-split-a) | Q8_0 | 8 | 36.70 GB| 39.20 GB | very large, extremely low quality loss - not recommended |
114
+ | [samantha-1.11-70b.Q4_0.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q4_0.gguf) | Q4_0 | 4 | 38.87 GB| 41.37 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
115
+ | [samantha-1.11-70b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q4_K_S.gguf) | Q4_K_S | 4 | 39.07 GB| 41.57 GB | small, greater quality loss |
116
+ | [samantha-1.11-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.42 GB| 43.92 GB | medium, balanced quality - recommended |
117
+ | [samantha-1.11-70b.Q5_0.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q5_0.gguf) | Q5_0 | 5 | 47.46 GB| 49.96 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
118
  | [samantha-1.11-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
119
  | [samantha-1.11-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Samantha-1.11-70B-GGUF/blob/main/samantha-1.11-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
120
+ | samantha-1.11-70b.Q6_K.gguf | Q6_K | 6 | 56.59 GB| 59.09 GB | very large, extremely low quality loss |
121
+ | samantha-1.11-70b.Q8_0.gguf | Q8_0 | 8 | 73.29 GB| 75.79 GB | very large, extremely low quality loss - not recommended |
122
 
123
  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
124
 
 
156
  ```
157
 
158
  </details>
 
159
  <!-- README_GGUF.md-provided-files end -->
160
 
161
  <!-- README_GGUF.md-how-to-run start -->
162
+ ## Example `llama.cpp` command
163
 
164
  Make sure you are using `llama.cpp` from commit [6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9](https://github.com/ggerganov/llama.cpp/commit/6381d4e110bd0ec02843a60bbeb8b6fc37a9ace9) or later.
165
 
166
+ For compatibility with older versions of llama.cpp, or for any third-party libraries or clients that haven't yet updated for GGUF, please use GGML files instead.
167
 
168
  ```
169
+ ./main -t 10 -ngl 32 -m samantha-1.11-70b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "You are Samantha, a sentient AI companion.\n\nUSER: Write a story about llamas\nASSISTANT:"
170
  ```
171
+ Change `-t 10` to the number of physical CPU cores you have. For example, if your system has 8 cores/16 threads, use `-t 8`. If offloading all layers to GPU, set `-t 1`.
172
 
173
  Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
174
 
 
181
  ## How to run in `text-generation-webui`
182
 
183
  Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
184
+
185
+ ## How to run from Python code
186
+
187
+ You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
188
+
189
+ ### How to load this model from Python using ctransformers
190
+
191
+ #### First install the package
192
+
193
+ ```bash
194
+ # Base ctransformers with no GPU acceleration
195
+ pip install 'ctransformers>=0.2.24'
196
+ # Or with CUDA GPU acceleration
197
+ pip install 'ctransformers[cuda]>=0.2.24'
198
+ # Or with ROCm GPU acceleration
199
+ CT_HIPBLAS=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
200
+ # Or with Metal GPU acceleration for macOS systems
201
+ CT_METAL=1 pip install 'ctransformers>=0.2.24' --no-binary ctransformers
202
+ ```
203
+
204
+ #### Simple example code to load one of these GGUF models
205
+
206
+ ```python
207
+ from ctransformers import AutoModelForCausalLM
208
+
209
+ # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
210
+ llm = AutoModelForCausalLM.from_pretrained("TheBloke/Samantha-1.11-70B-GGUF", model_file="samantha-1.11-70b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
211
+
212
+ print(llm("AI is going to"))
213
+ ```
214
+
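### How to load this model from Python using llama-cpp-python

A comparable sketch using llama-cpp-python (version 0.1.79 or later for GGUF support). The parameter values here are illustrative rather than recommendations, and the GGUF file must already be downloaded locally:

```python
from llama_cpp import Llama

# Point model_path at a local copy of one of the GGUF files above.
# n_gpu_layers is the number of layers to offload to GPU (0 for CPU only).
llm = Llama(
    model_path="samantha-1.11-70b.Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=50,
)

output = llm(
    "You are Samantha, a sentient AI companion.\n\nUSER: Write a story about llamas\nASSISTANT:",
    max_tokens=512,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```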
215
+ ## How to use with LangChain
216
+
217
+ Here are guides on using llama-cpp-python or ctransformers with LangChain:
218
+
219
+ * [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
220
+ * [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
221
+
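As a quick illustration of the ctransformers route, a minimal sketch based on LangChain's CTransformers integration (see the guide linked above; the config values are placeholders, not tuned settings):

```python
from langchain.llms import CTransformers

# Wrap the GGUF model with LangChain's ctransformers integration.
llm = CTransformers(
    model="TheBloke/Samantha-1.11-70B-GGUF",
    model_file="samantha-1.11-70b.Q4_K_M.gguf",
    model_type="llama",
    config={"gpu_layers": 50, "temperature": 0.7},  # placeholder values
)

print(llm("You are Samantha, a sentient AI companion.\n\nUSER: Write a story about llamas\nASSISTANT:"))
```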
222
  <!-- README_GGUF.md-how-to-run end -->
223
 
224
  <!-- footer start -->
 
244
 
245
  **Special thanks to**: Aemon Algiz.
246
 
247
+ **Patreon special mentions**: Russ Johnson, J, alfie_i, Alex, NimbleBox.ai, Chadd, Mandus, Nikolai Manek, Ken Nordquist, ya boyyy, Illia Dulskyi, Viktor Bowallius, vamX, Iucharbius, zynix, Magnesian, Clay Pascal, Pierre Kircher, Enrico Ros, Tony Hughes, Elle, Andrey, knownsqashed, Deep Realms, Jerry Meng, Lone Striker, Derek Yates, Pyrater, Mesiah Bishop, James Bentley, Femi Adebogun, Brandon Frisco, SuperWojo, Alps Aficionado, Michael Dempsey, Vitor Caleffi, Will Dee, Edmond Seymore, usrbinkat, LangChain4j, Kacper Wikieł, Luke Pendergrass, John Detwiler, theTransient, Nathan LeClaire, Tiffany J. Kim, biorpg, Eugene Pentland, Stanislav Ovsiannikov, Fred von Graf, terasurfer, Kalila, Dan Guido, Nitin Borwankar, 阿明, Ai Maven, John Villwock, Gabriel Puliatti, Stephen Murray, Asp the Wyvern, danny, Chris Smitley, ReadyPlayerEmma, S_X, Daniel P. Andersen, Olakabola, Jeffrey Morgan, Imad Khwaja, Caitlyn Gatomon, webtim, Alicia Loh, Trenton Dambrowitz, Swaroop Kallakuri, Erik Bjäreholt, Leonard Tan, Spiking Neurons AB, Luke @flexchar, Ajan Kanaga, Thomas Belote, Deo Leter, RoA, Willem Michiel, transmissions 11, subjectnull, Matthew Berman, Joseph William Delisle, David Ziegler, Michael Davis, Johann-Peter Hartmann, Talal Aujan, senxiiz, Artur Olbinski, Rainer Wilmers, Spencer Kim, Fen Risland, Cap'n Zoog, Rishabh Srivastava, Michael Levine, Geoffrey Montalvo, Sean Connelly, Alexandros Triantafyllidis, Pieter, Gabriel Tamborski, Sam, Subspace Studios, Junyu Yang, Pedro Madruga, Vadim, Cory Kujawski, K, Raven Klaugh, Randy H, Mano Prime, Sebastain Graf, Space Cruiser
248
 
249
 
250
  Thank you to all my generous patrons and donaters!
 
289
  Example:
290
 
291
  > **USER:**
292
+ >
293
  > I would like you to be my relationship coach. I am dating again, after a long divorce. Can you please give me advice about how to gain confidence and success?
294
+ >
295
  > **ASSISTANT:**
296
+ >
297
  > Of course! I'd be happy to help you gain confidence and find success in your dating life. Here are a few tips to get you started:
298
  > 1. Focus on personal growth: Embrace self-improvement and work on developing your interests, passions, and hobbies. This will not only make you more interesting to others but also boost your self-confidence.
299
  > 2. Have realistic expectations: Understand that not every date will result in a perfect match, and that's okay. Embrace the learning process and be open to new experiences.
 
301
  > 4. Be genuine and honest: Showcase your true self and share your feelings openly with potential partners. Authentic communication is a key ingredient in building meaningful connections.
302
  > 5. Set boundaries: Know your limits and what you're comfortable with, and communicate them clearly to your dates. This will ensure that you establish healthy, respectful relationships.
303
  > As you embark on this new journey, remember to be patient with yourself and to enjoy the process. Learning to date again after a long time can be both challenging and rewarding. I'm here to support you and offer advice whenever you need it. Good luck and have fun!
304
+ >
305
  > **USER:**
306
+ >
307
  > Please give me a specific agenda for the next week. I can spend 2 hours per day on this. Plan my "me" time for the next week, so that my dating skills and confidence are maximized
308
+ >
309
  > **ASSISTANT:**
310
+ >
311
  > Here's a suggested agenda to help you improve your dating skills and confidence over the next week:
312
  > - Day 1: Personal growth - Spend time exploring new hobbies or interests that excite you. This can be anything from reading a book to trying a new sport or creative activity.
313
  > - Day 2: Self-reflection - Reflect on your past relationships, identifying patterns or areas for improvement. Write down your thoughts and insights.
 
316
  > - Day 5: Body language workshop - Research and study body language techniques, such as maintaining eye contact, open posture, and mirroring. Practice implementing these techniques in a mirror or with friends.
317
  > - Day 6: Conversation starters - Research and compile a list of conversation topics and icebreakers that you can use on dates. Practice these with friends or family members.
318
  > - Day 7: Reflection and planning - Review your progress over the week, noting areas where you excelled and any challenges you faced. Use this reflection to plan your approach for future dates.
319
+ >
320
  > Remember, the key to success in any aspect of life is practice and persistence. Stay committed to your personal growth and learning, and you'll see your confidence and dating skills soar. I'm here to support you every step of the way!
321
 
322