TheLastBen committed
Commit 56fe89d
1 parent: b4a38c5

Rename Notebooks/b to Notebooks/Fast-Dreambooth-v1.5.ipynb

Files changed (2):
  1. Notebooks/Fast-Dreambooth-v1.5.ipynb +400 -0
  2. Notebooks/b +0 -0
Notebooks/Fast-Dreambooth-v1.5.ipynb ADDED
@@ -0,0 +1,400 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "494d5ce4-5843-4d70-ae96-c1983e21b6e8",
+ "metadata": {},
+ "source": [
+ "## Dreambooth v1.5 Paperspace Notebook From https://github.com/TheLastBen/fast-stable-diffusion, if you encounter any issues, feel free to discuss them. [Support](https://ko-fi.com/thelastben)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8afdca63-eff3-4a9d-b4d9-127c0f028033",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "# Dependencies"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "be74b2d5-da96-4bf4-ae82-4fe4b8abc04c",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# Install the dependencies\n",
+ "\n",
+ "force_reinstall= False\n",
+ "\n",
+ "# Set to True only if you want to install the dependencies again.\n",
+ "\n",
+ "\n",
+ "#--------------------\n",
+ "with open('/dev/null', 'w') as devnull:import requests, os, time, importlib;open('/notebooks/mainpaperspacev1.py', 'wb').write(requests.get('https://huggingface.co/datasets/TheLastBen/PPS/raw/main/Scripts/mainpaperspacev1.py').content); os.chdir('/notebooks');time.sleep(3);import mainpaperspacev1;importlib.reload(mainpaperspacev1);from mainpaperspacev1 import *;Deps(force_reinstall)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7a4ef4a2-6863-4603-9254-a1e2a547ee38",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "# Download the model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a1ba734e-515b-4761-8c88-ef7f165d7971",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# Leave everything EMPTY to use the original model\n",
+ "\n",
+ "Path_to_HuggingFace= \"\"\n",
+ "\n",
+ "# Load and finetune a model from Hugging Face, use the format \"profile/model\" like: runwayml/stable-diffusion-v1-5\n",
+ "\n",
+ "\n",
+ "CKPT_Path = \"\"\n",
+ "\n",
+ "# Load a CKPT model from the storage.\n",
+ "\n",
+ "\n",
+ "CKPT_Link = \"\"\n",
+ "\n",
+ "# A direct CKPT link, a Hugging Face CKPT link, or a shared CKPT from gdrive.\n",
+ "\n",
+ "\n",
+ "#----------------\n",
+ "MODEL_NAME=dl(Path_to_HuggingFace, CKPT_Path, CKPT_Link)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4c6c4932-e614-4f5e-8d4a-4feca5ce54f5",
+ "metadata": {},
+ "source": [
+ "# Create/Load a Session"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b6595c37-8ad2-45ff-a055-fe58c6663d2f",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "Session_Name = \"\"\n",
+ "\n",
+ "# Enter the session name; if it exists, it will be loaded, otherwise a new session will be created.\n",
+ "\n",
+ "\n",
+ "Session_Link_optional = \"\"\n",
+ "\n",
+ "# Import a session from another gdrive; the shared gdrive link must point to the specific session's folder that contains the trained CKPT (remove any intermediary CKPTs first).\n",
+ "\n",
+ "\n",
+ "#-----------------\n",
+ "[PT, WORKSPACE, Session_Name, INSTANCE_NAME, OUTPUT_DIR, SESSION_DIR, CONCEPT_DIR, INSTANCE_DIR, CAPTIONS_DIR, MDLPTH, MODEL_NAME, resume]=sess(Session_Name, Session_Link_optional, MODEL_NAME if 'MODEL_NAME' in locals() else \"\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5698de61-08d3-4d90-83ef-f882ed956d01",
+ "metadata": {},
+ "source": [
+ "# Instance Images"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "bc2f8f28-226e-45b8-8257-804bbb711f56",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "Remove_existing_instance_images= True\n",
+ "\n",
+ "# Set to False to keep the existing instance images, if any.\n",
+ "\n",
+ "\n",
+ "IMAGES_FOLDER_OPTIONAL=\"\"\n",
+ "\n",
+ "# If you prefer to specify the folder of the pictures directly instead of uploading, this will add the pictures to the existing (if any) instance images. Leave EMPTY to upload.\n",
+ "\n",
+ "\n",
+ "Smart_crop_images= True\n",
+ "\n",
+ "# Automatically crop your input images.\n",
+ "\n",
+ "\n",
+ "Crop_size = 512\n",
+ "\n",
+ "# Choices: \"512\", \"576\", \"640\", \"704\", \"768\", \"832\", \"896\", \"960\", \"1024\"\n",
+ "\n",
+ "# Check out this example for naming: https://i.imgur.com/d2lD3rz.jpeg\n",
+ "\n",
+ "\n",
+ "#-----------------\n",
+ "uplder(Remove_existing_instance_images, Smart_crop_images, Crop_size, IMAGES_FOLDER_OPTIONAL, INSTANCE_DIR, CAPTIONS_DIR, False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0e93924f-a6bf-45d5-aa77-915ad7385dcd",
+ "metadata": {},
+ "source": [
+ "# Manual Captioning"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c5dbcb29-b42f-4cfc-9be8-83355838d5a2",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# Open a tool to manually caption the instance images.\n",
+ "\n",
+ "#-----------------\n",
+ "caption(CAPTIONS_DIR, INSTANCE_DIR)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c90140c1-6c91-4cae-a222-e1a746957f95",
+ "metadata": {},
+ "source": [
+ "# Concept Images"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "55c27688-8601-4943-b61d-fc48b9ded067",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "Remove_existing_concept_images= True\n",
+ "\n",
+ "# Set to False to keep the existing concept images, if any.\n",
+ "\n",
+ "\n",
+ "IMAGES_FOLDER_OPTIONAL=\"\"\n",
+ "\n",
+ "# If you prefer to specify the folder of the pictures directly instead of uploading, this will add the pictures to the existing (if any) concept images. Leave EMPTY to upload.\n",
+ "\n",
+ "\n",
+ "#-----------------\n",
+ "uplder(Remove_existing_concept_images, True, 512, IMAGES_FOLDER_OPTIONAL, CONCEPT_DIR, CAPTIONS_DIR, True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2a4aa42a-fd68-41ad-9ba7-da99f834e2c1",
+ "metadata": {},
+ "source": [
+ "# Dreambooth"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "612d8335-b984-4f34-911d-5457ff98e507",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "Resume_Training = False\n",
+ "\n",
+ "# If you're not satisfied with the result, set to True and run the cell again to continue training the current model.\n",
+ "\n",
+ "\n",
+ "UNet_Training_Steps=1500\n",
+ "\n",
+ "UNet_Learning_Rate = \"4e-6\"\n",
+ "\n",
+ "# If you use 10 images, use 1500 steps; if you're not satisfied with the result, resume training for another 200 steps, and so on...\n",
+ "\n",
+ "\n",
+ "Text_Encoder_Training_Steps=300\n",
+ "\n",
+ "Text_Encoder_Learning_Rate= \"1e-6\"\n",
+ "\n",
+ "# 350-600 steps is enough for a small dataset; keep this number small to avoid overfitting. Set to 0 to disable, and set it to 0 before resuming training if it is already trained.\n",
+ "\n",
+ "\n",
+ "Text_Encoder_Concept_Training_Steps=0\n",
+ "\n",
+ "# Suitable for training a style/concept as it acts as regularization; with a minimum of 300 steps, 1 step/image is enough to train the concept(s). Set to 0 to disable; set both settings above to 0 to finetune only the text_encoder on the concept; set it to 0 before resuming training if it is already trained.\n",
+ "\n",
+ "\n",
+ "External_Captions= False\n",
+ "\n",
+ "# Get the captions from a text file for each instance image.\n",
+ "\n",
+ "\n",
+ "Style_Training=False\n",
+ "\n",
+ "# Further reduces overfitting; suitable when training a style or a general theme. Don't enable it at the beginning; enable it after training for at least 800 steps. (Has no effect when using External Captions)\n",
+ "\n",
+ "\n",
+ "Resolution = 512\n",
+ "\n",
+ "# Choices: \"512\", \"576\", \"640\", \"704\", \"768\", \"832\", \"896\", \"960\", \"1024\"\n",
+ "# Higher resolution = higher quality; make sure the instance images are cropped to this selected size (or larger).\n",
+ "\n",
+ "#---------------------------------------------------------------\n",
+ "\n",
+ "Save_Checkpoint_Every_n_Steps = False\n",
+ "\n",
+ "Save_Checkpoint_Every=500\n",
+ "\n",
+ "# Minimum 200 steps between each save.\n",
+ "\n",
+ "\n",
+ "Start_saving_from_the_step=500\n",
+ "\n",
+ "# Start saving intermediary checkpoints from this step.\n",
+ "\n",
+ "\n",
+ "#-----------------\n",
+ "resume=dbtrain(Resume_Training, UNet_Training_Steps, UNet_Learning_Rate, Text_Encoder_Training_Steps, Text_Encoder_Concept_Training_Steps, Text_Encoder_Learning_Rate, Style_Training, Resolution, MODEL_NAME, SESSION_DIR, INSTANCE_DIR, CONCEPT_DIR, CAPTIONS_DIR, External_Captions, INSTANCE_NAME, Session_Name, OUTPUT_DIR, PT, resume, Save_Checkpoint_Every_n_Steps, Start_saving_from_the_step, Save_Checkpoint_Every)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bf6f2232-60b3-41c5-bea6-b0dcc4aef937",
+ "metadata": {},
+ "source": [
+ "# Test the Trained Model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1263a084-b142-4e63-a0aa-2706673a4355",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "Previous_Session_Name=\"\"\n",
+ "\n",
+ "# Leave empty if you want to use the current trained model.\n",
+ "\n",
+ "\n",
+ "Custom_Path = \"\"\n",
+ "\n",
+ "# Input the full path to a desired model.\n",
+ "\n",
+ "\n",
+ "User = \"\"\n",
+ "\n",
+ "Password= \"\"\n",
+ "\n",
+ "# Add credentials to your Gradio interface (optional).\n",
+ "\n",
+ "\n",
+ "Use_localtunnel = False\n",
+ "\n",
+ "# If you have trouble with the Gradio server, use this one.\n",
+ "\n",
+ "\n",
+ "#-----------------\n",
+ "configf=test(Custom_Path, Previous_Session_Name, Session_Name, User, Password, Use_localtunnel) if 'Session_Name' in locals() else test(Custom_Path, Previous_Session_Name, \"\", User, Password, Use_localtunnel)\n",
+ "!python /notebooks/sd/stable-diffusion-webui/webui.py $configf"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "53ccbcaf-3319-44f5-967b-ecbdfa9d0e78",
+ "metadata": {},
+ "source": [
+ "# Upload The Trained Model to Hugging Face"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2c9cb205-d828-4e51-9943-f337bd410ea8",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Save it to your personal profile or contribute to the public [library of concepts](https://huggingface.co/sd-dreambooth-library)\n",
+ "\n",
+ "Name_of_your_concept = \"\"\n",
+ "\n",
+ "# Leave empty if you want to name your concept the same as the current session.\n",
+ "\n",
+ "\n",
+ "Save_concept_to = \"My_Profile\"\n",
+ "\n",
+ "# Choices: \"Public_Library\", \"My_Profile\".\n",
+ "\n",
+ "\n",
+ "hf_token_write = \"\"\n",
+ "\n",
+ "# Create a write access token here: https://huggingface.co/settings/tokens, go to \"New token\" -> Role: Write. A regular read token won't work here.\n",
+ "\n",
+ "\n",
+ "#---------------------------------\n",
+ "hf(Name_of_your_concept, Save_concept_to, hf_token_write, INSTANCE_NAME, OUTPUT_DIR, Session_Name, MDLPTH)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "881d80a3-4ebf-41bc-b68f-ac1cacb080f3",
+ "metadata": {},
+ "source": [
+ "# Free up space"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "7403744d-cc45-419f-88ac-5475fa0f7f45",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Display a list of sessions from which you can remove any session you don't need anymore.\n",
+ "\n",
+ "#-------------------------\n",
+ "clean()"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+ }
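
The dependencies cell of the added notebook compresses its whole bootstrap into a single line: download `mainpaperspacev1.py` from the Hugging Face dataset, import and reload it, then call `Deps(force_reinstall)`. Unpacked for readability, it does roughly the following (a sketch only: the `bootstrap` function and the injectable `fetch` parameter are ours; the URL, script path, and `Deps` call come from the notebook):

```python
import importlib
import os
import time

SCRIPT_URL = ("https://huggingface.co/datasets/TheLastBen/PPS/raw/main/"
              "Scripts/mainpaperspacev1.py")
SCRIPT_PATH = "/notebooks/mainpaperspacev1.py"


def bootstrap(force_reinstall=False, fetch=None):
    """Download the helper script, import it, and run its Deps() installer.

    `fetch` is injected so the network call can be stubbed out; by default
    it uses requests.get, as the notebook does.
    """
    if fetch is None:
        import requests  # only needed for the real download
        fetch = lambda url: requests.get(url).content
    # Write the fetched script next to the notebook, then import it fresh.
    with open(SCRIPT_PATH, "wb") as f:
        f.write(fetch(SCRIPT_URL))
    os.chdir("/notebooks")
    time.sleep(3)  # the notebook pauses briefly before importing
    import mainpaperspacev1
    importlib.reload(mainpaperspacev1)  # pick up a re-downloaded copy
    mainpaperspacev1.Deps(force_reinstall)


if __name__ == "__main__":
    bootstrap(force_reinstall=False)
```

Reloading via `importlib.reload` matters here because re-running the cell re-downloads the script; without the reload, Python would keep the previously imported module.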
Notebooks/b DELETED
File without changes
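
Two cells in the notebook guard cross-cell variables with `'NAME' in locals()` (for `MODEL_NAME` and `Session_Name`) so a cell still runs when an earlier cell was skipped. The pattern in isolation, using a hypothetical `pick_model` helper:

```python
def pick_model(namespace):
    """Return MODEL_NAME from a namespace dict, or "" when the cell that
    defines it was skipped. Hypothetical helper illustrating the notebook's
    `MODEL_NAME if 'MODEL_NAME' in locals() else ""` guard."""
    return namespace["MODEL_NAME"] if "MODEL_NAME" in namespace else ""


# When the earlier cell ran, the name is present and passed through:
MODEL_NAME = "runwayml/stable-diffusion-v1-5"
assert pick_model(locals()) == "runwayml/stable-diffusion-v1-5"

# When it was skipped, the guard degrades to "" instead of a NameError:
assert pick_model({}) == ""
```

The fallback value ("" here) is what the downstream function receives, so it must be something that function treats as "not provided".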