TheLastBen committed on
Commit 19314ac (1 parent: 56fe89d)

Create Fast-Dreambooth-v2.ipynb

Files changed (1)
  1. Notebooks/Fast-Dreambooth-v2.ipynb +409 -0
Notebooks/Fast-Dreambooth-v2.ipynb ADDED
@@ -0,0 +1,409 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "id": "494d5ce4-5843-4d70-ae96-c1983e21b6e8",
+ "metadata": {},
+ "source": [
+ "## Dreambooth v2 Paperspace Notebook from https://github.com/TheLastBen/fast-stable-diffusion, if you encounter any issues, feel free to discuss them. [Support](https://ko-fi.com/thelastben)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "8afdca63-eff3-4a9d-b4d9-127c0f028033",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "# Dependencies"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "be74b2d5-da96-4bf4-ae82-4fe4b8abc04c",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# Install the dependencies\n",
+ "\n",
+ "force_reinstall= False\n",
+ "\n",
+ "# Set to True only if you want to reinstall the dependencies.\n",
+ "\n",
+ "\n",
+ "#--------------------\n",
+ "with open('/dev/null', 'w') as devnull:import requests, os, time, importlib;open('/notebooks/mainpaperspacev2.py', 'wb').write(requests.get('https://huggingface.co/datasets/TheLastBen/PPS/raw/main/Scripts/mainpaperspacev2.py').content);os.chdir('/notebooks');time.sleep(3);import mainpaperspacev2;importlib.reload(mainpaperspacev2);from mainpaperspacev2 import *;Deps(force_reinstall)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "7a4ef4a2-6863-4603-9254-a1e2a547ee38",
+ "metadata": {
+ "tags": []
+ },
+ "source": [
+ "# Download the model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a1ba734e-515b-4761-8c88-ef7f165d7971",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "Model_Version = \"768\"\n",
+ "\n",
+ "# Choices are : \"512\", \"768\"\n",
+ "\n",
+ "#-----------------------------------------------------------------------------------------------------------------------------------\n",
+ "\n",
+ "Custom_Model_Version = \"768\"\n",
+ "\n",
+ "# Choices are : \"512\", \"768\"\n",
+ "\n",
+ "Path_to_HuggingFace= \"\"\n",
+ "\n",
+ "# Load and finetune a model from Hugging Face, use the format \"profile/model\" like : runwayml/stable-diffusion-v1-5.\n",
+ "\n",
+ "CKPT_Path = \"\"\n",
+ "\n",
+ "# Load a CKPT model from the storage.\n",
+ "\n",
+ "CKPT_Link = \"\"\n",
+ "\n",
+ "# A CKPT direct link, huggingface CKPT link or a shared CKPT from gdrive.\n",
+ "\n",
+ "\n",
+ "#-------------\n",
+ "MODEL_NAMEv2=dlv2(Path_to_HuggingFace, CKPT_Path, CKPT_Link, Model_Version, Custom_Model_Version)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "4c6c4932-e614-4f5e-8d4a-4feca5ce54f5",
+ "metadata": {},
+ "source": [
+ "# Create/Load a Session"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "b6595c37-8ad2-45ff-a055-fe58c6663d2f",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "Session_Name = \"\"\n",
+ "\n",
+ "# Enter the session name; if it exists, it will be loaded, otherwise a new session will be created.\n",
+ "\n",
+ "Session_Link_optional = \"\"\n",
+ "\n",
+ "# Import a session from another gdrive, the shared gdrive link must point to the specific session's folder that contains the trained CKPT, remove any intermediary CKPTs.\n",
+ "\n",
+ "Model_Version = \"768\"\n",
+ "\n",
+ "# Ignore this if you're not loading a previous session that contains a trained model, choices are : \"512\", \"768\"\n",
+ "\n",
+ "\n",
+ "#-----------------\n",
+ "[PT, WORKSPACE, Session_Name, INSTANCE_NAME, OUTPUT_DIR, SESSION_DIR, CONCEPT_DIR, INSTANCE_DIR, CAPTIONS_DIR, MDLPTH, MODEL_NAMEv2, resumev2]=sessv2(Session_Name, Session_Link_optional, Model_Version, MODEL_NAMEv2 if 'MODEL_NAMEv2' in locals() else \"\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "5698de61-08d3-4d90-83ef-f882ed956d01",
+ "metadata": {},
+ "source": [
+ "# Instance Images"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "bc2f8f28-226e-45b8-8257-804bbb711f56",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "Remove_existing_instance_images= True\n",
+ "\n",
+ "# Set to False to keep the existing instance images if any.\n",
+ "\n",
+ "\n",
+ "IMAGES_FOLDER_OPTIONAL=\"\"\n",
+ "\n",
+ "# If you prefer to specify the folder of the pictures directly instead of uploading, the pictures will be added to the existing (if any) instance images. Leave EMPTY to upload.\n",
+ "\n",
+ "\n",
+ "Smart_crop_images= True\n",
+ "\n",
+ "# Automatically crop your input images.\n",
+ "\n",
+ "\n",
+ "Crop_size = 768\n",
+ "\n",
+ "# Choices: \"512\", \"576\", \"640\", \"704\", \"768\", \"832\", \"896\", \"960\", \"1024\"\n",
+ "\n",
+ "# Check out this example for naming : https://i.imgur.com/d2lD3rz.jpeg\n",
+ "\n",
+ "\n",
+ "#-----------------\n",
+ "uplder(Remove_existing_instance_images, Smart_crop_images, Crop_size, IMAGES_FOLDER_OPTIONAL, INSTANCE_DIR, CAPTIONS_DIR, False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "0e93924f-a6bf-45d5-aa77-915ad7385dcd",
+ "metadata": {},
+ "source": [
+ "# Manual Captioning"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "c5dbcb29-b42f-4cfc-9be8-83355838d5a2",
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [],
+ "source": [
+ "# Open a tool to manually caption the instance images.\n",
+ "\n",
+ "#-----------------\n",
+ "caption(CAPTIONS_DIR, INSTANCE_DIR)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "c90140c1-6c91-4cae-a222-e1a746957f95",
+ "metadata": {},
+ "source": [
+ "# Concept Images"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "55c27688-8601-4943-b61d-fc48b9ded067",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "Remove_existing_concept_images= True\n",
+ "\n",
+ "# Set to False to keep the existing concept images if any.\n",
+ "\n",
+ "\n",
+ "IMAGES_FOLDER_OPTIONAL=\"\"\n",
+ "\n",
+ "# If you prefer to specify the folder of the pictures directly instead of uploading, the pictures will be added to the existing (if any) concept images. Leave EMPTY to upload.\n",
+ "\n",
+ "\n",
+ "#-----------------\n",
+ "uplder(Remove_existing_concept_images, True, 512, IMAGES_FOLDER_OPTIONAL, CONCEPT_DIR, CAPTIONS_DIR, True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "2a4aa42a-fd68-41ad-9ba7-da99f834e2c1",
+ "metadata": {},
+ "source": [
+ "# Dreambooth"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "612d8335-b984-4f34-911d-5457ff98e507",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "Resume_Training = False\n",
+ "\n",
+ "# If you're not satisfied with the result, set to True and run the cell again to continue training the current model.\n",
+ "\n",
+ "\n",
+ "UNet_Training_Steps=850\n",
+ "\n",
+ "UNet_Learning_Rate = \"6e-6\"\n",
+ "\n",
+ "# If you use 10 images, use 650 steps; if you're not satisfied with the result, resume training for another 200 steps with a lower learning rate (8e-6), and so on ...\n",
+ "\n",
+ "\n",
+ "Text_Encoder_Training_Steps=300\n",
+ "\n",
+ "Text_Encoder_Learning_Rate= \"1e-6\"\n",
+ "\n",
+ "# 350-600 steps is enough for a small dataset, keep this number small to avoid overfitting, set to 0 to disable, set it to 0 before resuming training if it is already trained.\n",
+ "\n",
+ "\n",
+ "Text_Encoder_Concept_Training_Steps=0\n",
+ "\n",
+ "# Suitable for training a style/concept as it acts as regularization, with a minimum of 300 steps, 1 step/image is enough to train the concept(s), set to 0 to disable, set both the settings above to 0 to finetune only the text_encoder on the concept, set it to 0 before resuming training if it is already trained.\n",
+ "\n",
+ "\n",
+ "External_Captions= False\n",
+ "\n",
+ "# Get the captions from a text file for each instance image.\n",
+ "\n",
+ "\n",
+ "Style_Training=False\n",
+ "\n",
+ "# Further reduces overfitting, suitable when training a style or a general theme, don't enable it at the beginning, enable it after training for at least 800 steps. (Has no effect when using External Captions)\n",
+ "\n",
+ "\n",
+ "Resolution = 768\n",
+ "\n",
+ "# Choices : \"512\", \"576\", \"640\", \"704\", \"768\", \"832\", \"896\", \"960\", \"1024\"\n",
+ "# Higher resolution = Higher quality, make sure the instance images are cropped to this selected size (or larger).\n",
+ "\n",
+ "#---------------------------------------------------------------\n",
+ "\n",
+ "Save_Checkpoint_Every_n_Steps = False\n",
+ "\n",
+ "Save_Checkpoint_Every=500\n",
+ "\n",
+ "# Minimum 200 steps between each save.\n",
+ "\n",
+ "\n",
+ "Start_saving_from_the_step=500\n",
+ "\n",
+ "# Start saving intermediary checkpoints from this step.\n",
+ "\n",
+ "\n",
+ "#-----------------\n",
+ "resumev2=dbtrainv2(Resume_Training, UNet_Training_Steps, UNet_Learning_Rate, Text_Encoder_Training_Steps, Text_Encoder_Concept_Training_Steps, Text_Encoder_Learning_Rate, Style_Training, Resolution, MODEL_NAMEv2, SESSION_DIR, INSTANCE_DIR, CONCEPT_DIR, CAPTIONS_DIR, External_Captions, INSTANCE_NAME, Session_Name, OUTPUT_DIR, PT, resumev2, Save_Checkpoint_Every_n_Steps, Start_saving_from_the_step, Save_Checkpoint_Every)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "bf6f2232-60b3-41c5-bea6-b0dcc4aef937",
+ "metadata": {},
+ "source": [
+ "# Test the Trained Model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "1263a084-b142-4e63-a0aa-2706673a4355",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "Previous_Session_Name=\"\"\n",
+ "\n",
+ "# Leave empty if you want to use the current trained model.\n",
+ "\n",
+ "\n",
+ "Custom_Path = \"\"\n",
+ "\n",
+ "# Input the full path to a desired model.\n",
+ "\n",
+ "\n",
+ "User = \"\" \n",
+ "\n",
+ "Password= \"\"\n",
+ "\n",
+ "# Add credentials to your Gradio interface (optional).\n",
+ "\n",
+ "\n",
+ "Use_localtunnel = False\n",
+ "\n",
+ "# If you have trouble using the Gradio server, use localtunnel instead.\n",
+ "\n",
+ "\n",
+ "#-----------------\n",
+ "configf=test(Custom_Path, Previous_Session_Name, Session_Name, User, Password, Use_localtunnel) if 'Session_Name' in locals() else test(Custom_Path, Previous_Session_Name, \"\", User, Password, Use_localtunnel)\n",
+ "!python /notebooks/sd/stable-diffusion-webui/webui.py $configf"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "53ccbcaf-3319-44f5-967b-ecbdfa9d0e78",
+ "metadata": {},
+ "source": [
+ "# Upload The Trained Model to Hugging Face"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "2c9cb205-d828-4e51-9943-f337bd410ea8",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Save it to your personal profile or contribute to the public [library of concepts](https://huggingface.co/sd-dreambooth-library)\n",
+ "\n",
+ "Name_of_your_concept = \"\"\n",
+ "\n",
+ "# Leave empty if you want to name your concept the same as the current session.\n",
+ "\n",
+ "\n",
+ "Save_concept_to = \"My_Profile\"\n",
+ "\n",
+ "# Choices : \"Public_Library\", \"My_Profile\".\n",
+ "\n",
+ "\n",
+ "hf_token_write = \"\"\n",
+ "\n",
+ "# Create a write access token here : https://huggingface.co/settings/tokens, go to \"New token\" -> Role : Write, a regular read token won't work here.\n",
+ "\n",
+ "\n",
+ "#---------------------------------\n",
+ "hfv2(Name_of_your_concept, Save_concept_to, hf_token_write, INSTANCE_NAME, OUTPUT_DIR, Session_Name, MDLPTH)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "id": "881d80a3-4ebf-41bc-b68f-ac1cacb080f3",
+ "metadata": {},
+ "source": [
+ "# Free up space"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "7403744d-cc45-419f-88ac-5475fa0f7f45",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Display a list of sessions from which you can remove any session you don't need anymore\n",
+ "\n",
+ "#-------------------------\n",
+ "clean()"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.9.13"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+ }