crystantine committed on
Commit
6489309
·
verified ·
1 Parent(s): 0edb872

Upload 24 files

Files changed (24)
  1. README.md +250 -0
  2. advanced.png +3 -0
  3. app.py +1015 -0
  4. flags.png +3 -0
  5. flow.gif +3 -0
  6. icon.png +3 -0
  7. install.js +96 -0
  8. models/.gitkeep +0 -0
  9. models/clip/.gitkeep +0 -0
  10. models/unet/.gitkeep +0 -0
  11. models/vae/.gitkeep +0 -0
  12. outputs/.gitkeep +0 -0
  13. pinokio.js +95 -0
  14. pinokio_meta.json +38 -0
  15. publish_to_hf.png +3 -0
  16. requirements.txt +34 -0
  17. reset.js +13 -0
  18. sample.png +3 -0
  19. sample_fields.png +3 -0
  20. screenshot.png +3 -0
  21. seed.gif +3 -0
  22. start.js +34 -0
  23. torch.js +75 -0
  24. update.js +46 -0
README.md ADDED
@@ -0,0 +1,250 @@
1
+ # Flux Gym
2
+
3
+ Dead simple web UI for training FLUX LoRA **with LOW VRAM (12GB/16GB/20GB) support.**
4
+
5
+ - **Frontend:** The WebUI forked from [AI-Toolkit](https://github.com/ostris/ai-toolkit) (Gradio UI created by https://x.com/multimodalart)
6
+ - **Backend:** The training script, powered by [Kohya Scripts](https://github.com/kohya-ss/sd-scripts)
7
+
8
+ FluxGym supports 100% of Kohya sd-scripts features through an [Advanced](#advanced) tab, which is hidden by default.
9
+
10
+ ![screenshot.png](screenshot.png)
11
+
12
+ ---
13
+
14
+
15
+ # What is this?
16
+
17
+ 1. I wanted a super simple UI for training Flux LoRAs
18
+ 2. The [AI-Toolkit](https://github.com/ostris/ai-toolkit) project is great, and the Gradio UI contribution by [@multimodalart](https://x.com/multimodalart) is perfect, but the project only works with 24GB VRAM.
19
+ 3. [Kohya Scripts](https://github.com/kohya-ss/sd-scripts) are very flexible and powerful for training FLUX, but you need to run them from the terminal.
20
+ 4. What if you could have the simplicity of AI-Toolkit WebUI and the flexibility of Kohya Scripts?
21
+ 5. Flux Gym was born. It supports 12GB, 16GB, and 20GB VRAM, and is extensible since it uses Kohya Scripts underneath.
22
+
23
+ ---
24
+
25
+ # News
26
+
27
+ - September 16: Added "Publish to Huggingface" + 100% Kohya sd-scripts feature support: https://x.com/cocktailpeanut/status/1835719701172756592
28
+ - September 11: Automatic Sample Image Generation + Custom Resolution: https://x.com/cocktailpeanut/status/1833881392482066638
29
+
30
+ ---
31
+
32
+ # How people are using Fluxgym
33
+
34
+ Here are people using Fluxgym to train LoRAs locally and sharing their experience:
35
+
36
+ https://pinokio.computer/item?uri=https://github.com/cocktailpeanut/fluxgym
37
+
38
+
39
+ # More Info
40
+
41
+ To learn more, check out this X thread: https://x.com/cocktailpeanut/status/1832084951115972653
42
+
43
+ # Install
44
+
45
+ ## 1. One-Click Install
46
+
47
+ You can automatically install and launch everything locally with Pinokio 1-click launcher: https://pinokio.computer/item?uri=https://github.com/cocktailpeanut/fluxgym
48
+
49
+
50
+ ## 2. Install Manually
51
+
52
+ First clone Fluxgym and kohya-ss/sd-scripts:
53
+
54
+ ```
55
+ git clone https://github.com/cocktailpeanut/fluxgym
56
+ cd fluxgym
57
+ git clone -b sd3 https://github.com/kohya-ss/sd-scripts
58
+ ```
59
+
60
+ Your folder structure will look like this:
61
+
62
+ ```
63
+ /fluxgym
64
+ app.py
65
+ requirements.txt
66
+ /sd-scripts
67
+ ```
68
+
69
+ Now create and activate a venv from the root `fluxgym` folder:
70
+
71
+ If you're on Windows:
72
+
73
+ ```
74
+ python -m venv env
75
+ env\Scripts\activate
76
+ ```
77
+
78
+ If you're on Linux:
79
+
80
+ ```
81
+ python -m venv env
82
+ source env/bin/activate
83
+ ```
84
+
85
+ This will create an `env` folder directly under the `fluxgym` folder:
86
+
87
+ ```
88
+ /fluxgym
89
+ app.py
90
+ requirements.txt
91
+ /sd-scripts
92
+ /env
93
+ ```
94
+
95
+ Now go into the `sd-scripts` folder and install its dependencies into the activated environment:
96
+
97
+ ```
98
+ cd sd-scripts
99
+ pip install -r requirements.txt
100
+ ```
101
+
102
+ Now come back to the root folder and install the app dependencies:
103
+
104
+ ```
105
+ cd ..
106
+ pip install -r requirements.txt
107
+ ```
108
+
109
+ Finally, install PyTorch Nightly:
110
+
111
+ ```
112
+ pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121
113
+ ```
114
+
115
+ Now let's download the model checkpoints.
116
+
117
+ First, download the following models under the `models/clip` folder:
118
+
119
+ - https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors?download=true
120
+ - https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors?download=true
121
+
122
+ Second, download the following model under the `models/vae` folder:
123
+
124
+ - https://huggingface.co/cocktailpeanut/xulf-dev/resolve/main/ae.sft?download=true
125
+
126
+ Finally, download the following model under the `models/unet` folder:
127
+
128
+ - https://huggingface.co/cocktailpeanut/xulf-dev/resolve/main/flux1-dev.sft?download=true
129
+
130
+ The resulting file structure will look something like this:
131
+
132
+ ```
133
+ /models
134
+ /clip
135
+ clip_l.safetensors
136
+ t5xxl_fp16.safetensors
137
+ /unet
138
+ flux1-dev.sft
139
+ /vae
140
+ ae.sft
141
+ /sd-scripts
142
+ /outputs
143
+ /env
144
+ app.py
145
+ requirements.txt
146
+ ...
147
+ ```
148
+
149
+ # Start
150
+
151
+ Go back to the root `fluxgym` folder and, with the venv activated, run:
152
+
153
+ ```
154
+ python app.py
155
+ ```
156
+
157
+ > Make sure to have the venv activated before running `python app.py`.
158
+ >
159
+ > Windows: `env\Scripts\activate`
160
+ > Linux: `source env/bin/activate`
161
+
162
+ # Usage
163
+
164
+ The usage is pretty straightforward:
165
+
166
+ 1. Enter the LoRA info
167
+ 2. Upload images and caption them (using the trigger word)
168
+ 3. Click "start".
169
+
170
+ That's all!
171
+
172
+ ![flow.gif](flow.gif)
173
+
174
+ # Configuration
175
+
176
+ ## Sample Images
177
+
178
+ By default, Fluxgym doesn't generate any sample images during training.
179
+
180
+ You can, however, configure Fluxgym to automatically generate sample images every N steps. Here's what it looks like:
181
+
182
+ ![sample.png](sample.png)
183
+
184
+ To turn this on, just set the two fields:
185
+
186
+ 1. **Sample Image Prompts:** These prompts will be used to automatically generate images during training. If you want multiple prompts, separate each prompt with a new line.
187
+ 2. **Sample Image Every N Steps:** If your "Expected training steps" is 960 and your "Sample Image Every N Steps" is 100, images will be generated at steps 100, 200, 300, 400, 500, 600, 700, 800, and 900, for EACH prompt (see the sketch below).
188
+
189
+ ![sample_fields.png](sample_fields.png)
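+
+ For reference, the "Expected training steps" value is simply epochs × images × repeats (that is how `app.py` computes it). Here's a minimal sketch of where the sample checkpoints land, assuming a hypothetical 6-image dataset and the default settings:
+
+ ```
+ max_train_epochs = 16     # UI default
+ num_images = 6            # hypothetical dataset size
+ num_repeats = 10          # UI default ("Repeat trains per image")
+
+ expected_training_steps = max_train_epochs * num_images * num_repeats   # 960
+ sample_every_n_steps = 100
+ print(list(range(sample_every_n_steps, expected_training_steps, sample_every_n_steps)))
+ # [100, 200, 300, 400, 500, 600, 700, 800, 900]
+ ```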
190
+
191
+ ## Advanced Sample Images
192
+
193
+ Thanks to the built-in syntax from [kohya/sd-scripts](https://github.com/kohya-ss/sd-scripts?tab=readme-ov-file#sample-image-generation-during-training), you can control exactly how the sample images are generated during the training phase:
194
+
195
+ Let's say the trigger word is **hrld person**. Normally you would try sample prompts like:
196
+
197
+ ```
198
+ hrld person is riding a bike
199
+ hrld person is a body builder
200
+ hrld person is a rock star
201
+ ```
202
+
203
+ But in every prompt you can include **advanced flags** to fully control the image generation process. For example, the `--d` flag lets you specify the seed.
204
+
205
+ Specifying a seed means every sample image will use that exact seed, so you can literally watch the LoRA evolve. Here's an example:
206
+
207
+ ```
208
+ hrld person is riding a bike --d 42
209
+ hrld person is a body builder --d 42
210
+ hrld person is a rock star --d 42
211
+ ```
212
+
213
+ Here's what it looks like in the UI:
214
+
215
+ ![flags.png](flags.png)
216
+
217
+ And here are the results:
218
+
219
+ ![seed.gif](seed.gif)
220
+
221
+ In addition to the `--d` flag, here are other flags you can use:
222
+
223
+
224
+ - `--n`: Negative prompt up to the next option.
225
+ - `--w`: Specifies the width of the generated image.
226
+ - `--h`: Specifies the height of the generated image.
227
+ - `--d`: Specifies the seed of the generated image.
228
+ - `--l`: Specifies the CFG scale of the generated image.
229
+ - `--s`: Specifies the number of steps in the generation.
230
+
231
+ Prompt weighting with `( )` and `[ ]` also works. (Learn more about [Attention/Emphasis](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#attentionemphasis))
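+
+ Putting the flags together, a single sample prompt line might look like this (the specific values are purely illustrative):
+
+ ```
+ hrld person is a rock star --d 42 --w 1024 --h 1024 --s 20 --l 3.5 --n blurry, low quality
+ ```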
232
+
233
+ ## Publishing to Huggingface
234
+
235
+ 1. Get your Huggingface Token from https://huggingface.co/settings/tokens
236
+ 2. Enter the token in the "Huggingface Token" field and click "Login". This will save the token in a local file named `HF_TOKEN` (all local and private).
237
+ 3. Once you're logged in, you will be able to select a trained LoRA from the dropdown, edit the name if you want, and publish to Huggingface.
238
+
239
+ ![publish_to_hf.png](publish_to_hf.png)
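+
+ If you'd rather publish from a script than from the UI, here's a minimal sketch using `huggingface_hub` directly (the repo id and folder name are placeholders; the UI itself uploads through kohya's `huggingface_util` instead):
+
+ ```
+ from huggingface_hub import HfApi
+
+ # Reuse the same token file that the "Login" button writes
+ api = HfApi(token=open("HF_TOKEN").read().strip())
+
+ repo_id = "your-username/my-flux-lora"      # placeholder
+ api.create_repo(repo_id, repo_type="model", exist_ok=True)
+ api.upload_folder(
+     folder_path="outputs/my-flux-lora",     # a trained LoRA folder under outputs/
+     repo_id=repo_id,
+     repo_type="model",
+ )
+ ```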
240
+
241
+
242
+ ## Advanced
243
+
244
+ The advanced tab is automatically constructed by parsing the launch flags available in the latest version of [kohya sd-scripts](https://github.com/kohya-ss/sd-scripts). This means Fluxgym is a full-fledged UI for the Kohya scripts.
245
+
246
+ > By default, the advanced tab is hidden. Click the "Advanced options" accordion to expand it.
247
+
248
+ ![advanced.png](advanced.png)
249
+
250
+
advanced.png ADDED

Git LFS Details

  • SHA256: 15077625eb185463cc0dd383157879fe3b73ebb7305a40f5ed2af14a49bca41d
  • Pointer size: 131 Bytes
  • Size of remote file: 182 kB
app.py ADDED
@@ -0,0 +1,1015 @@
1
+ import os
2
+ import sys
3
+ import subprocess
4
+ import gradio as gr
5
+ from PIL import Image
6
+ import torch
7
+ import uuid
8
+ import shutil
9
+ import json
10
+ import yaml
11
+ from slugify import slugify
12
+ from transformers import AutoProcessor, AutoModelForCausalLM
13
+ from gradio_logsview import LogsView, LogsViewRunner
14
+ from huggingface_hub import hf_hub_download, HfApi
15
+ from library import flux_train_utils, huggingface_util
16
+ from argparse import Namespace
17
+ import train_network
18
+ import toml
19
+ import re
20
+ os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"
21
+ os.environ['GRADIO_ANALYTICS_ENABLED'] = '0'
22
+ sys.path.insert(0, os.getcwd())
23
+ sys.path.append(os.path.join(os.path.dirname(__file__), 'sd-scripts'))
24
+ MAX_IMAGES = 150
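+ # Build the Hugging Face model card (YAML front matter + markdown body), attaching the latest sample images as gallery widgets.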
25
+ def readme(lora_name, instance_prompt, sample_prompts):
26
+ base_model = "black-forest-labs/FLUX.1-dev"
27
+ license = "other"
28
+ license_name = "flux-1-dev-non-commercial-license"
29
+ license_link = "https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md"
30
+ tags = [ "text-to-image", "flux", "lora", "diffusers", "template:sd-lora", "fluxgym" ]
31
+
32
+ # widgets
33
+ widgets = []
34
+ sample_image_paths = []
35
+ output_name = slugify(lora_name)
36
+ samples_dir = resolve_path_without_quotes(f"outputs/{output_name}/sample")
37
+ for filename in os.listdir(samples_dir):
38
+ # Filename Schema: [name]_[steps]_[index]_[timestamp].png
39
+ match = re.search(r"_(\d+)_(\d+)_(\d+)\.png$", filename)
40
+ if match:
41
+ steps, index, timestamp = int(match.group(1)), int(match.group(2)), int(match.group(3))
42
+ sample_image_paths.append((steps, index, f"sample/{filename}"))
43
+
44
+ # Sort by step count, newest (highest step) first
45
+ sample_image_paths.sort(key=lambda x: x[0], reverse=True)
46
+
47
+ final_sample_image_paths = sample_image_paths[:len(sample_prompts)]
48
+ final_sample_image_paths.sort(key=lambda x: x[1])
49
+ for i, prompt in enumerate(sample_prompts):
50
+ _, _, image_path = final_sample_image_paths[i]
51
+ widgets.append(
52
+ {
53
+ "text": prompt,
54
+ "output": {
55
+ "url": image_path
56
+ },
57
+ }
58
+ )
59
+ dtype = "torch.bfloat16"
60
+ # Construct the README content
61
+ readme_content = f"""---
62
+ tags:
63
+ {yaml.dump(tags, indent=4).strip()}
64
+ {"widget:" if os.path.isdir(samples_dir) else ""}
65
+ {yaml.dump(widgets, indent=4).strip() if widgets else ""}
66
+ base_model: {base_model}
67
+ {"instance_prompt: " + instance_prompt if instance_prompt else ""}
68
+ license: {license}
69
+ {'license_name: ' + license_name if license == "other" else ""}
70
+ {'license_link: ' + license_link if license == "other" else ""}
71
+ ---
72
+
73
+ # {lora_name}
74
+
75
+ A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
76
+
77
+ <Gallery />
78
+
79
+ ## Trigger words
80
+
81
+ {"You should use `" + instance_prompt + "` to trigger the image generation." if instance_prompt else "No trigger words defined."}
82
+
83
+ ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
84
+
85
+ Weights for this model are available in Safetensors format.
86
+
87
+ """
88
+ return readme_content
89
+
90
+ def account_hf():
91
+ try:
92
+ with open("HF_TOKEN", "r") as file:
93
+ token = file.read()
94
+ api = HfApi(token=token)
95
+ try:
96
+ account = api.whoami()
97
+ return { "token": token, "account": account['name'] }
98
+ except:
99
+ return None
100
+ except:
101
+ return None
102
+
103
+ """
104
+ hf_logout.click(fn=logout_hf, outputs=[hf_token, hf_login, hf_logout, repo_owner])
105
+ """
106
+ def logout_hf():
107
+ os.remove("HF_TOKEN")
108
+ global current_account
109
+ current_account = account_hf()
110
+ print(f"current_account={current_account}")
111
+ return gr.update(value=""), gr.update(visible=True), gr.update(visible=False), gr.update(value="", visible=False)
112
+
113
+
114
+ """
115
+ hf_login.click(fn=login_hf, inputs=[hf_token], outputs=[hf_token, hf_login, hf_logout, repo_owner])
116
+ """
117
+ def login_hf(hf_token):
118
+ api = HfApi(token=hf_token)
119
+ try:
120
+ account = api.whoami()
121
+ if account != None:
122
+ if "name" in account:
123
+ with open("HF_TOKEN", "w") as file:
124
+ file.write(hf_token)
125
+ global current_account
126
+ current_account = account_hf()
127
+ return gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), gr.update(value=current_account["account"], visible=True)
128
+ return gr.update(), gr.update(), gr.update(), gr.update()
129
+ except:
130
+ print(f"incorrect hf_token")
131
+ return gr.update(), gr.update(), gr.update(), gr.update()
132
+
133
+ def upload_hf(lora_rows, repo_owner, repo_name, repo_visibility, hf_token):
134
+ src = lora_rows
135
+ repo_id = f"{repo_owner}/{repo_name}"
136
+ gr.Info(f"Uploading to Huggingface. Please Stand by...", duration=None)
137
+ print(f"repo_id={repo_id} repo_visibility={repo_visibility} src={src}")
138
+ lora_name = os.path.basename(src)
139
+ dataset_toml_path = os.path.normpath(os.path.join(src, "dataset.toml"))
140
+ print(f"lora_name={lora_name}, dataset_toml_path={dataset_toml_path}")
141
+ with open(dataset_toml_path, 'r') as f:
142
+ config = toml.load(f)
143
+ concept_sentence = config['datasets'][0]['subsets'][0]['class_tokens']
144
+ print(f"concept_sentence={concept_sentence}")
145
+ # Generate README
146
+ output_name = slugify(lora_name)
147
+ print(f"lora_name {lora_name}, concept_sentence={concept_sentence}, output_name={output_name}")
148
+ sample_prompts_path = resolve_path_without_quotes(f"outputs/{output_name}/sample_prompts.txt")
149
+ with open(sample_prompts_path, "r", encoding="utf-8") as f:
150
+ lines = f.readlines()
151
+ sample_prompts = [line.strip() for line in lines if len(line.strip()) > 0 and line[0] != "#"]
152
+ md = readme(lora_name, concept_sentence, sample_prompts)
153
+ # Write README
154
+ readme_path = resolve_path_without_quotes(f"outputs/{output_name}/README.md")
155
+ with open(readme_path, "w", encoding="utf-8") as f:
156
+ f.write(md)
157
+ args = Namespace(
158
+ huggingface_repo_id=repo_id,
159
+ huggingface_repo_type="model",
160
+ huggingface_repo_visibility=repo_visibility,
161
+ huggingface_path_in_repo="",
162
+ huggingface_token=hf_token,
163
+ async_upload=False
164
+ )
165
+ print(f"upload_hf args={args}")
166
+ huggingface_util.upload(args=args, src=src)
167
+ gr.Info(f"[Upload Complete] https://huggingface.co/{repo_id}", duration=None)
168
+
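+ # Build the Gradio updates that reveal one captioning row per uploaded image and prefill each caption from a matching .txt file (or the trigger word).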
169
+ def load_captioning(uploaded_files, concept_sentence):
170
+ uploaded_images = [file for file in uploaded_files if not file.endswith('.txt')]
171
+ txt_files = [file for file in uploaded_files if file.endswith('.txt')]
172
+ txt_files_dict = {os.path.splitext(os.path.basename(txt_file))[0]: txt_file for txt_file in txt_files}
173
+ updates = []
174
+ if len(uploaded_images) <= 1:
175
+ raise gr.Error(
176
+ "Please upload at least 2 images to train your model (the ideal number with default settings is between 4-30)"
177
+ )
178
+ elif len(uploaded_images) > MAX_IMAGES:
179
+ raise gr.Error(f"For now, only {MAX_IMAGES} or less images are allowed for training")
180
+ # Update for the captioning_area
181
+ # for _ in range(3):
182
+ updates.append(gr.update(visible=True))
183
+ # Update visibility and image for each captioning row and image
184
+ for i in range(1, MAX_IMAGES + 1):
185
+ # Determine if the current row and image should be visible
186
+ visible = i <= len(uploaded_images)
187
+
188
+ # Update visibility of the captioning row
189
+ updates.append(gr.update(visible=visible))
190
+
191
+ # Update for image component - display image if available, otherwise hide
192
+ image_value = uploaded_images[i - 1] if visible else None
193
+ updates.append(gr.update(value=image_value, visible=visible))
194
+
195
+ corresponding_caption = False
196
+ if(image_value):
197
+ base_name = os.path.splitext(os.path.basename(image_value))[0]
198
+ if base_name in txt_files_dict:
199
+ with open(txt_files_dict[base_name], 'r') as file:
200
+ corresponding_caption = file.read()
201
+
202
+ # Update value of captioning area
203
+ text_value = corresponding_caption if visible and corresponding_caption else concept_sentence if visible and concept_sentence else None
204
+ updates.append(gr.update(value=text_value, visible=visible))
205
+
206
+ # Update for the sample caption area
207
+ updates.append(gr.update(visible=True))
208
+ updates.append(gr.update(visible=True))
209
+
210
+ return updates
211
+
212
+ def hide_captioning():
213
+ return gr.update(visible=False), gr.update(visible=False)
214
+
215
+ def resize_image(image_path, output_path, size):
216
+ with Image.open(image_path) as img:
217
+ width, height = img.size
218
+ if width < height:
219
+ new_width = size
220
+ new_height = int((size/width) * height)
221
+ else:
222
+ new_height = size
223
+ new_width = int((size/height) * width)
224
+ print(f"resize {image_path} : {new_width}x{new_height}")
225
+ img_resized = img.resize((new_width, new_height), Image.Resampling.LANCZOS)
226
+ img_resized.save(output_path)
227
+
228
+ def create_dataset(destination_folder, size, *inputs):
229
+ print("Creating dataset")
230
+ images = inputs[0]
231
+ if not os.path.exists(destination_folder):
232
+ os.makedirs(destination_folder)
233
+
234
+ for index, image in enumerate(images):
235
+ # copy the images to the datasets folder
236
+ new_image_path = shutil.copy(image, destination_folder)
237
+
238
+ # if it's a caption text file skip the next bit
239
+ ext = os.path.splitext(new_image_path)[-1].lower()
240
+ if ext == '.txt':
241
+ continue
242
+
243
+ # resize the images
244
+ resize_image(new_image_path, new_image_path, size)
245
+
246
+ # copy the captions
247
+
248
+ original_caption = inputs[index + 1]
249
+
250
+ image_file_name = os.path.basename(new_image_path)
251
+ caption_file_name = os.path.splitext(image_file_name)[0] + ".txt"
252
+ caption_path = resolve_path_without_quotes(os.path.join(destination_folder, caption_file_name))
253
+ print(f"image_path={new_image_path}, caption_path = {caption_path}, original_caption={original_caption}")
254
+ with open(caption_path, 'w') as file:
255
+ file.write(original_caption)
256
+
257
+ print(f"destination_folder {destination_folder}")
258
+ return destination_folder
259
+
260
+
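+ # Caption every uploaded image with Florence-2 (loaded on demand), prepend the trigger word, and yield results so the UI updates per image before the model is freed.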
261
+ def run_captioning(images, concept_sentence, *captions):
262
+ print(f"run_captioning")
263
+ print(f"concept sentence {concept_sentence}")
264
+ print(f"captions {captions}")
265
+ #Load internally to not consume resources for training
266
+ device = "cuda" if torch.cuda.is_available() else "cpu"
267
+ print(f"device={device}")
268
+ torch_dtype = torch.float16
269
+ model = AutoModelForCausalLM.from_pretrained(
270
+ "multimodalart/Florence-2-large-no-flash-attn", torch_dtype=torch_dtype, trust_remote_code=True
271
+ ).to(device)
272
+ processor = AutoProcessor.from_pretrained("multimodalart/Florence-2-large-no-flash-attn", trust_remote_code=True)
273
+
274
+ captions = list(captions)
275
+ for i, image_path in enumerate(images):
276
+ print(captions[i])
277
+ if isinstance(image_path, str): # If image is a file path
278
+ image = Image.open(image_path).convert("RGB")
279
+
280
+ prompt = "<DETAILED_CAPTION>"
281
+ inputs = processor(text=prompt, images=image, return_tensors="pt").to(device, torch_dtype)
282
+ print(f"inputs {inputs}")
283
+
284
+ generated_ids = model.generate(
285
+ input_ids=inputs["input_ids"], pixel_values=inputs["pixel_values"], max_new_tokens=1024, num_beams=3
286
+ )
287
+ print(f"generated_ids {generated_ids}")
288
+
289
+ generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
290
+ print(f"generated_text: {generated_text}")
291
+ parsed_answer = processor.post_process_generation(
292
+ generated_text, task=prompt, image_size=(image.width, image.height)
293
+ )
294
+ print(f"parsed_answer = {parsed_answer}")
295
+ caption_text = parsed_answer["<DETAILED_CAPTION>"].replace("The image shows ", "")
296
+ print(f"caption_text = {caption_text}, concept_sentence={concept_sentence}")
297
+ if concept_sentence:
298
+ caption_text = f"{concept_sentence} {caption_text}"
299
+ captions[i] = caption_text
300
+
301
+ yield captions
302
+ model.to("cpu")
303
+ del model
304
+ del processor
305
+ if torch.cuda.is_available():
306
+ torch.cuda.empty_cache()
307
+
308
+ def recursive_update(d, u):
309
+ for k, v in u.items():
310
+ if isinstance(v, dict) and v:
311
+ d[k] = recursive_update(d.get(k, {}), v)
312
+ else:
313
+ d[k] = v
314
+ return d
315
+
316
+
317
+ def resolve_path(p):
318
+ current_dir = os.path.dirname(os.path.abspath(__file__))
319
+ norm_path = os.path.normpath(os.path.join(current_dir, p))
320
+ return f"\"{norm_path}\""
321
+ def resolve_path_without_quotes(p):
322
+ current_dir = os.path.dirname(os.path.abspath(__file__))
323
+ norm_path = os.path.normpath(os.path.join(current_dir, p))
324
+ return norm_path
325
+
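+ # Assemble the train.sh / train.bat contents: an `accelerate launch` of sd-scripts/flux_train_network.py with VRAM-dependent optimizer flags plus any changed Advanced-tab flags.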
326
+ def gen_sh(
327
+ output_name,
328
+ resolution,
329
+ seed,
330
+ workers,
331
+ learning_rate,
332
+ network_dim,
333
+ max_train_epochs,
334
+ save_every_n_epochs,
335
+ timestep_sampling,
336
+ guidance_scale,
337
+ vram,
338
+ sample_prompts,
339
+ sample_every_n_steps,
340
+ *advanced_components
341
+ ):
342
+
343
+ print(f"gen_sh: network_dim:{network_dim}, max_train_epochs={max_train_epochs}, save_every_n_epochs={save_every_n_epochs}, timestep_sampling={timestep_sampling}, guidance_scale={guidance_scale}, vram={vram}, sample_prompts={sample_prompts}, sample_every_n_steps={sample_every_n_steps}")
344
+
345
+ output_dir = resolve_path(f"outputs/{output_name}")
346
+ sample_prompts_path = resolve_path(f"outputs/{output_name}/sample_prompts.txt")
347
+
348
+ line_break = "\\"
349
+ file_type = "sh"
350
+ if sys.platform == "win32":
351
+ line_break = "^"
352
+ file_type = "bat"
353
+
354
+ ############# Sample args ########################
355
+ sample = ""
356
+ if len(sample_prompts) > 0 and sample_every_n_steps > 0:
357
+ sample = f"""--sample_prompts={sample_prompts_path} --sample_every_n_steps="{sample_every_n_steps}" {line_break}"""
358
+
359
+
360
+ ############# Optimizer args ########################
361
+ if vram == "16G":
362
+ # 16G VRAM
363
+ optimizer = f"""--optimizer_type adafactor {line_break}
364
+ --optimizer_args "relative_step=False" "scale_parameter=False" "warmup_init=False" {line_break}
365
+ --lr_scheduler constant_with_warmup {line_break}
366
+ --max_grad_norm 0.0 {line_break}"""
367
+ elif vram == "12G":
368
+ # 12G VRAM
369
+ optimizer = f"""--optimizer_type adafactor {line_break}
370
+ --optimizer_args "relative_step=False" "scale_parameter=False" "warmup_init=False" {line_break}
371
+ --split_mode {line_break}
372
+ --network_args "train_blocks=single" {line_break}
373
+ --lr_scheduler constant_with_warmup {line_break}
374
+ --max_grad_norm 0.0 {line_break}"""
375
+ else:
376
+ # 20G+ VRAM
377
+ optimizer = f"--optimizer_type adamw8bit {line_break}"
378
+
379
+
380
+ #######################################################
381
+ pretrained_model_path = resolve_path("models/unet/flux1-dev.sft")
382
+ clip_path = resolve_path("models/clip/clip_l.safetensors")
383
+ t5_path = resolve_path("models/clip/t5xxl_fp16.safetensors")
384
+ ae_path = resolve_path("models/vae/ae.sft")
385
+ sh = f"""accelerate launch {line_break}
386
+ --mixed_precision bf16 {line_break}
387
+ --num_cpu_threads_per_process 1 {line_break}
388
+ sd-scripts/flux_train_network.py {line_break}
389
+ --pretrained_model_name_or_path {pretrained_model_path} {line_break}
390
+ --clip_l {clip_path} {line_break}
391
+ --t5xxl {t5_path} {line_break}
392
+ --ae {ae_path} {line_break}
393
+ --cache_latents_to_disk {line_break}
394
+ --save_model_as safetensors {line_break}
395
+ --sdpa --persistent_data_loader_workers {line_break}
396
+ --max_data_loader_n_workers {workers} {line_break}
397
+ --seed {seed} {line_break}
398
+ --gradient_checkpointing {line_break}
399
+ --mixed_precision bf16 {line_break}
400
+ --save_precision bf16 {line_break}
401
+ --network_module networks.lora_flux {line_break}
402
+ --network_dim {network_dim} {line_break}
403
+ {optimizer}{sample}
404
+ --learning_rate {learning_rate} {line_break}
405
+ --cache_text_encoder_outputs {line_break}
406
+ --cache_text_encoder_outputs_to_disk {line_break}
407
+ --fp8_base {line_break}
408
+ --highvram {line_break}
409
+ --max_train_epochs {max_train_epochs} {line_break}
410
+ --save_every_n_epochs {save_every_n_epochs} {line_break}
411
+ --dataset_config {resolve_path(f"outputs/{output_name}/dataset.toml")} {line_break}
412
+ --output_dir {output_dir} {line_break}
413
+ --output_name {output_name} {line_break}
414
+ --timestep_sampling {timestep_sampling} {line_break}
415
+ --discrete_flow_shift 3.1582 {line_break}
416
+ --model_prediction_type raw {line_break}
417
+ --guidance_scale {guidance_scale} {line_break}
418
+ --loss_type l2 {line_break}"""
419
+
420
+
421
+
422
+ ############# Advanced args ########################
423
+ global advanced_component_ids
424
+ global original_advanced_component_values
425
+
426
+ # check dirty
427
+ print(f"original_advanced_component_values = {original_advanced_component_values}")
428
+ advanced_flags = []
429
+ for i, current_value in enumerate(advanced_components):
430
+ # print(f"compare {advanced_component_ids[i]}: old={original_advanced_component_values[i]}, new={current_value}")
431
+ if original_advanced_component_values[i] != current_value:
432
+ # dirty
433
+ if current_value == True:
434
+ # Boolean
435
+ advanced_flags.append(advanced_component_ids[i])
436
+ else:
437
+ # string
438
+ advanced_flags.append(f"{advanced_component_ids[i]} {current_value}")
439
+
440
+ if len(advanced_flags) > 0:
441
+ advanced_flags_str = f" {line_break}\n ".join(advanced_flags)
442
+ sh = sh + "\n " + advanced_flags_str
443
+
444
+ return sh
445
+
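+ # Render the kohya dataset.toml (resolution, trigger word as class_tokens, num_repeats) pointing at the dataset folder.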
446
+ def gen_toml(
447
+ dataset_folder,
448
+ resolution,
449
+ class_tokens,
450
+ num_repeats
451
+ ):
452
+ toml = f"""[general]
453
+ shuffle_caption = false
454
+ caption_extension = '.txt'
455
+ keep_tokens = 1
456
+
457
+ [[datasets]]
458
+ resolution = {resolution}
459
+ batch_size = 1
460
+ keep_tokens = 1
461
+
462
+ [[datasets.subsets]]
463
+ image_dir = '{resolve_path_without_quotes(dataset_folder)}'
464
+ class_tokens = '{class_tokens}'
465
+ num_repeats = {num_repeats}"""
466
+ return toml
467
+
468
+ def update_total_steps(max_train_epochs, num_repeats, images):
469
+ try:
470
+ num_images = len(images)
471
+ total_steps = max_train_epochs * num_images * num_repeats
472
+ print(f"max_train_epochs={max_train_epochs} num_images={num_images}, num_repeats={num_repeats}, total_steps={total_steps}")
473
+ return gr.update(value = total_steps)
474
+ except:
475
+ print("")
476
+
477
+ def set_repo(lora_rows):
478
+ selected_name = os.path.basename(lora_rows)
479
+ return gr.update(value=selected_name)
480
+
481
+ def get_loras():
482
+ try:
483
+ outputs_path = resolve_path_without_quotes(f"outputs")
484
+ files = os.listdir(outputs_path)
485
+ folders = [os.path.join(outputs_path, item) for item in files if os.path.isdir(os.path.join(outputs_path, item)) and item != "sample"]
486
+ folders.sort(key=lambda file: os.path.getctime(file), reverse=True)
487
+ return folders
488
+ except Exception as e:
489
+ return []
490
+
491
+ def get_samples(lora_name):
492
+ output_name = slugify(lora_name)
493
+ try:
494
+ samples_path = resolve_path_without_quotes(f"outputs/{output_name}/sample")
495
+ files = [os.path.join(samples_path, file) for file in os.listdir(samples_path)]
496
+ files.sort(key=lambda file: os.path.getctime(file), reverse=True)
497
+ return files
498
+ except:
499
+ return []
500
+
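+ # Write train.sh/.bat, dataset.toml and sample_prompts.txt under outputs/<name>, then run the generated script and stream its logs to the UI.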
501
+ def start_training(
502
+ lora_name,
503
+ train_script,
504
+ train_config,
505
+ sample_prompts,
506
+ ):
507
+ # write custom script and toml
508
+ os.makedirs("models", exist_ok=True)
509
+ os.makedirs("outputs", exist_ok=True)
510
+ output_name = slugify(lora_name)
511
+ output_dir = resolve_path_without_quotes(f"outputs/{output_name}")
512
+ os.makedirs(output_dir, exist_ok=True)
513
+
514
+
515
+ file_type = "sh"
516
+ if sys.platform == "win32":
517
+ file_type = "bat"
518
+
519
+ sh_filename = f"train.{file_type}"
520
+ sh_filepath = resolve_path_without_quotes(f"outputs/{output_name}/{sh_filename}")
521
+ with open(sh_filepath, 'w', encoding="utf-8") as file:
522
+ file.write(train_script)
523
+ gr.Info(f"Generated train script at {sh_filename}")
524
+
525
+
526
+ dataset_path = resolve_path_without_quotes(f"outputs/{output_name}/dataset.toml")
527
+ with open(dataset_path, 'w', encoding="utf-8") as file:
528
+ file.write(train_config)
529
+ gr.Info(f"Generated dataset.toml")
530
+
531
+ sample_prompts_path = resolve_path_without_quotes(f"outputs/{output_name}/sample_prompts.txt")
532
+ with open(sample_prompts_path, 'w', encoding='utf-8') as file:
533
+ file.write(sample_prompts)
534
+ gr.Info(f"Generated sample_prompts.txt")
535
+
536
+ # Train
537
+ if sys.platform == "win32":
538
+ command = sh_filepath
539
+ else:
540
+ command = f"bash \"{sh_filepath}\""
541
+
542
+ # Use Popen to run the command and capture output in real-time
543
+ env = os.environ.copy()
544
+ env['PYTHONIOENCODING'] = 'utf-8'
545
+ runner = LogsViewRunner()
546
+ cwd = os.path.dirname(os.path.abspath(__file__))
547
+ gr.Info(f"Started training")
548
+ yield from runner.run_command([command], cwd=cwd)
549
+ yield runner.log(f"Runner: {runner}")
550
+ gr.Info(f"Training Complete. Check the outputs folder for the LoRA files.", duration=None)
551
+
552
+
553
+ def update(
554
+ lora_name,
555
+ resolution,
556
+ seed,
557
+ workers,
558
+ class_tokens,
559
+ learning_rate,
560
+ network_dim,
561
+ max_train_epochs,
562
+ save_every_n_epochs,
563
+ timestep_sampling,
564
+ guidance_scale,
565
+ vram,
566
+ num_repeats,
567
+ sample_prompts,
568
+ sample_every_n_steps,
569
+ *advanced_components,
570
+ ):
571
+ output_name = slugify(lora_name)
572
+ dataset_folder = str(f"datasets/{output_name}")
573
+ sh = gen_sh(
574
+ output_name,
575
+ resolution,
576
+ seed,
577
+ workers,
578
+ learning_rate,
579
+ network_dim,
580
+ max_train_epochs,
581
+ save_every_n_epochs,
582
+ timestep_sampling,
583
+ guidance_scale,
584
+ vram,
585
+ sample_prompts,
586
+ sample_every_n_steps,
587
+ *advanced_components,
588
+ )
589
+ toml = gen_toml(
590
+ dataset_folder,
591
+ resolution,
592
+ class_tokens,
593
+ num_repeats
594
+ )
595
+ return gr.update(value=sh), gr.update(value=toml), dataset_folder
596
+
597
+ """
598
+ demo.load(fn=loaded, js=js, outputs=[hf_token, hf_login, hf_logout, hf_account])
599
+ """
600
+ def loaded():
601
+ global current_account
602
+ current_account = account_hf()
603
+ print(f"current_account={current_account}")
604
+ if current_account != None:
605
+ return gr.update(value=current_account["token"]), gr.update(visible=False), gr.update(visible=True), gr.update(value=current_account["account"], visible=True)
606
+ else:
607
+ return gr.update(value=""), gr.update(visible=True), gr.update(visible=False), gr.update(value="", visible=False)
608
+
609
+ def update_sample(concept_sentence):
610
+ return gr.update(value=concept_sentence)
611
+
612
+ def refresh_publish_tab():
613
+ loras = get_loras()
614
+ return gr.Dropdown(label="Trained LoRAs", choices=loras)
615
+
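+ # Introspect the kohya argparse parser and build one Gradio control per flag not already covered by the basic UI (this populates the Advanced accordion).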
616
+ def init_advanced():
617
+ # if basic_args
618
+ basic_args = {
619
+ 'pretrained_model_name_or_path',
620
+ 'clip_l',
621
+ 't5xxl',
622
+ 'ae',
623
+ 'cache_latents_to_disk',
624
+ 'save_model_as',
625
+ 'sdpa',
626
+ 'persistent_data_loader_workers',
627
+ 'max_data_loader_n_workers',
628
+ 'seed',
629
+ 'gradient_checkpointing',
630
+ 'mixed_precision',
631
+ 'save_precision',
632
+ 'network_module',
633
+ 'network_dim',
634
+ 'learning_rate',
635
+ 'cache_text_encoder_outputs',
636
+ 'cache_text_encoder_outputs_to_disk',
637
+ 'fp8_base',
638
+ 'highvram',
639
+ 'max_train_epochs',
640
+ 'save_every_n_epochs',
641
+ 'dataset_config',
642
+ 'output_dir',
643
+ 'output_name',
644
+ 'timestep_sampling',
645
+ 'discrete_flow_shift',
646
+ 'model_prediction_type',
647
+ 'guidance_scale',
648
+ 'loss_type',
649
+ 'optimizer_type',
650
+ 'optimizer_args',
651
+ 'lr_scheduler',
652
+ 'sample_prompts',
653
+ 'sample_every_n_steps',
654
+ 'max_grad_norm',
655
+ 'split_mode',
656
+ 'network_args'
657
+ }
658
+
659
+ # generate a UI config
660
+ # if not in basic_args, create a simple form
661
+ parser = train_network.setup_parser()
662
+ flux_train_utils.add_flux_train_arguments(parser)
663
+ args_info = {}
664
+ for action in parser._actions:
665
+ if action.dest != 'help': # Skip the default help argument
666
+ # if the dest is included in basic_args
667
+ args_info[action.dest] = {
668
+ "action": action.option_strings, # Option strings like '--use_8bit_adam'
669
+ "type": action.type, # Type of the argument
670
+ "help": action.help, # Help message
671
+ "default": action.default, # Default value, if any
672
+ "required": action.required # Whether the argument is required
673
+ }
674
+ temp = []
675
+ for key in args_info:
676
+ temp.append({ 'key': key, 'action': args_info[key] })
677
+ temp.sort(key=lambda x: x['key'])
678
+ advanced_component_ids = []
679
+ advanced_components = []
680
+ for item in temp:
681
+ key = item['key']
682
+ action = item['action']
683
+ if key in basic_args:
684
+ print("")
685
+ else:
686
+ action_type = str(action['type'])
687
+ component = None
688
+ with gr.Column(min_width=300):
689
+ if action_type == "None":
690
+ # radio
691
+ component = gr.Checkbox()
692
+ # elif action_type == "<class 'str'>":
693
+ # component = gr.Textbox()
694
+ # elif action_type == "<class 'int'>":
695
+ # component = gr.Number(precision=0)
696
+ # elif action_type == "<class 'float'>":
697
+ # component = gr.Number()
698
+ # elif "int_or_float" in action_type:
699
+ # component = gr.Number()
700
+ else:
701
+ component = gr.Textbox(value="")
702
+ if component != None:
703
+ component.interactive = True
704
+ component.elem_id = action['action'][0]
705
+ component.label = component.elem_id
706
+ component.elem_classes = ["advanced"]
707
+ if action['help'] != None:
708
+ component.info = action['help']
709
+ advanced_components.append(component)
710
+ advanced_component_ids.append(component.elem_id)
711
+ return advanced_components, advanced_component_ids
712
+
713
+
714
+ theme = gr.themes.Monochrome(
715
+ text_size=gr.themes.Size(lg="18px", md="15px", sm="13px", xl="22px", xs="12px", xxl="24px", xxs="9px"),
716
+ font=[gr.themes.GoogleFont("Source Sans Pro"), "ui-sans-serif", "system-ui", "sans-serif"],
717
+ )
718
+ css = """
719
+ @keyframes rotate {
720
+ 0% {
721
+ transform: rotate(0deg);
722
+ }
723
+ 100% {
724
+ transform: rotate(360deg);
725
+ }
726
+ }
727
+ #advanced_options .advanced:nth-child(even) { background: rgba(0,0,100,0.04) !important; }
728
+ h1{font-family: georgia; font-style: italic; font-weight: bold; font-size: 30px; letter-spacing: -1px;}
729
+ h3{margin-top: 0}
730
+ .tabitem{border: 0px}
731
+ .group_padding{}
732
+ nav{position: fixed; top: 0; left: 0; right: 0; z-index: 1000; text-align: center; padding: 10px; box-sizing: border-box; display: flex; align-items: center; backdrop-filter: blur(10px); }
733
+ nav button { background: none; color: firebrick; font-weight: bold; border: 2px solid firebrick; padding: 5px 10px; border-radius: 5px; font-size: 14px; }
734
+ nav img { height: 40px; width: 40px; border-radius: 40px; }
735
+ nav img.rotate { animation: rotate 2s linear infinite; }
736
+ .flexible { flex-grow: 1; }
737
+ .tast-details { margin: 10px 0 !important; }
738
+ .toast-wrap { bottom: var(--size-4) !important; top: auto !important; border: none !important; backdrop-filter: blur(10px); }
739
+ .toast-title, .toast-text, .toast-icon, .toast-close { color: black !important; font-size: 14px; }
740
+ .toast-body { border: none !important; }
741
+ #terminal { box-shadow: none !important; margin-bottom: 25px; background: rgba(0,0,0,0.03); }
742
+ #terminal .generating { border: none !important; }
743
+ #terminal label { position: absolute !important; }
744
+ .tabs { margin-top: 50px; }
745
+ .hidden { display: none !important; }
746
+ .codemirror-wrapper .cm-line { font-size: 12px !important; }
747
+ label { font-weight: bold !important; }
748
+ """
749
+
750
+ js = """
751
+ function() {
752
+ let autoscroll = document.querySelector("#autoscroll")
753
+ if (window.iidxx) {
754
+ window.clearInterval(window.iidxx);
755
+ }
756
+ window.iidxx = window.setInterval(function() {
757
+ let text=document.querySelector(".codemirror-wrapper .cm-line").innerText.trim()
758
+ let img = document.querySelector("#logo")
759
+ if (text.length > 0) {
760
+ autoscroll.classList.remove("hidden")
761
+ if (autoscroll.classList.contains("on")) {
762
+ autoscroll.textContent = "Autoscroll ON"
763
+ window.scrollTo(0, document.body.scrollHeight, { behavior: "smooth" });
764
+ img.classList.add("rotate")
765
+ } else {
766
+ autoscroll.textContent = "Autoscroll OFF"
767
+ img.classList.remove("rotate")
768
+ }
769
+ }
770
+ }, 500);
771
+ console.log("autoscroll", autoscroll)
772
+ autoscroll.addEventListener("click", (e) => {
773
+ autoscroll.classList.toggle("on")
774
+ })
775
+ function debounce(fn, delay) {
776
+ let timeoutId;
777
+ return function(...args) {
778
+ clearTimeout(timeoutId);
779
+ timeoutId = setTimeout(() => fn(...args), delay);
780
+ };
781
+ }
782
+
783
+ function handleClick() {
784
+ console.log("refresh")
785
+ document.querySelector("#refresh").click();
786
+ }
787
+ const debouncedClick = debounce(handleClick, 1000);
788
+ document.addEventListener("input", debouncedClick);
789
+
790
+ }
791
+ """
792
+
793
+ current_account = account_hf()
794
+ print(f"current_account={current_account}")
795
+
796
+ with gr.Blocks(elem_id="app", theme=theme, css=css, fill_width=True) as demo:
797
+ with gr.Tabs() as tabs:
798
+ with gr.TabItem("Gym"):
799
+ output_components = []
800
+ with gr.Row():
801
+ gr.HTML("""<nav>
802
+ <img id='logo' src='/file=icon.png' width='80' height='80'>
803
+ <div class='flexible'></div>
804
+ <button id='autoscroll' class='on hidden'></button>
805
+ </nav>
806
+ """)
807
+ with gr.Row(elem_id='container'):
808
+ with gr.Column():
809
+ gr.Markdown(
810
+ """# Step 1. LoRA Info
811
+ <p style="margin-top:0">Configure your LoRA train settings.</p>
812
+ """, elem_classes="group_padding")
813
+ lora_name = gr.Textbox(
814
+ label="The name of your LoRA",
815
+ info="This has to be a unique name",
816
+ placeholder="e.g.: Persian Miniature Painting style, Cat Toy",
817
+ )
818
+ concept_sentence = gr.Textbox(
819
+ elem_id="--concept_sentence",
820
+ label="Trigger word/sentence",
821
+ info="Trigger word or sentence to be used",
822
+ placeholder="uncommon word like p3rs0n or trtcrd, or sentence like 'in the style of CNSTLL'",
823
+ interactive=True,
824
+ )
825
+ vram = gr.Radio(["20G", "16G", "12G" ], value="20G", label="VRAM", interactive=True)
826
+ num_repeats = gr.Number(value=10, precision=0, label="Repeat trains per image", interactive=True)
827
+ max_train_epochs = gr.Number(label="Max Train Epochs", value=16, interactive=True)
828
+ total_steps = gr.Number(0, interactive=False, label="Expected training steps")
829
+ sample_prompts = gr.Textbox("", lines=5, label="Sample Image Prompts (Separate with new lines)", interactive=True)
830
+ sample_every_n_steps = gr.Number(0, precision=0, label="Sample Image Every N Steps", interactive=True)
831
+ resolution = gr.Number(value=512, precision=0, label="Resize dataset images")
832
+ with gr.Column():
833
+ gr.Markdown(
834
+ """# Step 2. Dataset
835
+ <p style="margin-top:0">Make sure the captions include the trigger word.</p>
836
+ """, elem_classes="group_padding")
837
+ with gr.Group():
838
+ images = gr.File(
839
+ file_types=["image", ".txt"],
840
+ label="Upload your images",
841
+ file_count="multiple",
842
+ interactive=True,
843
+ visible=True,
844
+ scale=1,
845
+ )
846
+ with gr.Group(visible=False) as captioning_area:
847
+ do_captioning = gr.Button("Add AI captions with Florence-2")
848
+ output_components.append(captioning_area)
849
+ #output_components = [captioning_area]
850
+ caption_list = []
851
+ for i in range(1, MAX_IMAGES + 1):
852
+ locals()[f"captioning_row_{i}"] = gr.Row(visible=False)
853
+ with locals()[f"captioning_row_{i}"]:
854
+ locals()[f"image_{i}"] = gr.Image(
855
+ type="filepath",
856
+ width=111,
857
+ height=111,
858
+ min_width=111,
859
+ interactive=False,
860
+ scale=2,
861
+ show_label=False,
862
+ show_share_button=False,
863
+ show_download_button=False,
864
+ )
865
+ locals()[f"caption_{i}"] = gr.Textbox(
866
+ label=f"Caption {i}", scale=15, interactive=True
867
+ )
868
+
869
+ output_components.append(locals()[f"captioning_row_{i}"])
870
+ output_components.append(locals()[f"image_{i}"])
871
+ output_components.append(locals()[f"caption_{i}"])
872
+ caption_list.append(locals()[f"caption_{i}"])
873
+ with gr.Column():
874
+ gr.Markdown(
875
+ """# Step 3. Train
876
+ <p style="margin-top:0">Press start to start training.</p>
877
+ """, elem_classes="group_padding")
878
+ refresh = gr.Button("Refresh", elem_id="refresh", visible=False)
879
+ start = gr.Button("Start training", visible=False)
880
+ output_components.append(start)
881
+ train_script = gr.Textbox(label="Train script", max_lines=100, interactive=True)
882
+ train_config = gr.Textbox(label="Train config", max_lines=100, interactive=True)
883
+ with gr.Accordion("Advanced options", elem_id='advanced_options', open=False):
884
+ with gr.Row():
885
+ with gr.Column(min_width=300):
886
+ seed = gr.Number(label="--seed", info="Seed", value=42, interactive=True)
887
+ with gr.Column(min_width=300):
888
+ workers = gr.Number(label="--max_data_loader_n_workers", info="Number of Workers", value=2, interactive=True)
889
+ with gr.Column(min_width=300):
890
+ learning_rate = gr.Textbox(label="--learning_rate", info="Learning Rate", value="8e-4", interactive=True)
891
+ with gr.Column(min_width=300):
892
+ save_every_n_epochs = gr.Number(label="--save_every_n_epochs", info="Save every N epochs", value=4, interactive=True)
893
+ with gr.Column(min_width=300):
894
+ guidance_scale = gr.Number(label="--guidance_scale", info="Guidance Scale", value=1.0, interactive=True)
895
+ with gr.Column(min_width=300):
896
+ timestep_sampling = gr.Textbox(label="--timestep_sampling", info="Timestep Sampling", value="shift", interactive=True)
897
+ with gr.Column(min_width=300):
898
+ network_dim = gr.Number(label="--network_dim", info="LoRA Rank", value=4, minimum=4, maximum=128, step=4, interactive=True)
899
+ advanced_components, advanced_component_ids = init_advanced()
900
+ with gr.Row():
901
+ terminal = LogsView(label="Train log", elem_id="terminal")
902
+ with gr.Row():
903
+ gallery = gr.Gallery(get_samples, inputs=[lora_name], label="Samples", every=10, columns=6)
904
+
905
+ with gr.TabItem("Publish") as publish_tab:
906
+ hf_token = gr.Textbox(label="Huggingface Token")
907
+ hf_login = gr.Button("Login")
908
+ hf_logout = gr.Button("Logout")
909
+ with gr.Row() as row:
910
+ gr.Markdown("**LoRA**")
911
+ gr.Markdown("**Upload**")
912
+ loras = get_loras()
913
+ with gr.Row():
914
+ lora_rows = refresh_publish_tab()
915
+ with gr.Column():
916
+ with gr.Row():
917
+ repo_owner = gr.Textbox(label="Account", interactive=False)
918
+ repo_name = gr.Textbox(label="Repository Name")
919
+ repo_visibility = gr.Textbox(label="Repository Visibility ('public' or 'private')", value="public")
920
+ upload_button = gr.Button("Upload to HuggingFace")
921
+ upload_button.click(
922
+ fn=upload_hf,
923
+ inputs=[
924
+ lora_rows,
925
+ repo_owner,
926
+ repo_name,
927
+ repo_visibility,
928
+ hf_token,
929
+ ]
930
+ )
931
+ hf_login.click(fn=login_hf, inputs=[hf_token], outputs=[hf_token, hf_login, hf_logout, repo_owner])
932
+ hf_logout.click(fn=logout_hf, outputs=[hf_token, hf_login, hf_logout, repo_owner])
933
+
934
+
935
+ publish_tab.select(refresh_publish_tab, outputs=lora_rows)
936
+ lora_rows.select(fn=set_repo, inputs=[lora_rows], outputs=[repo_name])
937
+
938
+ dataset_folder = gr.State()
939
+
940
+ listeners = [
941
+ lora_name,
942
+ resolution,
943
+ seed,
944
+ workers,
945
+ concept_sentence,
946
+ learning_rate,
947
+ network_dim,
948
+ max_train_epochs,
949
+ save_every_n_epochs,
950
+ timestep_sampling,
951
+ guidance_scale,
952
+ vram,
953
+ num_repeats,
954
+ sample_prompts,
955
+ sample_every_n_steps,
956
+ *advanced_components
957
+ ]
958
+ advanced_component_ids = [x.elem_id for x in advanced_components]
959
+ original_advanced_component_values = [comp.value for comp in advanced_components]
960
+ images.upload(
961
+ load_captioning,
962
+ inputs=[images, concept_sentence],
963
+ outputs=output_components
964
+ )
965
+ images.delete(
966
+ load_captioning,
967
+ inputs=[images, concept_sentence],
968
+ outputs=output_components
969
+ )
970
+ images.clear(
971
+ hide_captioning,
972
+ outputs=[captioning_area, start]
973
+ )
974
+ max_train_epochs.change(
975
+ fn=update_total_steps,
976
+ inputs=[max_train_epochs, num_repeats, images],
977
+ outputs=[total_steps]
978
+ )
979
+ num_repeats.change(
980
+ fn=update_total_steps,
981
+ inputs=[max_train_epochs, num_repeats, images],
982
+ outputs=[total_steps]
983
+ )
984
+ images.upload(
985
+ fn=update_total_steps,
986
+ inputs=[max_train_epochs, num_repeats, images],
987
+ outputs=[total_steps]
988
+ )
989
+ images.delete(
990
+ fn=update_total_steps,
991
+ inputs=[max_train_epochs, num_repeats, images],
992
+ outputs=[total_steps]
993
+ )
994
+ images.clear(
995
+ fn=update_total_steps,
996
+ inputs=[max_train_epochs, num_repeats, images],
997
+ outputs=[total_steps]
998
+ )
999
+ concept_sentence.change(fn=update_sample, inputs=[concept_sentence], outputs=sample_prompts)
1000
+ start.click(fn=create_dataset, inputs=[dataset_folder, resolution, images] + caption_list, outputs=dataset_folder).then(
1001
+ fn=start_training,
1002
+ inputs=[
1003
+ lora_name,
1004
+ train_script,
1005
+ train_config,
1006
+ sample_prompts,
1007
+ ],
1008
+ outputs=terminal,
1009
+ )
1010
+ do_captioning.click(fn=run_captioning, inputs=[images, concept_sentence] + caption_list, outputs=caption_list)
1011
+ demo.load(fn=loaded, js=js, outputs=[hf_token, hf_login, hf_logout, repo_owner])
1012
+ refresh.click(update, inputs=listeners, outputs=[train_script, train_config, dataset_folder])
1013
+ if __name__ == "__main__":
1014
+ cwd = os.path.dirname(os.path.abspath(__file__))
1015
+ demo.launch(debug=True, show_error=True, allowed_paths=[cwd])
flags.png ADDED

Git LFS Details

  • SHA256: 8ee57480d797dc29db699b3c151a40985bb54f8392e30136fab938b679216176
  • Pointer size: 130 Bytes
  • Size of remote file: 46.2 kB
flow.gif ADDED

Git LFS Details

  • SHA256: e502e5bcbfd25f5d7bad10e0b57a88c8f3b24006792d3a273d7bd964634a8fd9
  • Pointer size: 133 Bytes
  • Size of remote file: 11.3 MB
icon.png ADDED

Git LFS Details

  • SHA256: 9b8debf252a184bfffb071ea7a5b11e932cfe2e14aae7632abd186133e51f545
  • Pointer size: 129 Bytes
  • Size of remote file: 6.02 kB
install.js ADDED
@@ -0,0 +1,96 @@
1
+ module.exports = {
2
+ run: [
3
+ {
4
+ method: "shell.run",
5
+ params: {
6
+ venv: "env",
7
+ message: [
8
+ "git config --global --add safe.directory '*'",
9
+ "git clone -b sd3 https://github.com/kohya-ss/sd-scripts"
10
+ ]
11
+ }
12
+ },
13
+ {
14
+ method: "shell.run",
15
+ params: {
16
+ path: "sd-scripts",
17
+ venv: "../env",
18
+ message: [
19
+ "pip install -r requirements.txt",
20
+ ]
21
+ }
22
+ },
23
+ {
24
+ method: "shell.run",
25
+ params: {
26
+ venv: "env",
27
+ message: [
28
+ "pip uninstall -y diffusers[torch] torch torchaudio torchvision",
29
+ "pip install -r requirements.txt",
30
+ ]
31
+ }
32
+ },
33
+ {
34
+ method: "script.start",
35
+ params: {
36
+ uri: "torch.js",
37
+ params: {
38
+ venv: "env",
39
+ // xformers: true // uncomment this line if your project requires xformers
40
+ }
41
+ }
42
+ },
43
+ {
44
+ method: "fs.link",
45
+ params: {
46
+ drive: {
47
+ vae: "models/vae",
48
+ clip: "models/clip",
49
+ unet: "models/unet",
50
+ loras: "outputs",
51
+ },
52
+ peers: [
53
+ "https://github.com/pinokiofactory/stable-diffusion-webui-forge.git",
54
+ "https://github.com/pinokiofactory/comfy.git",
55
+ "https://github.com/cocktailpeanutlabs/comfyui.git",
56
+ "https://github.com/cocktailpeanutlabs/fooocus.git",
57
+ "https://github.com/cocktailpeanutlabs/automatic1111.git",
58
+ ]
59
+ }
60
+ },
61
+ {
62
+ method: "fs.download",
63
+ params: {
64
+ uri: [
65
+ "https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors?download=true",
66
+ "https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors?download=true",
67
+ ],
68
+ dir: "models/clip"
69
+ }
70
+ },
71
+ {
72
+ method: "fs.download",
73
+ params: {
74
+ uri: [
75
+ "https://huggingface.co/cocktailpeanut/xulf-dev/resolve/main/ae.sft?download=true",
76
+ ],
77
+ dir: "models/vae"
78
+ }
79
+ },
80
+ {
81
+ method: "fs.download",
82
+ params: {
83
+ uri: [
84
+ "https://huggingface.co/cocktailpeanut/xulf-dev/resolve/main/flux1-dev.sft?download=true",
85
+ ],
86
+ dir: "models/unet"
87
+ }
88
+ },
89
+ {
90
+ method: "fs.link",
91
+ params: {
92
+ venv: "env"
93
+ }
94
+ }
95
+ ]
96
+ }
models/.gitkeep ADDED
File without changes
models/clip/.gitkeep ADDED
File without changes
models/unet/.gitkeep ADDED
File without changes
models/vae/.gitkeep ADDED
File without changes
outputs/.gitkeep ADDED
File without changes
pinokio.js ADDED
@@ -0,0 +1,95 @@
1
+ const path = require('path')
2
+ module.exports = {
3
+ version: "2.1",
4
+ title: "fluxgym",
5
+ description: "[NVIDIA Only] Dead simple web UI for training FLUX LoRA with LOW VRAM support (From 12GB)",
6
+ icon: "icon.png",
7
+ menu: async (kernel, info) => {
8
+ let installed = info.exists("env")
9
+ let running = {
10
+ install: info.running("install.js"),
11
+ start: info.running("start.js"),
12
+ update: info.running("update.js"),
13
+ reset: info.running("reset.js")
14
+ }
15
+ if (running.install) {
16
+ return [{
17
+ default: true,
18
+ icon: "fa-solid fa-plug",
19
+ text: "Installing",
20
+ href: "install.js",
21
+ }]
22
+ } else if (installed) {
23
+ if (running.start) {
24
+ let local = info.local("start.js")
25
+ if (local && local.url) {
26
+ return [{
27
+ default: true,
28
+ icon: "fa-solid fa-rocket",
29
+ text: "Open Web UI",
30
+ href: local.url,
31
+ }, {
32
+ icon: 'fa-solid fa-terminal',
33
+ text: "Terminal",
34
+ href: "start.js",
35
+ }, {
36
+ icon: "fa-solid fa-flask",
37
+ text: "Outputs",
38
+ href: "outputs?fs"
39
+ }]
40
+ } else {
41
+ return [{
42
+ default: true,
43
+ icon: 'fa-solid fa-terminal',
44
+ text: "Terminal",
45
+ href: "start.js",
46
+ }]
47
+ }
48
+ } else if (running.update) {
49
+ return [{
50
+ default: true,
51
+ icon: 'fa-solid fa-terminal',
52
+ text: "Updating",
53
+ href: "update.js",
54
+ }]
55
+ } else if (running.reset) {
56
+ return [{
57
+ default: true,
58
+ icon: 'fa-solid fa-terminal',
59
+ text: "Resetting",
60
+ href: "reset.js",
61
+ }]
62
+ } else {
63
+ return [{
64
+ default: true,
65
+ icon: "fa-solid fa-power-off",
66
+ text: "Start",
67
+ href: "start.js",
68
+ }, {
69
+ icon: "fa-solid fa-flask",
70
+ text: "Outputs",
71
+ href: "sd-scripts/fluxgym/outputs?fs"
72
+ }, {
73
+ icon: "fa-solid fa-plug",
74
+ text: "Update",
75
+ href: "update.js",
76
+ }, {
77
+ icon: "fa-solid fa-plug",
78
+ text: "Install",
79
+ href: "install.js",
80
+ }, {
81
+ icon: "fa-regular fa-circle-xmark",
82
+ text: "Reset",
83
+ href: "reset.js",
84
+ }]
85
+ }
86
+ } else {
87
+ return [{
88
+ default: true,
89
+ icon: "fa-solid fa-plug",
90
+ text: "Install",
91
+ href: "install.js",
92
+ }]
93
+ }
94
+ }
95
+ }
pinokio_meta.json ADDED
@@ -0,0 +1,38 @@
1
+ {
2
+ "posts": [
3
+ "https://x.com/cocktailpeanut/status/1835719701172756592",
4
+ "https://x.com/LikeToasters/status/1834258975384092858",
5
+ "https://x.com/cocktailpeanut/status/1834245329627009295",
6
+ "https://x.com/jkch0205/status/1834003420132614450",
7
+ "https://x.com/huwhitememes/status/1834074992209699132",
8
+ "https://x.com/GorillaRogueGam/status/1834148656791888139",
9
+ "https://x.com/cocktailpeanut/status/1833964839519068303",
10
+ "https://x.com/cocktailpeanut/status/1833935061907079521",
11
+ "https://x.com/cocktailpeanut/status/1833940728881242135",
12
+ "https://x.com/cocktailpeanut/status/1833881392482066638",
13
+ "https://x.com/Alone1Moon/status/1833348850662445369",
14
+ "https://x.com/_f_ai_9/status/1833485349995397167",
15
+ "https://x.com/intocryptoast/status/1833061082862412186",
16
+ "https://x.com/cocktailpeanut/status/1833888423716827321",
17
+ "https://x.com/cocktailpeanut/status/1833884852992516596",
18
+ "https://x.com/cocktailpeanut/status/1833885335077417046",
19
+ "https://x.com/NiwonArt/status/1833565746624139650",
20
+ "https://x.com/cocktailpeanut/status/1833884361986380117",
21
+ "https://x.com/NiwonArt/status/1833599399764889685",
22
+ "https://x.com/LikeToasters/status/1832934391217045913",
23
+ "https://x.com/cocktailpeanut/status/1832924887456817415",
24
+ "https://x.com/cocktailpeanut/status/1832927154536902897",
25
+ "https://x.com/YabaiHamster/status/1832697724690386992",
26
+ "https://x.com/cocktailpeanut/status/1832747889497366706",
27
+ "https://x.com/PhotogenicWeekE/status/1832720544959185202",
28
+ "https://x.com/zuzaritt/status/1832748542164652390",
29
+ "https://x.com/foxyy4i/status/1832764883710185880",
30
+ "https://x.com/waynedahlberg/status/1832226132999213095",
31
+ "https://x.com/PhotoGarrido/status/1832214644515041770",
32
+ "https://x.com/cocktailpeanut/status/1832787205774786710",
33
+ "https://x.com/cocktailpeanut/status/1832151307198541961",
34
+ "https://x.com/cocktailpeanut/status/1832145996014612735",
35
+ "https://x.com/cocktailpeanut/status/1832084951115972653",
36
+ "https://x.com/cocktailpeanut/status/1832091112086843684"
37
+ ]
38
+ }
publish_to_hf.png ADDED
Git LFS Details
  • SHA256: cac2aa25db8911b38ed7e084bbbafb226252e26935dbb107ee66b8cc626a95e6
  • Pointer size: 131 Bytes
  • Size of remote file: 418 kB
requirements.txt ADDED
@@ -0,0 +1,34 @@
+ safetensors
+ git+https://github.com/huggingface/diffusers.git
+ gradio_logsview@https://huggingface.co/spaces/cocktailpeanut/gradio_logsview/resolve/main/gradio_logsview-0.0.17-py3-none-any.whl
+ transformers
+ lycoris-lora==1.8.3
+ flatten_json
+ pyyaml
+ oyaml
+ tensorboard
+ kornia
+ invisible-watermark
+ einops
+ accelerate
+ toml
+ albumentations
+ pydantic
+ omegaconf
+ k-diffusion
+ open_clip_torch
+ timm
+ prodigyopt
+ controlnet_aux==0.0.7
+ python-dotenv
+ bitsandbytes
+ hf_transfer
+ lpips
+ pytorch_fid
+ optimum-quanto
+ sentencepiece
+ huggingface_hub
+ peft
+ gradio
+ python-slugify
+ imagesize
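requirements.txt pins only a handful of versions (`lycoris-lora==1.8.3`, `controlnet_aux==0.0.7`), pulls `diffusers` straight from GitHub, and installs the Gradio log viewer from a prebuilt `gradio_logsview` wheel; everything else floats to the latest release. These packages go into the shared `env` virtualenv. A minimal sketch of that install step, reusing the `shell.run` pattern from update.js, might look like this (the step itself is illustrative, not copied from install.js):

```js
// Sketch: install the app requirements into the shared virtualenv,
// mirroring the shell.run/venv pattern used in update.js.
{
  method: "shell.run",
  params: {
    venv: "env",                          // virtualenv created during install
    message: [
      "pip install -r requirements.txt",  // this requirements.txt
    ]
  }
}
```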
reset.js ADDED
@@ -0,0 +1,13 @@
+ module.exports = {
+   run: [{
+     method: "fs.rm",
+     params: {
+       path: "sd-scripts"
+     }
+   }, {
+     method: "fs.rm",
+     params: {
+       path: "env"
+     }
+   }]
+ }
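reset.js removes only the cloned `sd-scripts` checkout and the `env` virtualenv, so the next Install rebuilds both from scratch while downloaded checkpoints under `models/` and training results under `outputs/` stay in place. If a full reset that also clears the multi-gigabyte model downloads were wanted, a step along these lines could be appended (hypothetical, not part of this commit):

```js
// Hypothetical extra reset step (not in the repo): also delete the downloaded
// UNet checkpoint; re-running install.js would download it again.
{
  method: "fs.rm",
  params: {
    path: "models/unet"
  }
}
```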
sample.png ADDED
Git LFS Details
  • SHA256: 7a1670e3ce2a35d0cffec798ea04f4216b7d4d766e1e785ef23e94f6d2d22ff1
  • Pointer size: 132 Bytes
  • Size of remote file: 1.29 MB
sample_fields.png ADDED
Git LFS Details
  • SHA256: 01f815f6c04ece9692d97b8a1a06505cfc0dde70d07dbf26f0353779f04b5bd4
  • Pointer size: 130 Bytes
  • Size of remote file: 86.3 kB
screenshot.png ADDED
Git LFS Details
  • SHA256: cde964e4a233bf3ad7219ac91058f805ae0cbc0f853b7f6aa552af5b6f8c5c8a
  • Pointer size: 131 Bytes
  • Size of remote file: 243 kB
seed.gif ADDED
Git LFS Details
  • SHA256: 271dbf11ef0c709558bb570c4c2b7765001356eefcbcc9cf0f0713262a91937f
  • Pointer size: 132 Bytes
  • Size of remote file: 3.62 MB
start.js ADDED
@@ -0,0 +1,34 @@
+ module.exports = {
+   daemon: true,
+   run: [
+     {
+       method: "shell.run",
+       params: {
+         venv: "env", // Edit this to customize the venv folder path
+         env: { }, // Edit this to customize environment variables (see documentation)
+         message: [
+           "python app.py", // Edit with your custom commands
+         ],
+         on: [{
+           // The regular expression pattern to monitor.
+           // When this pattern occurs in the shell terminal, the shell will return,
+           // and the script will go onto the next step.
+           "event": "/http:\/\/\\S+/",
+
+           // "done": true will move to the next step while keeping the shell alive.
+           // "kill": true will move to the next step after killing the shell.
+           "done": true
+         }]
+       }
+     },
+     {
+       // This step sets the local variable 'url'.
+       // This local variable will be used in pinokio.js to display the "Open WebUI" tab when the value is set.
+       method: "local.set",
+       params: {
+         // the input.event is the regular expression match object from the previous step
+         url: "{{input.event[0]}}"
+       }
+     }
+   ]
+ }
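start.js launches the Gradio app inside the `env` virtualenv and keeps it alive (`daemon: true`). The `on` handler watches terminal output for the first match of `/http:\/\/\S+/`, which in practice is the local URL Gradio prints once the server is up, and `done: true` moves on without killing the shell. The next step stores that match as the local variable `url`, which pinokio.js reads through `info.local("start.js").url` to show the "Open Web UI" button. A standalone sketch of the same capture in plain Node follows; the log line and port are illustrative assumptions:

```js
// Standalone illustration of the URL capture done by the "on" handler above.
// The log line mimics a typical Gradio startup message; the port is an assumption.
const logLine = "Running on local URL:  http://127.0.0.1:7860"
const match = logLine.match(/http:\/\/\S+/)  // same pattern as "event" in start.js
console.log(match[0])                        // -> "http://127.0.0.1:7860"
// start.js saves this value as `url` via local.set,
// and pinokio.js turns it into the "Open Web UI" link.
```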
torch.js ADDED
@@ -0,0 +1,75 @@
+ module.exports = {
+   run: [
+     // windows nvidia
+     {
+       "when": "{{platform === 'win32' && gpu === 'nvidia'}}",
+       "method": "shell.run",
+       "params": {
+         "venv": "{{args && args.venv ? args.venv : null}}",
+         "path": "{{args && args.path ? args.path : '.'}}",
+         "message": "pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121"
+
+       }
+     },
+     // windows amd
+     {
+       "when": "{{platform === 'win32' && gpu === 'amd'}}",
+       "method": "shell.run",
+       "params": {
+         "venv": "{{args && args.venv ? args.venv : null}}",
+         "path": "{{args && args.path ? args.path : '.'}}",
+         "message": "pip install torch-directml torchaudio torchvision"
+       }
+     },
+     // windows cpu
+     {
+       "when": "{{platform === 'win32' && (gpu !== 'nvidia' && gpu !== 'amd')}}",
+       "method": "shell.run",
+       "params": {
+         "venv": "{{args && args.venv ? args.venv : null}}",
+         "path": "{{args && args.path ? args.path : '.'}}",
+         "message": "pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu"
+       }
+     },
+     // mac
+     {
+       "when": "{{platform === 'darwin'}}",
+       "method": "shell.run",
+       "params": {
+         "venv": "{{args && args.venv ? args.venv : null}}",
+         "path": "{{args && args.path ? args.path : '.'}}",
+         "message": "pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu"
+       }
+     },
+     // linux nvidia
+     {
+       "when": "{{platform === 'linux' && gpu === 'nvidia'}}",
+       "method": "shell.run",
+       "params": {
+         "venv": "{{args && args.venv ? args.venv : null}}",
+         "path": "{{args && args.path ? args.path : '.'}}",
+         "message": "pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121"
+       }
+     },
+     // linux rocm (amd)
+     {
+       "when": "{{platform === 'linux' && gpu === 'amd'}}",
+       "method": "shell.run",
+       "params": {
+         "venv": "{{args && args.venv ? args.venv : null}}",
+         "path": "{{args && args.path ? args.path : '.'}}",
+         "message": "pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.1"
+       }
+     },
+     // linux cpu
+     {
+       "when": "{{platform === 'linux' && (gpu !== 'amd' && gpu !=='nvidia')}}",
+       "method": "shell.run",
+       "params": {
+         "venv": "{{args && args.venv ? args.venv : null}}",
+         "path": "{{args && args.path ? args.path : '.'}}",
+         "message": "pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu"
+       }
+     }
+   ]
+ }
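torch.js is a shared helper that installs a PyTorch build matched to the platform and GPU: nightly cu121 wheels for NVIDIA on Windows and Linux, `torch-directml` on Windows with AMD, ROCm 6.1 nightlies on Linux with AMD, and CPU nightlies everywhere else, including macOS. The target virtualenv and working directory come in through `args`, which is how update.js (below) reuses it via `script.start`. A minimal caller sketch follows; the `path` value is an assumption, since torch.js already defaults to `'.'`:

```js
// Sketch: invoking torch.js from another Pinokio script, mirroring the
// script.start step in update.js. args.venv/args.path feed the templates above.
{
  method: "script.start",
  params: {
    uri: "torch.js",
    params: {
      venv: "env",  // becomes args.venv inside torch.js
      path: "."     // becomes args.path (assumed; '.' is already the default)
    }
  }
}
```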
update.js ADDED
@@ -0,0 +1,46 @@
+ module.exports = {
+   run: [{
+     method: "shell.run",
+     params: {
+       message: "git pull"
+     }
+   }, {
+     method: "shell.run",
+     params: {
+       path: "sd-scripts",
+       message: "git pull"
+     }
+   }, {
+     method: "shell.run",
+     params: {
+       path: "sd-scripts",
+       venv: "../env",
+       message: [
+         "pip install -r requirements.txt",
+       ]
+     }
+   }, {
+     method: "shell.run",
+     params: {
+       venv: "env",
+       message: [
+         "pip uninstall -y diffusers[torch] torch torchaudio torchvision",
+         "pip install -r requirements.txt",
+       ]
+     }
+   }, {
+     method: "script.start",
+     params: {
+       uri: "torch.js",
+       params: {
+         venv: "env",
+         // xformers: true // uncomment this line if your project requires xformers
+       }
+     }
+   }, {
+     method: "fs.link",
+     params: {
+       venv: "env"
+     }
+   }]
+ }
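update.js pulls the latest fluxgym and sd-scripts commits, reinstalls the sd-scripts requirements into the shared `../env`, uninstalls `diffusers[torch]` and the torch stack before reinstalling the app's own requirements, re-runs torch.js so the platform-specific PyTorch wheels come back, and finally re-links the virtualenv. To confirm what an update produced, a verification step could be appended along these lines (hypothetical, not part of this commit):

```js
// Hypothetical post-update check (not in the repo): print the torch build that
// ended up in the venv, so the CUDA/ROCm/CPU wheel choice is easy to verify.
{
  method: "shell.run",
  params: {
    venv: "env",
    message: [
      "python -c \"import torch; print(torch.__version__, torch.version.cuda)\""
    ]
  }
}
```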