hollowstrawberry committed on
Commit aa52ebb
1 Parent(s): ed9e8d5

Update README.md

Files changed (1)
  1. README.md +18 -15
README.md CHANGED
@@ -96,7 +96,7 @@ To run Stable Diffusion on your own computer you'll need at least 16 GB of RAM a
96
  Before or after generating your first few images, you will want to take a look at the information below to improve your experience and results.
97
  The top of your page should look similar to this:
98
 
99
- <img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/top.png"/>
100
 
101
  Here you can select your model and VAE. We will go over what these are and how you can get more of them. The Colab has additional settings here too; you should ignore them for now.
102
 
@@ -158,7 +158,8 @@ Here you can select your model and VAE. We will go over what these are and how y
158
  1. **Generation parameters** <a name="gen"></a>[▲](#index)
159
 
160
  The rest of the parameters in the starting page will look something like this:
161
- <img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/parameters.png"/>
 
162
 
163
  * **Sampling method:** This dictates how your image is formulated, and each method produces different results. The default of `Euler a` is almost always the best. `DPM++ 2M Karras` and `DPM++ SDE Karras` also give very good results.
164
  * **Sampling steps:** These are "calculated" beforehand, so more steps don't always mean more detail. I always go with 30; anywhere from 20 to 50 can give good results.
@@ -182,7 +183,7 @@ Here you can select your model and VAE. We will go over what these are and how y
182
  # Extensions <a name="extensions"></a>[▲](#index)
183
 
184
  *Stable Diffusion WebUI* supports extensions that add extra functionality and quality-of-life improvements. To install one, go to the **Extensions** tab, then **Install from URL**, and paste a link found here or elsewhere. Click *Install* and wait for it to finish, then go to **Installed** and click *Apply and restart UI*.
185
- <img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/extensions.png"/>
186
 
187
  Here are some useful extensions. Most of these come preinstalled in the Colab, and I highly recommend you manually add the first two if you're running locally:
188
  * [Image Browser (fixed fork)](https://github.com/aka7774/sd_images_browser) - This lets you browse your past generated images very efficiently, and send their prompts and parameters directly back to txt2img, img2img, etc.
@@ -203,7 +204,7 @@ Loras can represent a character, an artstyle, poses, clothes, or even a human fa
203
 
204
  Place your Lora files in the `stable-diffusion-webui/models/Lora` folder, or paste the direct download link into the `custom_urls` text box in the Colab. Then, look for the 🎴 *Show extra networks* button below the big orange Generate button; it will open a new section. Click on the Lora tab and press the **Refresh** button, and your Loras should appear. When you click a Lora in that menu it gets added to your prompt, looking like this: `<lora:filename:1>`. The start is always the same. The filename will be the exact filename in your system without the `.safetensors` extension. Finally, the number is the weight, like we saw in [Prompts ▲](#prompt). Most Loras work between 0.5 and 1 weight, and values that are too high might "fry" your image, especially if using multiple Loras at the same time.
205
 
206
- <img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/extranetworks.png"/>
207
 
208
  An example of a Lora is [Thicker Lines Anime Style](https://civitai.com/models/13910/thicker-lines-anime-style-lora-mix), which is perfect if you want your images to look more like traditional anime.
209
 
@@ -233,7 +234,7 @@ I will demonstrate how ControlNet may be used. For this I chose a popular image
233
 
234
  First, scroll down in the txt2img page and click on ControlNet to open the menu. Then, click *Enable* and pick a matching *preprocessor* and *model*. To start with, I chose Canny for both. Finally, I upload my sample image. Make sure not to click on the uploaded image or it will open a drawing canvas. We can ignore the other settings.
235
 
236
- <img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/controlnet.png"/>
237
 
238
  * **Canny**
239
 
@@ -241,9 +242,9 @@ First, you must scroll down in the txt2img page and click on ControlNet to open
241
 
242
  <details>
243
  <summary>Canny example, click to open</summary>
244
- <br>
245
- <img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/canny1.png"/>
246
- <img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/canny2.png"/>
247
  </details>
248
 
249
  * **Depth**
@@ -252,9 +253,9 @@ First, you must scroll down in the txt2img page and click on ControlNet to open
252
 
253
  <details>
254
  <summary>Depth example, click to open</summary>
255
- <br>
256
- <img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/depth1.png"/>
257
- <img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/depth2.png"/>
258
  </details>
259
 
260
  * **Openpose**
@@ -263,9 +264,9 @@ First, you must scroll down in the txt2img page and click on ControlNet to open
263
 
264
  <details>
265
  <summary>Openpose example, click to open</summary>
266
- <br>
267
- <img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/openpose1.png"/>
268
- <img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/openpose2.png"/>
269
  </details>
270
 
271
  You will notice that there are two results for each method. The first is an intermediate step called the *preprocessed image*, which is then used to produce the final image. You can supply the preprocessed image yourself, in which case you should set the preprocessor to *None*. This is extremely powerful when combined with external tools such as Blender.
@@ -278,8 +279,10 @@ You can also use ControlNet in img2img, in which the input image and sample imag
278
  I would also recommend the Scribble model, which lets you draw a crude sketch and turn it into a finished piece with the help of your prompt.
279
  There are also alternative **diff** versions of each ControlNet model, which produce slightly different results. You can [try them](https://civitai.com/models/9868/controlnet-pre-trained-difference-models) if you want, but I personally haven't.
280
 
 
 
281
  # Lora Training <a name="train"></a>[▲](#index)
282
 
283
- * **Tips for training character Loras** <a name="trainchars"></a>[▲](#index)
284
 
285
  Coming soon.
 
96
  Before or after generating your first few images, you will want to take a look at the information below to improve your experience and results.
97
  The top of your page should look similar to this:
98
 
99
+ ![Top](images/top.png)
100
 
101
  Here you can select your model and VAE. We will go over what these are and how you can get more of them. The Colab has additional settings here too; you should ignore them for now.
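
  If you're running locally, models and VAEs go in fixed folders inside the WebUI installation. Here is a sketch of the default layout (your install path may differ):

  ```
  stable-diffusion-webui/
  └── models/
      ├── Stable-diffusion/   # model checkpoints (.safetensors or .ckpt)
      ├── VAE/                # VAE files
      └── Lora/               # Lora files, covered later in this guide
  ```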
102
 
 
158
  1. **Generation parameters** <a name="gen"></a>[▲](#index)
159
 
160
  The rest of the parameters in the starting page will look something like this:
161
+
162
+ ![Parameters](images/parameters.png)
163
 
164
  * **Sampling method:** This dictates how your image is formulated, and each method produces different results. The default of `Euler a` is almost always the best. `DPM++ 2M Karras` and `DPM++ SDE Karras` also give very good results.
165
  * **Sampling steps:** These are "calculated" beforehand, so more steps don't always mean more detail. I always go with 30; anywhere from 20 to 50 can give good results.
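
  These same parameters can also be set programmatically. Below is a minimal sketch using the WebUI's optional built-in API, assuming you launched it locally with the `--api` flag; exact field names can vary between versions:

  ```python
  import base64
  import requests

  # Minimal txt2img request against a local WebUI started with the --api flag.
  payload = {
      "prompt": "masterpiece, best quality, 1girl, looking at viewer",
      "negative_prompt": "lowres, bad anatomy, worst quality",
      "sampler_name": "DPM++ 2M Karras",  # same options as the Sampling method dropdown
      "steps": 30,                        # Sampling steps
      "width": 512,
      "height": 512,
      "seed": -1,                         # -1 picks a random seed, like in the UI
  }

  response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
  response.raise_for_status()

  # Images come back as base64 strings; decode and save the first one.
  with open("output.png", "wb") as f:
      f.write(base64.b64decode(response.json()["images"][0]))
  ```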
 
183
  # Extensions <a name="extensions"></a>[▲](#index)
184
 
185
  *Stable Diffusion WebUI* supports extensions that add extra functionality and quality-of-life improvements. To install one, go to the **Extensions** tab, then **Install from URL**, and paste a link found here or elsewhere. Click *Install* and wait for it to finish, then go to **Installed** and click *Apply and restart UI*.
186
+ ![Extensions](images/extensions.png)
187
 
188
  Here are some useful extensions. Most of these come preinstalled in the Colab, and I highly recommend you manually add the first two if you're running locally:
189
  * [Image Browser (fixed fork)](https://github.com/aka7774/sd_images_browser) - This lets you browse your past generated images very efficiently, and send their prompts and parameters directly back to txt2img, img2img, etc.
 
204
 
205
  Place your Lora files in the `stable-diffusion-webui/models/Lora` folder, or paste the direct download link into the `custom_urls` text box in the Colab. Then, look for the 🎴 *Show extra networks* button below the big orange Generate button; it will open a new section. Click on the Lora tab and press the **Refresh** button, and your Loras should appear. When you click a Lora in that menu it gets added to your prompt, looking like this: `<lora:filename:1>`. The start is always the same. The filename will be the exact filename in your system without the `.safetensors` extension. Finally, the number is the weight, like we saw in [Prompts ▲](#prompt). Most Loras work between 0.5 and 1 weight, and values that are too high might "fry" your image, especially if using multiple Loras at the same time.
206
 
207
+ ![Extra Networks](images/extranetworks.png)
208
 
209
  An example of a Lora is [Thicker Lines Anime Style](https://civitai.com/models/13910/thicker-lines-anime-style-lora-mix), which is perfect if you want your images to look more like traditional anime.
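
  For instance, combining a Lora like that with a full prompt at 0.8 weight (using the `filename` placeholder from above) might look like this:

  ```
  masterpiece, best quality, 1girl, upper body, outdoors, <lora:filename:0.8>
  ```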
210
 
 
234
 
235
  First, scroll down in the txt2img page and click on ControlNet to open the menu. Then, click *Enable* and pick a matching *preprocessor* and *model*. To start with, I chose Canny for both. Finally, I upload my sample image. Make sure not to click on the uploaded image or it will open a drawing canvas. We can ignore the other settings.
236
 
237
+ ![Control Net](images/controlnet.png)
238
 
239
  * **Canny**
240
 
 
242
 
243
  <details>
244
  <summary>Canny example, click to open</summary>
245
+
246
+ ![Canny preprocessed image](images/canny1.png)
247
+ ![Canny output image](images/canny2.png)
248
  </details>
249
 
250
  * **Depth**
 
253
 
254
  <details>
255
  <summary>Depth example, click to open</summary>
256
+
257
+ ![Depth preprocessed image](images/depth1.png)
258
+ ![Depth output image](images/depth2.png)
259
  </details>
260
 
261
  * **Openpose**
 
264
 
265
  <details>
266
  <summary>Openpose example, click to open</summary>
267
+
268
+ ![Open Pose preprocessed image](images/openpose1.png)
269
+ ![Open Pose output image](images/openpose2.png)
270
  </details>
271
 
272
  You will notice that there are two results for each method. The first is an intermediate step called the *preprocessed image*, which is then used to produce the final image. You can supply the preprocessed image yourself, in which case you should set the preprocessor to *None*. This is extremely powerful when combined with external tools such as Blender.
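
  If you want to make the preprocessed image yourself, here is a minimal sketch using OpenCV (a stand-in for whatever external tool you prefer; the file names are just placeholders). You would then upload the result to ControlNet with the preprocessor set to *None*:

  ```python
  import cv2

  # Load the sample image and detect edges, similar to what the Canny preprocessor does.
  image = cv2.imread("sample.png")
  gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

  # The two thresholds control how many edges are kept; tweak them to taste.
  edges = cv2.Canny(gray, 100, 200)

  # Save the edge map, then upload it to ControlNet with the preprocessor set to None.
  cv2.imwrite("canny_map.png", edges)
  ```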
 
279
  I would also recommend the Scribble model, which lets you draw a crude sketch and turn it into a finished piece with the help of your prompt.
280
  There are also alternative **diff** versions of each ControlNet model, which produce slightly different results. You can [try them](https://civitai.com/models/9868/controlnet-pre-trained-difference-models) if you want, but I personally haven't.
281
 
282
+ &nbsp;
283
+
284
  # Lora Training <a name="train"></a>[▲](#index)
285
 
286
+ * **Character Loras** <a name="trainchars"></a>[▲](#index)
287
 
288
  Coming soon.