hollowstrawberry committed 4aa33dc (parent: 84439c1)

Update README.md

Files changed (1): README.md (+18 -7)
README.md CHANGED
@@ -86,7 +86,11 @@ To run Stable Diffusion on your own computer you'll need at least 16 GB of RAM a
 
 # Getting Started <a name="start"></a>[▲](#index)
 
-Before or after generating your first few images, you will want to take a look at the information below to improve your experience and results.
+Before or after generating your first few images, you will want to take a look at the information below to improve your experience and results.
+The top of your page should look something like this:
+<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/top.png"/>
+Here you can select your model and VAE. We will go over what these are and how you can get more of them.
+
 
 1. **Models** <a name="model"></a>[▲](#index)
 
@@ -100,14 +104,14 @@ Before or after generating your first few images, you will want to take a look a
 *Launcher:* It will let you choose the path to your models folder. Otherwise the models normally go into `stable-diffusion-webui/models/Stable-diffusion`.
 
 *Colab:* Copy the **direct download link to the file** and paste it into the text box labeled `custom_urls`. Multiple links are separated by commas.
-
+
 Please note that checkpoints in the format `.safetensors` are safe to use, while `.ckpt` **may** contain viruses; see the sketch below. Be careful.
 
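Why that is: a `.ckpt` file is a pickled Python object, and unpickling untrusted data can execute code hidden inside the file, while a `.safetensors` file contains only raw tensors. A minimal sketch of the difference (hypothetical filenames; assumes the `torch` and `safetensors` Python packages are installed):

```python
# Hypothetical filenames; this illustrates the trust model, not WebUI internals.
import torch
from safetensors.torch import load_file

# A .ckpt is loaded through pickle, and unpickling can execute arbitrary
# code embedded in the file. This is why the virus warning exists.
ckpt_state = torch.load("model.ckpt", map_location="cpu")

# A .safetensors file stores raw tensors and metadata only, with no
# executable content, so loading it cannot run anything.
safe_state = load_file("model.safetensors", device="cpu")
```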
 1. **VAEs** <a name="vae"></a>[▲](#index)
 
 Most models don't come with a VAE built in. The VAE is a small separate model, which "converts your image from AI format into human format". Without it, you'll get faded colors and ugly eyes, among other things.
 
-If you're using the collab, you should already have the below VAEs, which you can select at the top of the page, next to your models.
+If you're using the Colab, you should already have the VAEs below, as I told you to select them before running.
 
 There are practically only 3 different VAEs out there worth talking about:
 * [anything vae](https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt), also known as the orangemix vae. All anime models use this.
@@ -133,6 +137,8 @@ Before or after generating your first few images, you will want to take a look a
 * **EasyNegative:** The negative prompts above use EasyNegative, which is a *textual inversion embedding* or "magic word" that codifies many bad things to make your images better. Typically one would write a very long, very specific, very redundant, and sometimes silly negative prompt. As of March 2023, EasyNegative is the best choice if you want to avoid that.
 * [Get EasyNegative here](https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/EasyNegative.pt). For the Colab, paste the link into the `custom_urls` text box. For Windows, put it in your `stable-diffusion-webui/embeddings` folder. Then, go to the bottom of your WebUI page and click *Reload UI*. It will now work when you type the word.
 
+<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/prompt.png"/>
+
 After a "base prompt" like the above, you may then start typing what you want. For example `young woman in a bikini at the beach, full body shot`. Feel free to add other terms you don't like to your negatives, such as `old, ugly, futanari, furry`, etc.
 You can also save your prompts to reuse later with the buttons below Generate. Click the small 💾 *Save style* button and give it a name. Later, you can open your *Styles* dropdown to choose, then click 📋 *Apply selected styles to the current prompt*.
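Putting the pieces above together, a full prompt box might look like this (illustrative; `EasyNegative` only works once the embedding is installed as described):

```
young woman in a bikini at the beach, full body shot
Negative prompt: EasyNegative, old, ugly
```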
 
@@ -140,7 +146,8 @@ Before or after generating your first few images, you will want to take a look a
 
 1. **Generation parameters** <a name="gen"></a>[▲](#index)
 
-At the top of the page you'll be able to choose your checkpoint and VAE, and we've already covered the prompt. Here are the rest of the options:
+The rest of the parameters on the starting page will look something like this:
+<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/parameters.png"/>
 
 * **Sampling method:** These dictate how your image is formulated, and each produces different results. The default of `Euler a` is almost always the best. `DPM++ 2M Karras` and `DPM++ SDE Karras` also give very good results.
 * **Sampling steps:** These are "calculated" beforehand, so more steps don't always mean more detail. I always go with 30; you may go from 20 to 50 and find good results.
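If you prefer setting these parameters programmatically, here is a minimal sketch using the WebUI's optional HTTP API, available when you launch with the `--api` flag (prompt text is illustrative):

```python
# Minimal txt2img request against a locally running WebUI with --api enabled.
import base64
import requests

payload = {
    "prompt": "young woman in a bikini at the beach, full body shot",
    "negative_prompt": "EasyNegative",
    "sampler_name": "DPM++ 2M Karras",  # or the default "Euler a"
    "steps": 30,                        # 20-50 is a reasonable range
}
response = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
response.raise_for_status()

# The API returns generated images as base64-encoded strings.
with open("result.png", "wb") as f:
    f.write(base64.b64decode(response.json()["images"][0]))
```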
@@ -163,9 +170,10 @@ Before or after generating your first few images, you will want to take a look a
 
 # Extensions <a name="extensions"></a>[▲](#index)
 
-*Stable Diffusion WebUI* supports extensions to add additional functionality and quality of life. These can be added by going into the **Extensions** tab, then **Install from URL**, and pasting the links found here or elsewhere. Then, click *Install* and wait for it to finish. Then, go to **Installed** and click *Apply and restart UI*.
+*Stable Diffusion WebUI* supports extensions that add functionality and quality of life. These can be added by going into the **Extensions** tab, then **Install from URL**, and pasting the links found here or elsewhere. Click *Install* and wait for it to finish, then go to **Installed** and click *Apply and restart UI*.
+<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/extensions.png"/>
 
-Here are some useful extensions, most of these come installed in the collab, and I hugely recommend the first 2 if you're running locally:
+Here are some useful extensions. Most of these come pre-installed in the Colab, and I hugely recommend you manually add the first 2 if you're running locally:
 * [Image Browser (fixed fork)](https://github.com/aka7774/sd_images_browser) - This will let you browse your past generated images very efficiently, as well as directly send their prompts and parameters back to txt2img, img2img, etc.
 * [TagComplete](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete) - Absolutely essential for anime art. It will show you the matching booru tags as you type. Anime models work via booru tags, and rarely work at all if you go outside them, so knowing them is godmode. Not all tags will work well in all models though, especially if they're rare.
 * [ControlNet](https://github.com/Mikubill/sd-webui-controlnet) - A huge extension deserving of its own guide (coming soon). It lets you take AI data from any image and use it as an input for your image. Practically speaking, it can create any pose or environment you want. Very powerful if used with external tools such as Blender.
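Under the hood, **Install from URL** simply clones each repository into the WebUI's `extensions` folder, so after installing the three above the layout would look roughly like this (folder names follow the repository names):

```
stable-diffusion-webui/
└── extensions/
    ├── sd_images_browser/
    ├── a1111-sd-webui-tagcomplete/
    └── sd-webui-controlnet/
```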
@@ -184,6 +192,8 @@ Loras can represent a character, an artstyle, poses, clothes, or even a human fa
 
 Place your Lora files in the `stable-diffusion-webui/models/Lora` folder, or paste the direct download link into the `custom_urls` text box in the Colab. Then, look for the 🎴 *Show extra networks* button below the big orange Generate button. It will open a new section. Click on the Lora tab and press the **Refresh** button, and your Loras should appear. When you click a Lora in that menu it will get added to your prompt, looking like this: `<lora:filename:1>`. The start is always the same. The filename will be the exact filename in your system without the `.safetensors` extension. Finally, the number is the weight, as we saw in [Prompts ▲](#prompt). Most Loras work between 0.5 and 1 weight, and values that are too high might "fry" your image, especially if using multiple Loras at the same time.
 
+<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/extranetworks.png"/>
+
 An example of a Lora is [Thicker Lines Anime Style](https://civitai.com/models/13910/thicker-lines-anime-style-lora-mix), which is perfect if you want your images to look more like traditional anime.
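For instance, a prompt using that syntax together with the example Lora above might look like this (the filename is hypothetical; use the exact filename on your system):

```
young woman in a bikini at the beach, full body shot, <lora:ThickerLinesAnimeStyle:0.7>
Negative prompt: EasyNegative
```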
 
 &nbsp;
@@ -206,7 +216,8 @@ Coming soon: How to use ultimate upscaler.
 
 # Lora Training <a name="train"></a>[▲](#index)
 
-* **Tips for training character Loras** <a name="trainchars"></a>[▲](#index)
+* **Tips for training character Loras** <a name="trainchars"></a>[▲](#index)
+
 
 
 