* [Google Colab](#collab)
* [Local Installation (Windows + Nvidia)](#install)
* [Getting Started](#start)
   1. [Models](#model)
   1. [VAEs](#vae)
   1. [Prompts](#prompt)
   1. [Generation parameters](#gen)
* [Extensions](#extensions)
* [Loras](#lora)
* [Upscalers](#upscale)

1. Get the latest release from [this page](https://github.com/EmpireMediaScience/A1111-Web-UI-Installer/releases).

1. Run the installer, choose a simple location to install to, and wait for it to finish.

1. Run the program. You will see a few options. First, turn on **medvram** and **xformers**. You may skip medvram if you have 12 or more GB of VRAM.

1. Set your *Additional Launch Options* to: `--opt-channelslast --no-half-vae`. Any extra options should be separated by spaces, as in the example after this list.
   * If your graphics card has less than 8 GB of VRAM, add `--opt-split-attention-v1`, as it may lower VRAM usage even further.
   * If you want to run the program on your computer but use it from another device, such as your phone, add `--listen`. After launching, use your computer's local IP on the same WiFi network to access the interface.
   * The full list of possible parameters is [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings).
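
   For instance, if your card has less than 8 GB of VRAM and you also want to reach the interface from your phone, the complete field could look like this (just an illustrative combination of the flags above, not a requirement):

   ```
   --opt-channelslast --no-half-vae --opt-split-attention-v1 --listen
   ```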

1. Click **Launch** and wait for a browser window to open with the interface. It may take a while the first time.

1. The page is now open. It's your own private website. The starting page is where you can make your images. But first, we'll go to the **Settings** tab. There will be sections of settings on the left.
   * In the *Stable Diffusion* section, scroll down and increase **Clip Skip** from 1 to 2. This is said to produce better images, especially for anime.
   * In the *User Interface* section, scroll down to **Quicksettings list** and change it to `sd_model_checkpoint, sd_vae`.
   * Scroll back up, click the big orange **Apply settings** button, then **Reload UI** next to it.

1. You are more than ready to generate some images, but you only have the basic model available. It's not great; at most it can make some paintings. Also, what are all of these options? See [below ▼](#start) to get started.

1. **Models** <a name="model"></a>[▲](#index)

   The **model**, also called **checkpoint**, is the brain of your AI, designed for the purpose of producing certain types of images. There are many options, most of which are on [civitai](https://civitai.com). But which to choose? These are my recommendations:
   * For anime, [7th Heaven Mix](https://civitai.com/models/4669/corneos-7th-heaven-mix) has a nice aesthetic similar to anime movies, while [Abyss Orange Mix 3](https://civitai.com/models/9942/abyssorangemix3-aom3) *(__Note:__ scroll down and choose the AOM3 option)* offers more realism in the form of advanced lighting and softer shading, as well as more lewdness. I remixed the two options above into [Heaven Orange Mix](https://civitai.com/models/14305/heavenorangemix).
   * While AOM3 is extremely capable for NSFW, the popular [Grapefruit](https://civitai.com/models/2583/grapefruit-hentai-model) hentai model may also fit your needs.
   * For general art, go with [DreamShaper](https://civitai.com/models/4384/dreamshaper); there are few options quite like it in terms of raw creativity. An honorable mention goes to [Pastel Mix](https://civitai.com/models/5414/pastel-mix-stylized-anime-model), which has a beautiful and unique aesthetic with the addition of anime.
   * For photorealism, go with [Deliberate](https://civitai.com/models/4823/deliberate). It can do almost anything, especially photographs. Very intricate results.
   * The [Uber Realistic Porn Merge](https://civitai.com/models/2661/uber-realistic-porn-merge-urpm) is self-explanatory.

   *Launcher:* It will let you choose the path to your models folder. Otherwise, the models normally go into `stable-diffusion-webui/models/Stable-diffusion`.

   *Colab:* Copy the **direct download link to the file** and paste it in the text box labeled `custom_urls`. Multiple links are separated by commas.

   Please note that checkpoints in the `.safetensors` format are safe to use, while `.ckpt` checkpoints **may** contain viruses, so be careful. Additionally, when choosing models you may have a choice between fp32, fp16, and pruned. They all produce the same images within a tiny margin of error, so just go with the smallest file (fp16-pruned). If you want to use them for training or merging, go with the biggest one instead.

1. **VAEs** <a name="vae"></a>[▲](#index)

   Most models don't come with a VAE built in. The VAE is a small separate model, which "converts your image from AI format into human format". Without it, you'll get faded colors and ugly eyes, among other things.

   If you're using the Colab in this guide, you should already have the VAEs below, as I told you to select them before running.

   There are practically only 3 different VAEs out there worth talking about:
   * [anything vae](https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt), also known as the orangemix vae. All anime models use this.
   * [vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors), the latest from Stable Diffusion itself. Used by photorealism models and such.
   * [kl-f8-anime2](https://huggingface.co/hakurei/waifu-diffusion-v1-4/resolve/main/vae/kl-f8-anime2.ckpt), also known as the Waifu Diffusion VAE. It is older and produces more saturated results. Used by Pastel Mix.

   *Launcher:* It lets you choose the default VAE; otherwise, put them in the `stable-diffusion-webui/models/VAE` folder.
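
   To recap the two folder locations mentioned in these sections, a manual install's layout looks roughly like this (a sketch; only the relevant folders are shown):

   ```
   stable-diffusion-webui/
   └── models/
       ├── Stable-diffusion/   <- model checkpoints go here
       └── VAE/                <- VAE files go here
   ```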

1. **Prompts** <a name="prompt"></a>[▲](#index)

   On the first tab, **txt2img**, you'll be making most of your images. This is where you'll find your *prompt* and *negative prompt*. Stable Diffusion is not like Midjourney or other popular image generation software; you can't just ask it what you want and get a good image. You have to be specific. *Very* specific. I will show you an example of a prompt and negative prompt:

   * Anime
      * `2d, masterpiece, best quality, anime, highly detailed face, highly detailed eyes, highly detailed background, perfect lighting`
      * `EasyNegative, worst quality, low quality, 3d, realistic, photorealistic, (loli, child, teen, baby face), zombie, animal, multiple views, text, watermark, signature, artist name, artist logo, censored`

   * Photorealism
      * `best quality, 4k, 8k, ultra highres, (realistic, photorealistic, RAW photo:1.4), (hdr, sharp focus:1.2), intricate texture, skin imperfections`
      * `EasyNegative, worst quality, low quality, normal quality, child, painting, drawing, sketch, cartoon, anime, render, 3d, blurry, deformed, disfigured, morbid, mutated, bad anatomy, bad art`

   * **EasyNegative:** The negative prompts above use EasyNegative, which is a *textual inversion embedding* or "magic word" that codifies many bad things to make your images better. Typically one would write a very long, very specific, very redundant, and sometimes silly negative prompt; EasyNegative is, as of March 2023, the best choice if you want to avoid that.
      * [Get EasyNegative here](https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/EasyNegative.pt). For Colab, paste the link into the `custom_urls` text box. For Windows, put it in your `stable-diffusion-webui/embeddings` folder. Then go to the bottom of your WebUI page and click *Reload UI*. It will now work when you type the word.

   <img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/prompt.png"/>

   After a "base prompt" like the above, you may then start typing what you want, for example `young woman in a bikini in the beach, full body shot`. Feel free to add other terms you don't like to your negatives, such as `old, ugly, futanari, furry`, etc. A full combined prompt is sketched below.
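
   For instance, combining the anime base prompt above with that subject gives a complete prompt like this (purely illustrative, using only terms from this section):

   ```
   2d, masterpiece, best quality, anime, highly detailed face, highly detailed eyes, highly detailed background, perfect lighting, young woman in a bikini in the beach, full body shot
   ```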

   You can also save your prompts to reuse later with the buttons below Generate. Click the small 💾 *Save style* button and give it a name. Later, you can open your *Styles* dropdown to choose, then click 📋 *Apply selected styles to the current prompt*.

   Note that when you surround something in `(parentheses)`, it will have more emphasis or **weight** in your resulting image, equal to `1.1`. The normal weight is 1, and each additional pair of parentheses multiplies it by another 1.1. You can also specify the weight yourself, like this: `(full body:1.4)`. You can go below 1 to de-emphasize a word as well: `[brackets]` will multiply by 0.9, but you must still use normal parentheses to go lower, like `(this:0.5)`. A worked example follows below.
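
   Here is how those multipliers stack (these weights follow directly from the 1.1 and 0.9 rules above):

   ```
   (word)      → weight 1.1
   ((word))    → weight 1.1 × 1.1 = 1.21
   (word:1.4)  → weight 1.4
   [word]      → weight 0.9
   [[word]]    → weight 0.9 × 0.9 = 0.81
   (word:0.5)  → weight 0.5 (brackets alone can't go this low)
   ```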

   Also note that hands and feet are famously difficult for AI to generate. These methods improve your chances, but you may need to do img2img inpainting, photoshopping, or advanced techniques with [ControlNet ▼](#controlnet) to get it right.

1. **Generation parameters** <a name="gen"></a>[▲](#index)

   The rest of the parameters in the starting page will look something like this:

   <img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/parameters.png"/>

   * **Sampling method:** This dictates how your image is formulated, and each method produces different results. The default of `Euler a` is almost always the best. There are also very good results from `DPM++ 2M Karras` and `DPM++ SDE Karras`.
   * **Sampling steps:** These are "calculated" beforehand, so more steps don't always mean more detail. I always go with 30; you may go from 20 to 50 and find good results.
   * **Width and Height:** 512x512 is the default, and you should almost never go above 768 in either direction, as it may distort and deform your image. To produce bigger images, see `Hires. fix` below.
   * **Batch Count and Batch Size:** Batch *size* is how many images your graphics card will create at the same time, limited by its memory. Batch *count* is how many rounds of those to produce; for example, a batch size of 4 with a batch count of 2 produces 8 images in total. Batches have sequential seeds; more on seeds below.
   * **CFG Scale:** "Lower values produce more creative results." You should almost always stick to 7, but 4 to 10 is an acceptable range; it gets strange outside of it.
   * **Seed:** A number that guides the creation of your image. The same seed with the same prompt and parameters produces almost exactly the same image every time.

   **Hires. fix:** Lets you create larger images without distortion. Often used at 2x scale. When selected, more options appear; a sample setup follows the list below.
   * **Upscaler:** The algorithm to upscale with. `Latent` and its variations produce creative results, and you may also like `R-ESRGAN 4x+` and its anime version. I recommend the Remacri upscaler; see [Upscalers ▼](#upscale).
   * **Hires steps:** I recommend at least half as many as your sampling steps. Higher values aren't always better, and they take a long time, so be conservative here.
   * **Denoising strength:** The most important parameter. Near 0.0, no detail will be added to the image. Near 1.0, the image will be changed completely. I recommend something between 0.2 and 0.6 depending on the image, to add enough detail as the image gets larger without *destroying* any original details you like.
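
   For instance, a hypothetical Hires. fix setup following the recommendations above could be (illustrative values, all within the stated ranges):

   ```
   Base resolution:     512x768   (2x scale → 1024x1536)
   Upscaler:            Remacri (or R-ESRGAN 4x+)
   Hires steps:         15   (half of 30 sampling steps)
   Denoising strength:  0.4  (inside the 0.2-0.6 range)
   ```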

   Others:
   * **Restore faces:** May improve realistic faces. I never need it with the models and prompts listed in this guide, as well as hires fix.
   * **Tiling:** Used to produce repeating textures to put on a grid. Not very useful.
   * **Script:** Lets you access useful features and extensions, such as `X/Y/Z Plot`, which lets you compare images with varying parameters on a grid. Very powerful; a sample configuration is sketched after this list.
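
   For instance, a hypothetical `X/Y/Z Plot` configuration comparing the step and CFG ranges recommended above might look like this (field names approximate the script's UI):

   ```
   Script: X/Y/Z plot
     X type: Steps        X values: 20, 30, 50
     Y type: CFG Scale    Y values: 4, 7, 10
   ```

   This would produce a single grid of 9 images, one per combination.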

<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/controlnet.png"/>

* **Canny**

   The Canny method extracts the hard edges of the sample image. It is useful for many different types of images, especially where you want to preserve small details and the general look of an image. Observe:

   <details>
   </details>

* **Depth**

   The Depth method extracts the 3D elements of the sample image. It is best suited for complex environments and general composition. Observe:

   <details>
   </details>

* **Openpose**

   The Openpose method extracts the human poses of the sample image. It helps tremendously to get the desired shot and composition of your generated characters. Observe:

   <details>
   <img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/openpose2.png"/>
   </details>

You may notice that there are 2 results for each method. The first is an intermediate step called the *preprocessed image*, which is then used to produce the final image. You can supply the preprocessed image yourself, in which case you should set the preprocessor to *None*. This is extremely powerful with external tools such as Blender.

In the Settings tab there is a ControlNet section where you can enable *multiple controlnets at once*. One particularly good example is depth+openpose, to get a specific character pose in a specific environment, or even a specific pose with specific hand gestures.

I would also recommend the Scribble model, which lets you draw a crude sketch and turn it into a finished piece with the help of your prompt.

There are also alternative **diff** versions of each ControlNet model, which produce slightly different results. You can [try them](https://civitai.com/models/9868/controlnet-pre-trained-difference-models) if you want, but I personally haven't.

# Lora Training <a name="train"></a>[▲](#index)