hollowstrawberry committed · Commit 2f94760 · 1 Parent(s): b45080e

Update README.md

README.md CHANGED
@@ -13,6 +13,14 @@ tags:
* [Introduction](#intro)
* [Installation](#inst)
* [Getting Started](#start)
    1. [Edit your starting parameters](#params)
    1. [Getting a model](#model)
    1. [Getting a VAE](#vae)
    1. [Launching and settings](#launch)
    1. [Prompts](#prompts)
    1. [Adding extensions](#extensions)

# Introduction <a name="intro"></a>

@@ -30,14 +38,14 @@ The images you create may be used for any purpose, depending on the used model's

Before generating some images, here are some useful steps you can follow to improve your experience.

1. **Edit your starting parameters:** <a name="params"></a> If you're using the Colab, skip this step. If you're using the launcher, turn on **medvram** and **xformers**. Then, set your *Additional Launch Options* to: `--opt-channelslast --no-half-vae`. All of these should offer small but noticeable improvements to performance.
    * If your graphics card has more than 8 GB of VRAM, you may turn off medvram to make generation faster. However, medvram still lets you generate larger images, and more images at the same time.
    * If your graphics card has 4 or 6 GB of VRAM, add `--opt-split-attention-v1`, as it may lower VRAM usage even further.
    * If you want to run the program on your computer but use it from another device, such as your phone, add `--listen`. Then, use your computer's local IP on the same Wi-Fi network to access the interface.
    * If you're using the original stable-diffusion-webui, you can add these parameters by editing your webui-user.bat, right next to `set COMMANDLINE_ARGS=` (see the sketch after this list).
    * The full list of possible parameters is [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Command-Line-Arguments-and-Settings).
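
    For local installs of the original stable-diffusion-webui, here is a minimal sketch of what webui-user.bat might look like with the options mentioned above. The surrounding lines follow the stock file; treat the exact flag combination as an example to adapt to your hardware, not a universal recommendation:

    ```bat
    @echo off

    set PYTHON=
    set GIT=
    set VENV_DIR=

    rem Flags from this guide: medvram + xformers, plus the extra launch options above.
    rem Drop --medvram if you have more than 8 GB of VRAM; add --opt-split-attention-v1 for 4 or 6 GB cards.
    rem Add --listen to reach the interface from another device on the same Wi-Fi network.
    set COMMANDLINE_ARGS=--medvram --xformers --opt-channelslast --no-half-vae

    call webui.bat
    ```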
1. **Getting a model:** <a name="model"></a> There are many options, most of which are on [civitai](https://civitai.com). But which to choose? These are my recommendations:
    * For anime, [7th Heaven Mix](https://civitai.com/models/4669/corneos-7th-heaven-mix) has a nice aesthetic similar to anime movies, while [Abyss Orange Mix 3](https://civitai.com/models/9942/abyssorangemix3-aom3) *(__Note:__ scroll down and choose the AOM3 option)* offers more realism, advanced lighting, softer shading, and more lewdness. I remixed the two options above into [Heaven Orange Mix](https://civitai.com/models/14305/heavenorangemix).
    * For creative art, go with [DreamShaper](https://civitai.com/models/4384/dreamshaper); there are few options quite like it. An honorable mention goes to [Pastel Mix](https://civitai.com/models/5414/pastel-mix-stylized-anime-model), which has a beautiful and unique aesthetic with an anime touch.
    * For photorealism, go with [Deliberate](https://civitai.com/models/4823/deliberate); it can do almost anything, especially photographs.
@@ -45,10 +53,9 @@ Before generating some images, here are some useful steps you can follow to impr

    *Launcher:* It will let you choose the path to your models folder. Otherwise, the models normally go into `stable-diffusion-webui/models/Stable-diffusion`.

    *Colab:* Copy the **direct download link to the file** and put it in `MODEL_LINK:`. Turn on `safetensors`, and `Use_temp_storage` if you don't want to save it to your Google Drive. After the first time you use the Colab, you may place more models manually into your Google Drive folder at `MyDrive/sd/stable-diffusion-webui/models/Stable-diffusion`.
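
    If you're on a local install and prefer the command line, downloading a model into that folder might look roughly like this. The URL and filename are placeholders for whatever model you picked, and curl is assumed to be available (it ships with recent Windows and most Linux distributions):

    ```
    cd stable-diffusion-webui/models/Stable-diffusion
    curl -L -o my-chosen-model.safetensors "<direct download link to the .safetensors file>"
    ```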
1. **Getting a VAE:** <a name="vae"></a> Most models don't come with a VAE built in. The VAE is a small separate model, which "converts your image from AI format into human format". Without it, you'll get faded colors and ugly eyes, among other things.

    There are practically only 3 different VAEs out there worth talking about:
    * [anime vae](https://huggingface.co/WarriorMama777/OrangeMixs/resolve/main/VAEs/orangemix.vae.pt), also known as the AnythingV3 vae, also known as the orangemix vae. All anime models use this.
@@ -59,4 +66,33 @@ Before generating some images, here are some useful steps you can follow to impr

    *Colab:* You will have to place it in your Google Drive, in `MyDrive/sd/stable-diffusion-webui/models/VAE`.
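
    To recap where the files mentioned so far live on a local install (the Colab uses the same layout under `MyDrive/sd/`), here is a rough sketch of the relevant folders. The local `models/VAE` location is inferred from the Colab path above, and `embeddings` is where the EasyNegative file from the Prompts step below goes:

    ```
    stable-diffusion-webui/
    ├── webui-user.bat          launch options (local installs)
    ├── embeddings/             textual inversion embeddings such as EasyNegative
    └── models/
        ├── Stable-diffusion/   your model files (.safetensors)
        └── VAE/                your VAE files
    ```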
1. **Launching and settings:** <a name="launch"></a> It is finally time to launch the WebUI.

    *Launcher:* Press the button on the launcher and wait patiently for it to start. It will then open the interface in your browser. It's like a website, but running on your computer.

    *Colab:* Press the play buttons, **in order, one at a time**. Wait for each one to finish before pressing the next one. When the final step is finished, it will produce a link you can use to access the interface as a website, which stays available as long as the page remains open. You may also want to give it a password before starting.

    The starting page is where you can make your images. But first, we'll go to the Settings tab. There will be sections on the left.
    * In the *Stable Diffusion* section, scroll down and increase **Clip Skip** from 1 to 2. This is said to produce better images, especially for anime. You can also set your VAE from here, but I have a better idea:
    * In the *User Interface* section, scroll down to **Quicksettings list** and change it to `sd_model_checkpoint, sd_vae`.
    * Scroll back up, click the big orange **Apply settings** button, then **Reload UI** next to it. You can now change your model as well as your VAE from the top of the page at any time.
1. **Prompts:** <a name="prompts"></a>

    On the first tab, **txt2img**, you'll be making most of your images. This is where you'll find your *prompt* and *negative prompt*.

    Stable Diffusion is not like Midjourney or other popular image generation software: you can't just ask it what you want and get a good image. You have to be specific. *Very* specific.

    Here is an example of a prompt and negative prompt for each style:

    * Anime
        * `2d, masterpiece, best quality, anime, highly detailed face, highly detailed eyes, highly detailed background, perfect lighting`
        * `EasyNegative, worst quality, low quality, 3d, realistic, photorealistic, (loli, child, teen, baby face), zombie, animal, multiple views, text, watermark, signature, artist name, artist logo, censored`

    * Photorealism
        * `best quality, 4k, 8k, ultra highres, (realistic, photorealistic, RAW photo:1.4), (hdr, sharp focus:1.2), intricate texture, skin imperfections`
        * `EasyNegative, worst quality, low quality, normal quality, child, painting, drawing, sketch, cartoon, anime, render, 3d, blurry, deformed, disfigured, morbid, mutated, bad anatomy, bad art`
    * **EasyNegative:** The negative prompts above use EasyNegative, which is a *textual inversion embedding* or "magic word" that codifies many bad things to make your images better. Typically one would write a very long, very specific, very redundant, and sometimes silly negative prompt; as of March 2023, EasyNegative is the best choice if you want to avoid that.
        * [Get EasyNegative here](https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/EasyNegative.safetensors) and put it in your `stable-diffusion-webui/embeddings` folder (see the command sketch below). Then, go to the bottom of your WebUI page and click *Reload UI*. It will now work when you type the word.
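
    On a local install, you can also fetch EasyNegative from the command line into the embeddings folder. A rough sketch, assuming curl is available:

    ```
    cd stable-diffusion-webui/embeddings
    curl -L -O https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/EasyNegative.safetensors
    ```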
After a "base prompt" like the above, you may then start typing what you want. For example `young woman in a bikini in the beach, full body shot`. Feel free to add other terms you don't like to your negatives such as `old, ugly, futanari, furry`, etc.
|
96 |
+
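
    A note on the parentheses you may have spotted in the examples above: the WebUI treats them as emphasis syntax that raises or lowers the attention given to a term. To the best of my understanding it behaves roughly as follows (`blue eyes` is just a placeholder term):

    ```
    (blue eyes)         slightly more attention (about 1.1x)
    ((blue eyes))       stacks, for even more attention
    [blue eyes]         slightly less attention
    (blue eyes:1.4)     exactly 1.4x attention, as in (realistic, photorealistic, RAW photo:1.4)
    (blue eyes:0.7)     reduced attention
    ```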
    You can also save your prompts to reuse later with the buttons below Generate. Click **Save style** and give it a name. Later, you can open your *Styles* dropdown to choose one, then click *Apply selected styles to the current prompt*.

1. **Adding extensions:** <a name="extensions"></a>