DeepCoreB4 committed
Commit 2041ed7 • 1 Parent(s): 6c9c58a

Update README.md

Files changed (1): README.md (+165 -2)
README.md CHANGED

```diff
@@ -1,5 +1,5 @@
 ---
-title: Test
+title: stable-diffusion-webui-master
 emoji: 🔥
 colorFrom: purple
 colorTo: purple
@@ -10,4 +10,167 @@ pinned: false
 license: mit
 ---

-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
```

The rest of the change adds the new README content below.

# Stable Diffusion web UI
A browser interface for Stable Diffusion, based on the Gradio library.

![](screenshot.png)

## Features
[Detailed feature showcase with images](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features):
- Original txt2img and img2img modes
- One click install and run script (but you still must install Python and git)
- Outpainting
- Inpainting
- Color Sketch
- Prompt Matrix
- Stable Diffusion Upscale
- Attention: specify parts of the text that the model should pay more attention to (a sketch of the weighting arithmetic follows this list)
  - a man in a `((tuxedo))` - will pay more attention to tuxedo
  - a man in a `(tuxedo:1.21)` - alternative syntax
  - select text and press Ctrl+Up or Ctrl+Down to automatically adjust attention to the selected text (code contributed by an anonymous user)
- Loopback, run img2img processing multiple times
- X/Y/Z plot, a way to draw a 3-dimensional plot of images with different parameters
- Textual Inversion
  - have as many embeddings as you want and use any names you like for them
  - use multiple embeddings with different numbers of vectors per token
  - works with half precision floating point numbers
  - train embeddings on 8 GB (also reports of 6 GB working)
- Extras tab with:
  - GFPGAN, neural network that fixes faces
  - CodeFormer, face restoration tool as an alternative to GFPGAN
  - RealESRGAN, neural network upscaler
  - ESRGAN, neural network upscaler with a lot of third-party models
  - SwinIR and Swin2SR ([see here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2092)), neural network upscalers
  - LDSR, Latent diffusion super resolution upscaling
- Resizing aspect ratio options
- Sampling method selection
  - Adjust sampler eta values (noise multiplier)
  - More advanced noise setting options
- Interrupt processing at any time
- 4 GB video card support (also reports of 2 GB working)
- Correct seeds for batches
- Live prompt token length validation
- Generation parameters
  - parameters you used to generate images are saved with that image
  - in PNG chunks for PNG, in EXIF for JPEG (a sketch of reading these back follows this list)
  - can drag the image to the PNG info tab to restore generation parameters and automatically copy them into the UI
  - can be disabled in settings
  - drag and drop an image/text parameters to the prompt box
- Read Generation Parameters button, loads the parameters in the prompt box into the UI
- Settings page
- Running arbitrary Python code from the UI (must run with `--allow-code` to enable)
- Mouseover hints for most UI elements
- Possible to change defaults/min/max/step values for UI elements via text config
- Tiling support, a checkbox to create images that can be tiled like textures
- Progress bar and live image generation preview
  - can use a separate neural network to produce previews with almost no VRAM or compute requirement
- Negative prompt, an extra text field that allows you to list what you don't want to see in the generated image
- Styles, a way to save part of a prompt and easily apply it via a dropdown later
- Variations, a way to generate the same image but with tiny differences
- Seed resizing, a way to generate the same image but at a slightly different resolution
- CLIP interrogator, a button that tries to guess the prompt from an image
- Prompt Editing, a way to change the prompt mid-generation, say to start making a watermelon and switch to anime girl midway
- Batch Processing, process a group of files using img2img
- Img2img Alternative, reverse Euler method of cross attention control
- Highres Fix, a convenience option to produce high resolution pictures in one click without the usual distortions
- Reloading checkpoints on the fly
- Checkpoint Merger, a tab that allows you to merge up to 3 checkpoints into one
- [Custom scripts](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Custom-Scripts) with many extensions from the community
- [Composable-Diffusion](https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/), a way to use multiple prompts at once
  - separate prompts using uppercase `AND`
  - also supports weights for prompts: `a cat :1.2 AND a dog AND a penguin :2.2`
- No token limit for prompts (original Stable Diffusion lets you use up to 75 tokens)
- DeepDanbooru integration, creates Danbooru-style tags for anime prompts
- [xformers](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers), major speed increase for select cards (add `--xformers` to commandline args)
- via extension: [History tab](https://github.com/yfszzx/stable-diffusion-webui-images-browser): view, direct and delete images conveniently within the UI
- Generate forever option
- Training tab
  - hypernetworks and embeddings options
  - Preprocessing images: cropping, mirroring, autotagging using BLIP or deepdanbooru (for anime)
- Clip skip
- Hypernetworks
- Loras (same as Hypernetworks but more pretty)
- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
- Can select to load a different VAE from the settings screen
- Estimated completion time in the progress bar
- API (a sketch of a request follows this list)
- Support for the dedicated [inpainting model](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion) by RunwayML
- via extension: [Aesthetic Gradients](https://github.com/AUTOMATIC1111/stable-diffusion-webui-aesthetic-gradients), a way to generate images with a specific aesthetic by using CLIP image embeds (implementation of [https://github.com/vicgalle/stable-diffusion-aesthetic-gradients](https://github.com/vicgalle/stable-diffusion-aesthetic-gradients))
- [Stable Diffusion 2.0](https://github.com/Stability-AI/stablediffusion) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#stable-diffusion-20) for instructions
- [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
- Now without any bad letters!
- Load checkpoints in safetensors format
- Eased resolution restriction: the generated image's dimensions must be a multiple of 8 rather than 64
- Now with a license!
- Reorder elements in the UI from the settings screen
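
The attention syntax above follows a simple rule: each `(...)` nesting level multiplies attention by 1.1, and `(text:weight)` sets the multiplier explicitly. A minimal sketch of that arithmetic, assuming the documented 1.1 base factor; the helper is hypothetical, not the web UI's actual prompt parser:

```python
# Minimal sketch of the emphasis arithmetic, assuming the documented
# rules: each "(...)" level multiplies attention by 1.1, and
# "(text:weight)" sets the multiplier explicitly. Illustration only,
# not the web UI's actual parser.

def emphasis_weight(fragment: str) -> float:
    """Return the attention multiplier encoded by a parenthesized fragment."""
    weight = 1.0
    while fragment.startswith("(") and fragment.endswith(")"):
        fragment = fragment[1:-1]
        if ":" in fragment and not fragment.endswith(")"):
            # Explicit form such as "tuxedo:1.21" overrides the nesting rule.
            return float(fragment.rsplit(":", 1)[1])
        weight *= 1.1
    return weight

print(emphasis_weight("(tuxedo)"))       # 1.1
print(emphasis_weight("((tuxedo))"))     # 1.1 * 1.1 ~= 1.21
print(emphasis_weight("(tuxedo:1.21)"))  # 1.21 (explicit syntax)
```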
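Because the generation parameters live in the image itself, they can also be recovered outside the UI. A minimal sketch using Pillow, assuming the PNG text chunk is named `parameters` (the key the web UI writes) and a hypothetical output filename:

```python
# Minimal sketch: recover the generation parameters the web UI saves
# inside a PNG. Assumes the text chunk key is "parameters"; requires
# Pillow (pip install Pillow).
from PIL import Image

def read_parameters(path: str):
    """Return the embedded generation-parameters text, or None."""
    with Image.open(path) as im:
        # Pillow exposes PNG text chunks as a dict on the .text attribute.
        return getattr(im, "text", {}).get("parameters")

print(read_parameters("00000-1234567890.png"))  # hypothetical output file
```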
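
The API entry above refers to the optional HTTP API, enabled by launching the web UI with `--api`. A hedged sketch of a txt2img request against a local instance; the `/sdapi/v1/txt2img` route and payload shape are assumptions to verify against the `/docs` page of your own instance:

```python
# Hedged sketch of a txt2img call against the optional HTTP API
# (launch the web UI with --api first). Route and payload follow the
# commonly documented /sdapi/v1/txt2img endpoint; check /docs on your
# own instance. Requires the requests package.
import base64
import requests

def txt2img(prompt: str, steps: int = 20) -> bytes:
    """Ask a local web UI instance for one image; return raw PNG bytes."""
    resp = requests.post(
        "http://127.0.0.1:7860/sdapi/v1/txt2img",  # default local address
        json={"prompt": prompt, "steps": steps},
        timeout=300,
    )
    resp.raise_for_status()
    # The response carries images as base64-encoded strings.
    return base64.b64decode(resp.json()["images"][0])

with open("out.png", "wb") as f:
    f.write(txt2img("a watercolor fox, highly detailed"))
```
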
## Installation and Running
Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.

Alternatively, use online services (like Google Colab):

- [List of Online Services](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Online-Services)

### Automatic Installation on Windows
1. Install [Python 3.10.6](https://www.python.org/downloads/windows/), checking "Add Python to PATH".
2. Install [git](https://git-scm.com/download/win).
3. Download the stable-diffusion-webui repository, for example by running `git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git`.
4. Run `webui-user.bat` from Windows Explorer as a normal, non-administrator user.

### Automatic Installation on Linux
1. Install the dependencies:
```bash
# Debian-based:
sudo apt install wget git python3 python3-venv
# Red Hat-based:
sudo dnf install wget git python3
# Arch-based:
sudo pacman -S wget git python3
```
2. To install in `/home/$(whoami)/stable-diffusion-webui/`, run:
```bash
bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
```
3. Run `webui.sh`.

### Installation on Apple Silicon

Find the instructions [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Installation-on-Apple-Silicon).

## Contributing
Here's how to add code to this repo: [Contributing](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Contributing)

## Documentation
The documentation was moved from this README over to the project's [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki).

## Credits
Licenses for borrowed code can be found on the `Settings -> Licenses` screen, and also in the `html/licenses.html` file.

- Stable Diffusion - https://github.com/CompVis/stable-diffusion, https://github.com/CompVis/taming-transformers
- k-diffusion - https://github.com/crowsonkb/k-diffusion.git
- GFPGAN - https://github.com/TencentARC/GFPGAN.git
- CodeFormer - https://github.com/sczhou/CodeFormer
- ESRGAN - https://github.com/xinntao/ESRGAN
- SwinIR - https://github.com/JingyunLiang/SwinIR
- Swin2SR - https://github.com/mv-lab/swin2sr
- LDSR - https://github.com/Hafiidz/latent-diffusion
- MiDaS - https://github.com/isl-org/MiDaS
- Ideas for optimizations - https://github.com/basujindal/stable-diffusion
- Cross Attention layer optimization - Doggettx - https://github.com/Doggettx/stable-diffusion, original idea for prompt editing
- Cross Attention layer optimization - InvokeAI, lstein - https://github.com/invoke-ai/InvokeAI (originally http://github.com/lstein/stable-diffusion)
- Sub-quadratic Cross Attention layer optimization - Alex Birch (https://github.com/Birch-san/diffusers/pull/1), Amin Rezaei (https://github.com/AminRezaei0x443/memory-efficient-attention)
- Textual Inversion - Rinon Gal - https://github.com/rinongal/textual_inversion (we're not using his code, but we are using his ideas)
- Idea for SD upscale - https://github.com/jquesnelle/txt2imghd
- Noise generation for outpainting mk2 - https://github.com/parlance-zz/g-diffuser-bot
- CLIP interrogator idea and borrowing some code - https://github.com/pharmapsychotic/clip-interrogator
- Idea for Composable Diffusion - https://github.com/energy-based-model/Compositional-Visual-Generation-with-Composable-Diffusion-Models-PyTorch
- xformers - https://github.com/facebookresearch/xformers
- DeepDanbooru - interrogator for anime diffusers https://github.com/KichangKim/DeepDanbooru
- Sampling in float32 precision from a float16 UNet - marunine for the idea, Birch-san for the example Diffusers implementation (https://github.com/Birch-san/diffusers-play/tree/92feee6)
- Instruct pix2pix - Tim Brooks (star), Aleksander Holynski (star), Alexei A. Efros (no star) - https://github.com/timothybrooks/instruct-pix2pix
- Security advice - RyotaK
- UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
- Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
- (You)