radames (HF staff) committed on
Commit
e64d439
1 Parent(s): d8457bc
Files changed (1)
  1. README.md +5 -186
README.md CHANGED
@@ -1,193 +1,12 @@
  ---
- title: Real-Time Latent Consistency Model Image-to-Image ControlNet
- emoji: 🖼️🖼️
- colorFrom: gray
- colorTo: indigo
+ title: Real Time Img2img Turbo
+ emoji: 🔥🖼️🖼️
+ colorFrom: blue
+ colorTo: blue
  sdk: docker
  pinned: false
  suggested_hardware: a10g-small
  disable_embedding: true
  ---
 
- # Real-Time Latent Consistency Model
-
- This demo showcases [Latent Consistency Model (LCM)](https://latent-consistency-models.github.io/) using [Diffusers](https://huggingface.co/docs/diffusers/using-diffusers/lcm) with an MJPEG stream server. You can read more about LCM + LoRAs with diffusers [here](https://huggingface.co/blog/lcm_lora).
-
- You need a webcam to run this demo. 🤗
-
- See a collection of live demos [here](https://huggingface.co/collections/latent-consistency/latent-consistency-model-demos-654e90c52adb0688a0acbe6f).
-
- ## Running Locally
-
- You need Python 3.10 and Node > 19, plus CUDA, a Mac with an M1/M2/M3 chip, or an Intel Arc GPU.
-
- ## Install
-
- ```bash
- python -m venv venv
- source venv/bin/activate
- pip3 install -r server/requirements.txt
- cd frontend && npm install && npm run build && cd ..
- python server/main.py --reload --pipeline img2imgSDTurbo
- ```
-
- Don't forget to build the frontend!
-
- ```bash
- cd frontend && npm install && npm run build && cd ..
- ```
-
- # Pipelines
-
- You can build your own pipeline by following the examples [here](pipelines); see the sketch below.
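-
- As an illustration only: the bundled pipelines are Python modules exposing a `Pipeline` class, and a new one could plausibly look like the sketch below. All names and signatures here are assumptions for illustration; check the files in [pipelines](pipelines) for the exact contract.
-
- ```python
- from PIL import Image
- from pydantic import BaseModel, Field
-
-
- class Pipeline:
-     class Info(BaseModel):
-         # Metadata the server can use to list and describe the pipeline
-         name: str = "mypipeline"
-         title: str = "My Pipeline"
-         input_mode: str = "image"
-
-     class InputParams(BaseModel):
-         # Parameters exposed to the frontend UI
-         prompt: str = Field("portrait of a cat", title="Prompt")
-         steps: int = Field(4, ge=1, le=10, title="Steps")
-
-     def __init__(self, args, device, torch_dtype):
-         # Load models once at startup (e.g. a diffusers pipeline)
-         self.device = device
-         self.torch_dtype = torch_dtype
-
-     def predict(self, params: "Pipeline.InputParams") -> Image.Image:
-         # Called once per input frame; must return a PIL image
-         return Image.new("RGB", (512, 512))
- ```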
-
- # LCM
-
- ### Image to Image
-
- ```bash
- python server/main.py --reload --pipeline img2img
- ```
-
- ### Text to Image
-
- ```bash
- python server/main.py --reload --pipeline txt2img
- ```
-
- ### Image to Image ControlNet Canny
-
- ```bash
- python server/main.py --reload --pipeline controlnet
- ```
-
- # LCM + LoRA
-
- Using LCM-LoRA gives the models the superpower of doing inference in as little as 4 steps. [Learn more here](https://huggingface.co/blog/lcm_lora) or read the [technical report](https://huggingface.co/papers/2311.05556).
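-
- Independent of this server, here is a minimal diffusers sketch of the LCM-LoRA recipe (the model ids follow the blog post above; treat the prompt and settings as placeholders):
-
- ```python
- import torch
- from diffusers import DiffusionPipeline, LCMScheduler
-
- pipe = DiffusionPipeline.from_pretrained(
-     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
- ).to("cuda")
- # Swap in the LCM scheduler and attach the LCM-LoRA weights
- pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
- pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
-
- # 4 steps and low guidance are enough with LCM-LoRA
- image = pipe(
-     "portrait of a cat, highly detailed",
-     num_inference_steps=4,
-     guidance_scale=1.0,
- ).images[0]
- image.save("cat.png")
- ```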
-
- ### Image to Image ControlNet Canny LoRA
-
- ```bash
- python server/main.py --reload --pipeline controlnetLoraSD15
- ```
-
- or SDXL; note that SDXL is slower than SD1.5 since inference runs on 1024x1024 images:
-
- ```bash
- python server/main.py --reload --pipeline controlnetLoraSDXL
- ```
-
- ### Text to Image
-
- ```bash
- python server/main.py --reload --pipeline txt2imgLora
- ```
-
- ```bash
- python server/main.py --reload --pipeline txt2imgLoraSDXL
- ```
-
- # Available Pipelines
-
- #### [LCM](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7)
-
- `img2img`
- `txt2img`
- `controlnet`
- `txt2imgLora`
- `controlnetLoraSD15`
-
- #### [SDXL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
-
- `controlnetLoraSDXL`
- `txt2imgLoraSDXL`
-
- #### [SDXL Turbo](https://huggingface.co/stabilityai/sdxl-turbo)
-
- `img2imgSDXLTurbo`
- `controlnetSDXLTurbo`
-
- #### [SD Turbo](https://huggingface.co/stabilityai/sd-turbo)
-
- `img2imgSDTurbo`
- `controlnetSDTurbo`
-
- #### [Segmind-Vega](https://huggingface.co/segmind/Segmind-Vega)
-
- `controlnetSegmindVegaRT`
- `img2imgSegmindVegaRT`
-
- ### Command line arguments and environment variables
-
- * `--host`: Host address (default: 0.0.0.0)
- * `--port`: Port number (default: 7860)
- * `--reload`: Reload code on change
- * `--max-queue-size`: Maximum queue size (optional)
- * `--timeout`: Timeout period (optional)
- * `--safety-checker`: Enable Safety Checker (optional)
- * `--torch-compile`: Use Torch Compile
- * `--use-taesd` / `--no-taesd`: Use Tiny Autoencoder
- * `--pipeline`: Pipeline to use (default: "txt2img")
- * `--ssl-certfile`: SSL Certificate File (optional)
- * `--ssl-keyfile`: SSL Key File (optional)
- * `--debug`: Print inference time
- * `--compel`: Enable Compel prompt weighting
- * `--sfast`: Enable Stable Fast
- * `--onediff`: Enable OneDiff
-
- If you run via `bash build-run.sh`, you can set the `PIPELINE` variable to choose which pipeline to run:
-
- ```bash
- PIPELINE=txt2imgLoraSDXL bash build-run.sh
- ```
-
- or set environment variables directly:
-
- ```bash
- TIMEOUT=120 SAFETY_CHECKER=True MAX_QUEUE_SIZE=4 python server/main.py --reload --pipeline txt2imgLoraSDXL
- ```
-
- If you're running locally and want to test it on Mobile Safari, the web server needs to be served over HTTPS; generate a self-signed certificate as shown below, or follow the instructions in this [comment](https://github.com/radames/Real-Time-Latent-Consistency-Model/issues/17#issuecomment-1811957196).
-
- ```bash
- openssl req -newkey rsa:4096 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
- python server/main.py --reload --ssl-certfile=certificate.pem --ssl-keyfile=key.pem
- ```
-
- ## Docker
-
- You need the NVIDIA Container Toolkit for Docker. The image defaults to the `controlnet` pipeline.
-
- ```bash
- docker build -t lcm-live .
- docker run -ti -p 7860:7860 --gpus all lcm-live
- ```
-
- To reuse model data from the host and avoid downloading it again, mount a cache directory into the container. You can change `~/.cache/huggingface` to any other directory, but if you use `huggingface-cli` locally you can share the same cache:
-
- ```bash
- docker run -ti -p 7860:7860 -e HF_HOME=/data -v ~/.cache/huggingface:/data --gpus all lcm-live
- ```
-
- or pass environment variables:
-
- ```bash
- docker run -ti -e PIPELINE=txt2imgLoraSDXL -p 7860:7860 --gpus all lcm-live
- ```
-
- # Demos on Hugging Face
-
- * [radames/Real-Time-Latent-Consistency-Model](https://huggingface.co/spaces/radames/Real-Time-Latent-Consistency-Model)
- * [radames/Real-Time-SD-Turbo](https://huggingface.co/spaces/radames/Real-Time-SD-Turbo)
- * [latent-consistency/Real-Time-LCM-ControlNet-Lora-SD1.5](https://huggingface.co/spaces/latent-consistency/Real-Time-LCM-ControlNet-Lora-SD1.5)
- * [latent-consistency/Real-Time-LCM-Text-to-Image-Lora-SD1.5](https://huggingface.co/spaces/latent-consistency/Real-Time-LCM-Text-to-Image-Lora-SD1.5)
- * [radames/Real-Time-Latent-Consistency-Model-Text-To-Image](https://huggingface.co/spaces/radames/Real-Time-Latent-Consistency-Model-Text-To-Image)
-
- https://github.com/radames/Real-Time-Latent-Consistency-Model/assets/102277/c4003ac5-e7ff-44c0-97d3-464bb659de70
 
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference