radames committed on
Commit
285a57a
2 Parent(s): 32c28a7 cb60b56

Merge branch 'main' into space-sdturbo

Files changed (33)
  1. README.md +68 -30
  2. build-run.sh +1 -1
  3. frontend/src/lib/components/AspectRatioSelect.svelte +27 -0
  4. frontend/src/lib/components/ImagePlayer.svelte +30 -6
  5. frontend/src/lib/components/MediaListSwitcher.svelte +14 -7
  6. frontend/src/lib/components/VideoInput.svelte +14 -15
  7. frontend/src/lib/icons/aspect.svelte +10 -0
  8. frontend/src/lib/icons/expand.svelte +10 -0
  9. frontend/src/lib/mediaStream.ts +27 -5
  10. frontend/src/lib/utils.ts +43 -0
  11. frontend/src/routes/+page.svelte +4 -4
  12. frontend/svelte.config.js +2 -2
  13. requirements.txt +9 -9
  14. config.py → server/config.py +3 -8
  15. connection_manager.py → server/connection_manager.py +0 -0
  16. device.py → server/device.py +0 -0
  17. main.py → server/main.py +3 -1
  18. {pipelines → server/pipelines}/__init__.py +0 -0
  19. {pipelines → server/pipelines}/controlnet.py +2 -2
  20. {pipelines → server/pipelines}/controlnetLoraSD15.py +0 -0
  21. {pipelines → server/pipelines}/controlnetLoraSDXL.py +0 -0
  22. {pipelines → server/pipelines}/controlnetSDTurbo.py +2 -2
  23. {pipelines → server/pipelines}/controlnetSDXLTurbo.py +0 -0
  24. {pipelines → server/pipelines}/controlnetSegmindVegaRT.py +0 -0
  25. {pipelines → server/pipelines}/img2img.py +0 -0
  26. {pipelines → server/pipelines}/img2imgSDTurbo.py +2 -2
  27. {pipelines → server/pipelines}/img2imgSDXLTurbo.py +0 -0
  28. {pipelines → server/pipelines}/img2imgSegmindVegaRT.py +0 -0
  29. {pipelines → server/pipelines}/txt2img.py +0 -0
  30. {pipelines → server/pipelines}/txt2imgLora.py +0 -0
  31. {pipelines → server/pipelines}/txt2imgLoraSDXL.py +0 -0
  32. {pipelines → server/pipelines}/utils/canny_gpu.py +0 -0
  33. util.py → server/util.py +0 -0
README.md CHANGED
@@ -27,38 +27,39 @@ You need CUDA and Python 3.10, Node > 19, Mac with an M1/M2/M3 chip or Intel Arc
27
  ```bash
28
  python -m venv venv
29
  source venv/bin/activate
30
- pip3 install -r requirements.txt
31
  cd frontend && npm install && npm run build && cd ..
32
- # fastest pipeline
33
- python run.py --reload --pipeline img2imgSD21Turbo
34
  ```
35
 
36
- # Pipelines
37
- You can build your own pipeline following examples here [here](pipelines),
38
- don't forget to fuild the frontend first
39
  ```bash
40
  cd frontend && npm install && npm run build && cd ..
41
  ```
42
43
  # LCM
44
  ### Image to Image
45
 
46
  ```bash
47
- python run.py --reload --pipeline img2img
48
  ```
49
 
50
  # LCM
51
  ### Text to Image
52
 
53
  ```bash
54
- python run.py --reload --pipeline txt2img
55
  ```
56
 
57
  ### Image to Image ControlNet Canny
58
 
59
-
60
  ```bash
61
- python run.py --reload --pipeline controlnet
62
  ```
63
 
64
 
@@ -67,39 +68,73 @@ python run.py --reload --pipeline controlnet
67
  Using LCM-LoRA gives it the superpower of running inference in as little as 4 steps. [Learn more here](https://huggingface.co/blog/lcm_lora) or read the [technical report](https://huggingface.co/papers/2311.05556)
68
 
69
 
70
-
71
  ### Image to Image ControlNet Canny LoRa
72
 
73
  ```bash
74
- python run.py --reload --pipeline controlnetLoraSD15
75
  ```
76
  or SDXL; note that SDXL is slower than SD15, since inference runs on 1024x1024 images
77
 
78
  ```bash
79
- python run.py --reload --pipeline controlnetLoraSDXL
80
  ```
81
 
82
  ### Text to Image
83
 
84
  ```bash
85
- python run.py --reload --pipeline txt2imgLora
86
  ```
87
 
88
- or
89
-
90
  ```bash
91
- python run.py --reload --pipeline txt2imgLoraSDXL
92
  ```
93
 
94
 
95
  ### Setting environment variables
96
 
97
 
98
- `TIMEOUT`: limit user session timeout
99
- `SAFETY_CHECKER`: disabled if you want NSFW filter off
100
- `MAX_QUEUE_SIZE`: limit number of users on current app instance
101
- `TORCH_COMPILE`: enable if you want to use torch compile for faster inference works well on A100 GPUs
102
- `USE_TAESD`: enable if you want to use Autoencoder Tiny
103
 
104
  If you run with `bash build-run.sh`, you can set the `PIPELINE` variable to choose which pipeline to run
105
 
@@ -110,14 +145,14 @@ PIPELINE=txt2imgLoraSDXL bash build-run.sh
110
  and setting environment variables
111
 
112
  ```bash
113
- TIMEOUT=120 SAFETY_CHECKER=True MAX_QUEUE_SIZE=4 python run.py --reload --pipeline txt2imgLoraSDXL
114
  ```
115
 
116
  If you're running locally and want to test it on Mobile Safari, the webserver needs to be served over HTTPS; alternatively, follow the instructions in my [comment](https://github.com/radames/Real-Time-Latent-Consistency-Model/issues/17#issuecomment-1811957196)
117
 
118
  ```bash
119
  openssl req -newkey rsa:4096 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
120
- python run.py --reload --ssl-certfile=certificate.pem --ssl-keyfile=key.pem
121
  ```
122
 
123
  ## Docker
@@ -141,15 +176,18 @@ or with environment variables
141
  ```bash
142
  docker run -ti -e PIPELINE=txt2imgLoraSDXL -p 7860:7860 --gpus all lcm-live
143
  ```
144
- # Development Mode
145
-
146
 
147
- ```bash
148
- python run.py --reload
149
- ```
150
 
151
  # Demo on Hugging Face
152
 
153
- https://huggingface.co/spaces/radames/Real-Time-Latent-Consistency-Model
154
 
155
  https://github.com/radames/Real-Time-Latent-Consistency-Model/assets/102277/c4003ac5-e7ff-44c0-97d3-464bb659de70
 
27
  ```bash
28
  python -m venv venv
29
  source venv/bin/activate
30
+ pip3 install -r server/requirements.txt
31
  cd frontend && npm install && npm run build && cd ..
32
+ python server/main.py --reload --pipeline img2imgSDTurbo
 
33
  ```
34
 
35
+ Don't forget to build the frontend!
36
+
 
37
  ```bash
38
  cd frontend && npm install && npm run build && cd ..
39
  ```
40
 
41
+ # Pipelines
42
+ You can build your own pipeline by following the examples [here](pipelines).
43
+
44
+
45
  # LCM
46
  ### Image to Image
47
 
48
  ```bash
49
+ python server/main.py --reload --pipeline img2img
50
  ```
51
 
52
  # LCM
53
  ### Text to Image
54
 
55
  ```bash
56
+ python server/main.py --reload --pipeline txt2img
57
  ```
58
 
59
  ### Image to Image ControlNet Canny
60
 
 
61
  ```bash
62
+ python server/main.py --reload --pipeline controlnet
63
  ```
64
 
65
 
 
68
  Using LCM-LoRA gives it the superpower of running inference in as little as 4 steps. [Learn more here](https://huggingface.co/blog/lcm_lora) or read the [technical report](https://huggingface.co/papers/2311.05556)
69
 
70
 
 
71
  ### Image to Image ControlNet Canny LoRa
72
 
73
  ```bash
74
+ python server/main.py --reload --pipeline controlnetLoraSD15
75
  ```
76
  or SDXL; note that SDXL is slower than SD15, since inference runs on 1024x1024 images
77
 
78
  ```bash
79
+ python server/main.py --reload --pipeline controlnetLoraSDXL
80
  ```
81
 
82
  ### Text to Image
83
 
84
  ```bash
85
+ python server/main.py --reload --pipeline txt2imgLora
86
  ```
87
 
 
 
88
  ```bash
89
+ python server/main.py --reload --pipeline txt2imgLoraSDXL
90
  ```
91
+ # Available Pipelines
92
+
93
+ #### [LCM](https://huggingface.co/SimianLuo/LCM_Dreamshaper_v7)
94
+
95
+ `img2img`
96
+ `txt2img`
97
+ `controlnet`
98
+ `txt2imgLora`
99
+ `controlnetLoraSD15`
100
+
101
+ #### [SDXL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
102
+ `controlnetLoraSDXL`
103
+ `txt2imgLoraSDXL`
104
+
105
+ #### [SDXL Turbo](https://huggingface.co/stabilityai/sdxl-turbo)
106
+
107
+ `img2imgSDXLTurbo`
108
+ `controlnetSDXLTurbo`
109
+
110
+
111
+ #### [SDTurbo](https://huggingface.co/stabilityai/sd-turbo)
112
+ `img2imgSDTurbo`
113
+ `controlnetSDTurbo`
114
+
115
+ #### [Segmind-Vega](https://huggingface.co/segmind/Segmind-Vega)
116
+ `controlnetSegmindVegaRT`
117
+ `img2imgSegmindVegaRT`
118
 
119
 
120
  ### Setting environment variables
121
 
122
 
123
+ * `--host`: Host address (default: 0.0.0.0)
124
+ * `--port`: Port number (default: 7860)
125
+ * `--reload`: Reload code on change
126
+ * `--max-queue-size`: Maximum queue size (optional)
127
+ * `--timeout`: Timeout period (optional)
128
+ * `--safety-checker`: Enable Safety Checker (optional)
129
+ * `--torch-compile`: Use Torch Compile
130
+ * `--use-taesd` / `--no-taesd`: Use Tiny Autoencoder
131
+ * `--pipeline`: Pipeline to use (default: "txt2img")
132
+ * `--ssl-certfile`: SSL Certificate File (optional)
133
+ * `--ssl-keyfile`: SSL Key File (optional)
134
+ * `--debug`: Print Inference time
135
+ * `--compel`: Enable Compel prompt weighting
136
+ * `--sfast`: Enable Stable Fast
137
+ * `--onediff`: Enable OneDiff
138
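A minimal sketch of how a few of the flags listed above could be parsed with `argparse`; it mirrors the flag names in the list, but the defaults shown here are assumptions for illustration, not the project's real `server/config.py`.

```python
import argparse

# Illustrative parser covering a subset of the flags above.
# Defaults are assumptions based on the README text.
parser = argparse.ArgumentParser(description="Run the app")
parser.add_argument("--host", type=str, default="0.0.0.0", help="Host address")
parser.add_argument("--port", type=int, default=7860, help="Port number")
parser.add_argument("--reload", action="store_true", help="Reload code on change")
parser.add_argument("--pipeline", type=str, default="txt2img", help="Pipeline to use")

# Parse an example command line instead of sys.argv.
args = parser.parse_args(["--port", "8000", "--pipeline", "img2imgSDTurbo", "--reload"])
print(args.host, args.port, args.pipeline, args.reload)
```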
 
139
  If you run with `bash build-run.sh`, you can set the `PIPELINE` variable to choose which pipeline to run
140
 
 
145
  and setting environment variables
146
 
147
  ```bash
148
+ TIMEOUT=120 SAFETY_CHECKER=True MAX_QUEUE_SIZE=4 python server/main.py --reload --pipeline txt2imgLoraSDXL
149
  ```
150
 
151
  If you're running locally and want to test it on Mobile Safari, the webserver needs to be served over HTTPS; alternatively, follow the instructions in my [comment](https://github.com/radames/Real-Time-Latent-Consistency-Model/issues/17#issuecomment-1811957196)
152
 
153
  ```bash
154
  openssl req -newkey rsa:4096 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
155
+ python server/main.py --reload --ssl-certfile=certificate.pem --ssl-keyfile=key.pem
156
  ```
157
 
158
  ## Docker
 
176
  ```bash
177
  docker run -ti -e PIPELINE=txt2imgLoraSDXL -p 7860:7860 --gpus all lcm-live
178
  ```
 
 
179
 
 
 
 
180
 
181
  # Demo on Hugging Face
182
 
183
+
184
+ * [radames/Real-Time-Latent-Consistency-Model](https://huggingface.co/spaces/radames/Real-Time-Latent-Consistency-Model)
185
+ * [radames/Real-Time-SD-Turbo](https://huggingface.co/spaces/radames/Real-Time-SD-Turbo)
186
+ * [latent-consistency/Real-Time-LCM-ControlNet-Lora-SD1.5](https://huggingface.co/spaces/latent-consistency/Real-Time-LCM-ControlNet-Lora-SD1.5)
187
+ * [latent-consistency/Real-Time-LCM-Text-to-Image-Lora-SD1.5](https://huggingface.co/spaces/latent-consistency/Real-Time-LCM-Text-to-Image-Lora-SD1.5)
188
+ * [radames/Real-Time-Latent-Consistency-Model-Text-To-Image](https://huggingface.co/spaces/radames/Real-Time-Latent-Consistency-Model-Text-To-Image)
189
+
190
+
191
+
192
 
193
  https://github.com/radames/Real-Time-Latent-Consistency-Model/assets/102277/c4003ac5-e7ff-44c0-97d3-464bb659de70
build-run.sh CHANGED
@@ -17,4 +17,4 @@ if [ -z ${COMPILE+x} ]; then
17
  fi
18
  echo -e "\033[1;32m\npipeline: $PIPELINE \033[0m"
19
  echo -e "\033[1;32m\ncompile: $COMPILE \033[0m"
20
- python3 main.py --port 7860 --host 0.0.0.0 --pipeline $PIPELINE $COMPILE
 
17
  fi
18
  echo -e "\033[1;32m\npipeline: $PIPELINE \033[0m"
19
  echo -e "\033[1;32m\ncompile: $COMPILE \033[0m"
20
+ python3 ./server/main.py --port 7860 --host 0.0.0.0 --pipeline $PIPELINE $COMPILE
frontend/src/lib/components/AspectRatioSelect.svelte ADDED
@@ -0,0 +1,27 @@
1
+ <script lang="ts">
2
+ import { createEventDispatcher } from 'svelte';
3
+
4
+ let options: string[] = ['1:1', '16:9', '4:3', '3:2', '3:4', '9:16'];
5
+ export let aspectRatio: number = 1;
6
+ const dispatchEvent = createEventDispatcher();
7
+
8
+ function onChange(e: Event) {
9
+ const target = e.target as HTMLSelectElement;
10
+ const value = target.value;
11
+ const [width, height] = value.split(':').map((v) => parseInt(v));
12
+ aspectRatio = width / height;
13
+ dispatchEvent('change', aspectRatio);
14
+ }
15
+ </script>
16
+
17
+ <div class="relative">
18
+ <select
19
+ on:change={onChange}
20
+ title="Aspect Ratio"
21
+ class="border-1 block cursor-pointer rounded-md border-gray-800 border-opacity-50 bg-slate-100 bg-opacity-30 p-1 font-medium text-white"
22
+ >
23
+ {#each options as option, i}
24
+ <option value={option}>{option}</option>
25
+ {/each}
26
+ </select>
27
+ </div>
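The new `AspectRatioSelect` component converts a `W:H` option string into a numeric ratio via split-and-divide. The same conversion can be sketched language-agnostically in Python (the function name is illustrative):

```python
def parse_aspect_ratio(option: str) -> float:
    # "16:9" -> 16/9; mirrors the split(':')/parseInt logic in the Svelte component.
    width, height = (int(v) for v in option.split(":"))
    return width / height

print(parse_aspect_ratio("16:9"))
```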
frontend/src/lib/components/ImagePlayer.svelte CHANGED
@@ -4,11 +4,14 @@
4
 
5
  import Button from '$lib/components/Button.svelte';
6
  import Floppy from '$lib/icons/floppy.svelte';
7
- import { snapImage } from '$lib/utils';
 
8
 
9
  $: isLCMRunning = $lcmLiveStatus !== LCMLiveStatus.DISCONNECTED;
10
  $: console.log('isLCMRunning', isLCMRunning);
11
  let imageEl: HTMLImageElement;
 
 
12
  async function takeSnapshot() {
13
  if (isLCMRunning) {
14
  await snapImage(imageEl, {
@@ -19,6 +22,18 @@
19
  });
20
  }
21
  }
22
  </script>
23
 
24
  <div
@@ -26,12 +41,21 @@
26
  >
27
  <!-- svelte-ignore a11y-missing-attribute -->
28
  {#if isLCMRunning}
29
- <img
30
- bind:this={imageEl}
31
- class="aspect-square w-full rounded-lg"
32
- src={'/api/stream/' + $streamId}
33
- />
 
 
34
  <div class="absolute bottom-1 right-1">
 
 
 
 
 
 
 
35
  <Button
36
  on:click={takeSnapshot}
37
  disabled={!isLCMRunning}
 
4
 
5
  import Button from '$lib/components/Button.svelte';
6
  import Floppy from '$lib/icons/floppy.svelte';
7
+ import Expand from '$lib/icons/expand.svelte';
8
+ import { snapImage, expandWindow } from '$lib/utils';
9
 
10
  $: isLCMRunning = $lcmLiveStatus !== LCMLiveStatus.DISCONNECTED;
11
  $: console.log('isLCMRunning', isLCMRunning);
12
  let imageEl: HTMLImageElement;
13
+ let expandedWindow: Window;
14
+ let isExpanded = false;
15
  async function takeSnapshot() {
16
  if (isLCMRunning) {
17
  await snapImage(imageEl, {
 
22
  });
23
  }
24
  }
25
+ async function toggleFullscreen() {
26
+ if (isLCMRunning && !isExpanded) {
27
+ expandedWindow = expandWindow('/api/stream/' + $streamId);
28
+ expandedWindow.addEventListener('beforeunload', () => {
29
+ isExpanded = false;
30
+ });
31
+ isExpanded = true;
32
+ } else {
33
+ expandedWindow?.close();
34
+ isExpanded = false;
35
+ }
36
+ }
37
  </script>
38
 
39
  <div
 
41
  >
42
  <!-- svelte-ignore a11y-missing-attribute -->
43
  {#if isLCMRunning}
44
+ {#if !isExpanded}
45
+ <img
46
+ bind:this={imageEl}
47
+ class="aspect-square w-full rounded-lg"
48
+ src={'/api/stream/' + $streamId}
49
+ />
50
+ {/if}
51
  <div class="absolute bottom-1 right-1">
52
+ <Button
53
+ on:click={toggleFullscreen}
54
+ title={'Expand Fullscreen'}
55
+ classList={'text-sm ml-auto text-white p-1 shadow-lg rounded-lg opacity-50'}
56
+ >
57
+ <Expand classList={''} />
58
+ </Button>
59
  <Button
60
  on:click={takeSnapshot}
61
  disabled={!isLCMRunning}
frontend/src/lib/components/MediaListSwitcher.svelte CHANGED
@@ -1,21 +1,28 @@
1
  <script lang="ts">
2
  import { mediaDevices, mediaStreamActions } from '$lib/mediaStream';
3
  import Screen from '$lib/icons/screen.svelte';
 
4
  import { onMount } from 'svelte';
5
 
6
  let deviceId: string = '';
 
 
 
 
 
7
  $: {
8
- console.log($mediaDevices);
9
  }
10
  $: {
11
- console.log(deviceId);
12
  }
13
- onMount(() => {
14
- deviceId = $mediaDevices[0].deviceId;
15
- });
16
  </script>
17
 
18
- <div class="flex items-center justify-center text-xs">
 
 
 
 
19
  <button
20
  title="Share your screen"
21
  class="border-1 my-1 flex cursor-pointer gap-1 rounded-md border-gray-500 border-opacity-50 bg-slate-100 bg-opacity-30 p-1 font-medium text-white"
@@ -28,7 +35,7 @@
28
  {#if $mediaDevices}
29
  <select
30
  bind:value={deviceId}
31
- on:change={() => mediaStreamActions.switchCamera(deviceId)}
32
  id="devices-list"
33
  class="border-1 block cursor-pointer rounded-md border-gray-800 border-opacity-50 bg-slate-100 bg-opacity-30 p-1 font-medium text-white"
34
  >
 
1
  <script lang="ts">
2
  import { mediaDevices, mediaStreamActions } from '$lib/mediaStream';
3
  import Screen from '$lib/icons/screen.svelte';
4
+ import AspectRatioSelect from './AspectRatioSelect.svelte';
5
  import { onMount } from 'svelte';
6
 
7
  let deviceId: string = '';
8
+ let aspectRatio: number = 1;
9
+
10
+ onMount(() => {
11
+ deviceId = $mediaDevices[0].deviceId;
12
+ });
13
  $: {
14
+ console.log(deviceId);
15
  }
16
  $: {
17
+ console.log(aspectRatio);
18
  }
 
 
 
19
  </script>
20
 
21
+ <div class="flex items-center justify-center text-xs backdrop-blur-sm backdrop-grayscale">
22
+ <AspectRatioSelect
23
+ bind:aspectRatio
24
+ on:change={() => mediaStreamActions.switchCamera(deviceId, aspectRatio)}
25
+ />
26
  <button
27
  title="Share your screen"
28
  class="border-1 my-1 flex cursor-pointer gap-1 rounded-md border-gray-500 border-opacity-50 bg-slate-100 bg-opacity-30 p-1 font-medium text-white"
 
35
  {#if $mediaDevices}
36
  <select
37
  bind:value={deviceId}
38
+ on:change={() => mediaStreamActions.switchCamera(deviceId, aspectRatio)}
39
  id="devices-list"
40
  class="border-1 block cursor-pointer rounded-md border-gray-800 border-opacity-50 bg-slate-100 bg-opacity-30 p-1 font-medium text-white"
41
  >
frontend/src/lib/components/VideoInput.svelte CHANGED
@@ -10,6 +10,7 @@
10
  mediaDevices
11
  } from '$lib/mediaStream';
12
  import MediaListSwitcher from './MediaListSwitcher.svelte';
 
13
  export let width = 512;
14
  export let height = 512;
15
  const size = { width, height };
@@ -32,6 +33,7 @@
32
  $: {
33
  console.log(selectedDevice);
34
  }
 
35
  onDestroy(() => {
36
  if (videoFrameCallbackId) videoEl.cancelVideoFrameCallback(videoFrameCallbackId);
37
  });
@@ -47,18 +49,15 @@
47
  }
48
  const videoWidth = videoEl.videoWidth;
49
  const videoHeight = videoEl.videoHeight;
50
- let height0 = videoHeight;
51
- let width0 = videoWidth;
52
- let x0 = 0;
53
- let y0 = 0;
54
- if (videoWidth > videoHeight) {
55
- width0 = videoHeight;
56
- x0 = (videoWidth - videoHeight) / 2;
57
- } else {
58
- height0 = videoWidth;
59
- y0 = (videoHeight - videoWidth) / 2;
60
- }
61
- ctx.drawImage(videoEl, x0, y0, width0, height0, 0, 0, size.width, size.height);
62
  const blob = await new Promise<Blob>((resolve) => {
63
  canvasEl.toBlob(
64
  (blob) => {
@@ -78,14 +77,14 @@
78
  </script>
79
 
80
  <div class="relative mx-auto max-w-lg overflow-hidden rounded-lg border border-slate-300">
81
- <div class="relative z-10 aspect-square w-full object-cover">
82
  {#if $mediaDevices.length > 0}
83
- <div class="absolute bottom-0 right-0 z-10">
84
  <MediaListSwitcher />
85
  </div>
86
  {/if}
87
  <video
88
- class="pointer-events-none aspect-square w-full object-cover"
89
  bind:this={videoEl}
90
  on:loadeddata={() => {
91
  videoIsReady = true;
 
10
  mediaDevices
11
  } from '$lib/mediaStream';
12
  import MediaListSwitcher from './MediaListSwitcher.svelte';
13
+
14
  export let width = 512;
15
  export let height = 512;
16
  const size = { width, height };
 
33
  $: {
34
  console.log(selectedDevice);
35
  }
36
+
37
  onDestroy(() => {
38
  if (videoFrameCallbackId) videoEl.cancelVideoFrameCallback(videoFrameCallbackId);
39
  });
 
49
  }
50
  const videoWidth = videoEl.videoWidth;
51
  const videoHeight = videoEl.videoHeight;
52
+ // scale down video to fit canvas, size.width, size.height
53
+ const scale = Math.min(size.width / videoWidth, size.height / videoHeight);
54
+ const width0 = videoWidth * scale;
55
+ const height0 = videoHeight * scale;
56
+ const x0 = (size.width - width0) / 2;
57
+ const y0 = (size.height - height0) / 2;
58
+ ctx.clearRect(0, 0, size.width, size.height);
59
+ ctx.drawImage(videoEl, x0, y0, width0, height0);
60
+
 
 
 
61
  const blob = await new Promise<Blob>((resolve) => {
62
  canvasEl.toBlob(
63
  (blob) => {
 
77
  </script>
78
 
79
  <div class="relative mx-auto max-w-lg overflow-hidden rounded-lg border border-slate-300">
80
+ <div class="relative z-10 flex aspect-square w-full items-center justify-center object-cover">
81
  {#if $mediaDevices.length > 0}
82
+ <div class="absolute bottom-0 right-0 z-10 w-full bg-slate-400 bg-opacity-40">
83
  <MediaListSwitcher />
84
  </div>
85
  {/if}
86
  <video
87
+ class="pointer-events-none aspect-square w-full justify-center object-contain"
88
  bind:this={videoEl}
89
  on:loadeddata={() => {
90
  videoIsReady = true;
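The new frame-grab logic in `VideoInput.svelte` replaces center-cropping with contain-style letterboxing: scale the video so the whole frame fits the canvas, then center it. The arithmetic can be sketched as follows (pure math mirroring the TypeScript above; names are illustrative):

```python
def contain_fit(video_w, video_h, canvas_w, canvas_h):
    # Scale the video down so it fits entirely inside the canvas,
    # then center it (letterbox/pillarbox), as in onFrameChange.
    scale = min(canvas_w / video_w, canvas_h / video_h)
    w, h = video_w * scale, video_h * scale
    x, y = (canvas_w - w) / 2, (canvas_h - h) / 2
    return x, y, w, h

# A 1280x720 camera frame drawn into a 512x512 canvas.
print(contain_fit(1280, 720, 512, 512))
```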
frontend/src/lib/icons/aspect.svelte ADDED
@@ -0,0 +1,10 @@
1
+ <script lang="ts">
2
+ export let classList: string = '';
3
+ </script>
4
+
5
+ <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 448 512" height="16px" class={classList}>
6
+ <path
7
+ fill="currentColor"
8
+ d="M32 32C14.3 32 0 46.3 0 64v96c0 17.7 14.3 32 32 32s32-14.3 32-32V96h64c17.7 0 32-14.3 32-32s-14.3-32-32-32H32zM64 352c0-17.7-14.3-32-32-32s-32 14.3-32 32v96c0 17.7 14.3 32 32 32h96c17.7 0 32-14.3 32-32s-14.3-32-32-32H64V352zM320 32c-17.7 0-32 14.3-32 32s14.3 32 32 32h64v64c0 17.7 14.3 32 32 32s32-14.3 32-32V64c0-17.7-14.3-32-32-32H320zM448 352c0-17.7-14.3-32-32-32s-32 14.3-32 32v64H320c-17.7 0-32 14.3-32 32s14.3 32 32 32h96c17.7 0 32-14.3 32-32V352z"
9
+ />
10
+ </svg>
frontend/src/lib/icons/expand.svelte ADDED
@@ -0,0 +1,10 @@
1
+ <script lang="ts">
2
+ export let classList: string = '';
3
+ </script>
4
+
5
+ <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512" height="1em" class={classList}>
6
+ <path
7
+ fill="currentColor"
8
+ d="M.3 89.5C.1 91.6 0 93.8 0 96V224 416c0 35.3 28.7 64 64 64l384 0c35.3 0 64-28.7 64-64V224 96c0-35.3-28.7-64-64-64H64c-2.2 0-4.4 .1-6.5 .3c-9.2 .9-17.8 3.8-25.5 8.2C21.8 46.5 13.4 55.1 7.7 65.5c-3.9 7.3-6.5 15.4-7.4 24zM48 224H464l0 192c0 8.8-7.2 16-16 16L64 432c-8.8 0-16-7.2-16-16l0-192z"
9
+ />
10
+ </svg>
frontend/src/lib/mediaStream.ts CHANGED
@@ -1,5 +1,6 @@
1
- import { writable, type Writable, get } from 'svelte/store';
2
 
 
3
  export enum MediaStreamStatusEnum {
4
  INIT = "init",
5
  CONNECTED = "connected",
@@ -23,11 +24,17 @@ export const mediaStreamActions = {
23
  console.error(err);
24
  });
25
  },
26
- async start(mediaDevicedID?: string) {
27
  const constraints = {
28
  audio: false,
29
  video: {
30
- width: 1024, height: 1024, deviceId: mediaDevicedID
 
 
 
 
 
 
31
  }
32
  };
33
 
@@ -36,6 +43,7 @@ export const mediaStreamActions = {
36
  .then((stream) => {
37
  mediaStreamStatus.set(MediaStreamStatusEnum.CONNECTED);
38
  mediaStream.set(stream);
 
39
  })
40
  .catch((err) => {
41
  console.error(`${err.name}: ${err.message}`);
@@ -65,19 +73,33 @@ export const mediaStreamActions = {
65
  console.log(JSON.stringify(videoTrack.getConstraints(), null, 2));
66
  mediaStreamStatus.set(MediaStreamStatusEnum.CONNECTED);
67
  mediaStream.set(captureStream)
 
 
 
 
68
  } catch (err) {
69
  console.error(err);
70
  }
71
 
72
  },
73
- async switchCamera(mediaDevicedID: string) {
 
74
  if (get(mediaStreamStatus) !== MediaStreamStatusEnum.CONNECTED) {
75
  return;
76
  }
77
  const constraints = {
78
  audio: false,
79
- video: { width: 1024, height: 1024, deviceId: mediaDevicedID }
  };
 
81
  await navigator.mediaDevices
82
  .getUserMedia(constraints)
83
  .then((stream) => {
 
1
+ import { writable, type Writable, type Readable, get, derived } from 'svelte/store';
2
 
3
+ const BASE_HEIGHT = 720;
4
  export enum MediaStreamStatusEnum {
5
  INIT = "init",
6
  CONNECTED = "connected",
 
24
  console.error(err);
25
  });
26
  },
27
+ async start(mediaDevicedID?: string, aspectRatio: number = 1) {
28
  const constraints = {
29
  audio: false,
30
  video: {
31
+ width: {
32
+ ideal: BASE_HEIGHT * aspectRatio,
33
+ },
34
+ height: {
35
+ ideal: BASE_HEIGHT,
36
+ },
37
+ deviceId: mediaDevicedID
38
  }
39
  };
40
 
 
43
  .then((stream) => {
44
  mediaStreamStatus.set(MediaStreamStatusEnum.CONNECTED);
45
  mediaStream.set(stream);
46
+
47
  })
48
  .catch((err) => {
49
  console.error(`${err.name}: ${err.message}`);
 
73
  console.log(JSON.stringify(videoTrack.getConstraints(), null, 2));
74
  mediaStreamStatus.set(MediaStreamStatusEnum.CONNECTED);
75
  mediaStream.set(captureStream)
76
+
77
+ const capabilities = videoTrack.getCapabilities();
78
+ const aspectRatio = capabilities.aspectRatio;
79
+ console.log('Aspect Ratio Constraints:', aspectRatio);
80
  } catch (err) {
81
  console.error(err);
82
  }
83
 
84
  },
85
+ async switchCamera(mediaDevicedID: string, aspectRatio: number) {
86
+ console.log("Switching camera");
87
  if (get(mediaStreamStatus) !== MediaStreamStatusEnum.CONNECTED) {
88
  return;
89
  }
90
  const constraints = {
91
  audio: false,
92
+ video: {
93
+ width: {
94
+ ideal: BASE_HEIGHT * aspectRatio,
95
+ },
96
+ height: {
97
+ ideal: BASE_HEIGHT,
98
+ },
99
+ deviceId: mediaDevicedID
100
+ }
101
  };
102
+ console.log("Switching camera", constraints);
103
  await navigator.mediaDevices
104
  .getUserMedia(constraints)
105
  .then((stream) => {
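The new `getUserMedia` constraints derive the requested capture size from `BASE_HEIGHT` and the chosen aspect ratio: height is pinned at 720 and width scales with the ratio. A sketch of that computation (names mirror the TypeScript above; the dict shape follows the constraint objects it builds):

```python
BASE_HEIGHT = 720  # same constant as in mediaStream.ts

def ideal_dimensions(aspect_ratio: float) -> dict:
    # Builds the `video` constraint block: fixed ideal height,
    # ideal width scaled by the requested aspect ratio.
    return {
        "width": {"ideal": round(BASE_HEIGHT * aspect_ratio)},
        "height": {"ideal": BASE_HEIGHT},
    }

print(ideal_dimensions(16 / 9))
```

Note that `width`/`height` here are `ideal` hints, so the browser may still return a different native resolution; the canvas letterboxing in `VideoInput` absorbs that mismatch.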
frontend/src/lib/utils.ts CHANGED
@@ -36,3 +36,46 @@ export function snapImage(imageEl: HTMLImageElement, info: IImageInfo) {
36
  console.log(err);
37
  }
38
  }
36
  console.log(err);
37
  }
38
  }
39
+
40
+ export function expandWindow(streamURL: string): Window {
41
+ const html = `
42
+ <html>
43
+ <head>
44
+ <title>Real-Time Latent Consistency Model</title>
45
+ <style>
46
+ body {
47
+ margin: 0;
48
+ padding: 0;
49
+ background-color: black;
50
+ }
51
+ </style>
52
+ </head>
53
+ <body>
54
+ <script>
55
+ let isFullscreen = false;
56
+ window.onkeydown = function(event) {
57
+ switch (event.code) {
58
+ case "Escape":
59
+ window.close();
60
+ break;
61
+ case "Enter":
62
+ if (isFullscreen) {
63
+ document.exitFullscreen();
64
+ isFullscreen = false;
65
+ } else {
66
+ document.documentElement.requestFullscreen();
67
+ isFullscreen = true;
68
+ }
69
+ break;
70
+ }
71
+ }
72
+ </script>
73
+
74
+ <img src="${streamURL}" style="width: 100%; height: 100%; object-fit: contain;" />
75
+ </body>
76
+ </html>
77
+ `;
78
+ const newWindow = window.open("", "_blank", "width=1024,height=1024,scrollbars=0,resizable=1,toolbar=0,menubar=0,location=0,directories=0,status=0") as Window;
79
+ newWindow.document.write(html);
80
+ return newWindow;
81
+ }
frontend/src/routes/+page.svelte CHANGED
@@ -113,19 +113,19 @@
113
  {/if}
114
  </article>
115
  {#if pipelineParams}
116
- <article class="my-3 grid grid-cols-1 gap-3 sm:grid-cols-2">
117
  {#if isImageMode}
118
- <div class="sm:col-start-1">
119
  <VideoInput
120
  width={Number(pipelineParams.width.default)}
121
  height={Number(pipelineParams.height.default)}
122
  ></VideoInput>
123
  </div>
124
  {/if}
125
- <div class={isImageMode ? 'sm:col-start-2' : 'col-span-2'}>
126
  <ImagePlayer />
127
  </div>
128
- <div class="sm:col-span-2">
129
  <Button on:click={toggleLcmLive} {disabled} classList={'text-lg my-1 p-2'}>
130
  {#if isLCMRunning}
131
  Stop
 
113
  {/if}
114
  </article>
115
  {#if pipelineParams}
116
+ <article class="my-3 grid grid-cols-1 gap-3 sm:grid-cols-4">
117
  {#if isImageMode}
118
+ <div class="col-span-2 sm:col-start-1">
119
  <VideoInput
120
  width={Number(pipelineParams.width.default)}
121
  height={Number(pipelineParams.height.default)}
122
  ></VideoInput>
123
  </div>
124
  {/if}
125
+ <div class={isImageMode ? 'col-span-2 sm:col-start-3' : 'col-span-4'}>
126
  <ImagePlayer />
127
  </div>
128
+ <div class="sm:col-span-4 sm:row-start-2">
129
  <Button on:click={toggleLcmLive} {disabled} classList={'text-lg my-1 p-2'}>
130
  {#if isLCMRunning}
131
  Stop
frontend/svelte.config.js CHANGED
@@ -5,8 +5,8 @@ const config = {
5
  preprocess: vitePreprocess({ postcss: true }),
6
  kit: {
7
  adapter: adapter({
8
- pages: '../public',
9
- assets: '../public',
10
  fallback: undefined,
11
  precompress: false,
12
  strict: true
 
5
  preprocess: vitePreprocess({ postcss: true }),
6
  kit: {
7
  adapter: adapter({
8
+ pages: 'public',
9
+ assets: 'public',
10
  fallback: undefined,
11
  precompress: false,
12
  strict: true
requirements.txt CHANGED
@@ -1,16 +1,16 @@
1
- diffusers==0.24.0
2
- transformers==4.35.2
3
  --extra-index-url https://download.pytorch.org/whl/cu121;
4
  torch
5
- fastapi==0.104.1
6
- uvicorn[standard]==0.24.0.post1
7
- Pillow==10.1.0
8
- accelerate==0.24.0
9
  compel==2.0.2
10
  controlnet-aux==0.0.7
11
  peft==0.6.0
12
  xformers; sys_platform != 'darwin' or platform_machine != 'arm64'
13
  markdown2
14
- stable_fast @ https://github.com/chengzeyi/stable-fast/releases/download/v0.0.15.post1/stable_fast-0.0.15.post1+torch211cu121-cp310-cp310-manylinux2014_x86_64.whl
15
- oneflow @ https://oneflow-pro.oss-cn-beijing.aliyuncs.com/branch/community/cu121/794a56cc787217f46b21f5cbc84f65295664b82c/oneflow-0.9.1%2Bcu121.git.794a56c-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
16
- git+https://github.com/Oneflow-Inc/onediff.git@main#egg=onediff
 
1
+ git+https://github.com/huggingface/diffusers
2
+ transformers==4.36.2
3
  --extra-index-url https://download.pytorch.org/whl/cu121;
4
  torch
5
+ fastapi==0.108.0
6
+ uvicorn[standard]==0.25.0
7
+ Pillow==10.2.0
8
+ accelerate==0.25.0
9
  compel==2.0.2
10
  controlnet-aux==0.0.7
11
  peft==0.6.0
12
  xformers; sys_platform != 'darwin' or platform_machine != 'arm64'
13
  markdown2
14
+ stable_fast @ https://github.com/chengzeyi/stable-fast/releases/download/v1.0.2/stable_fast-1.0.2+torch211cu121-cp310-cp310-manylinux2014_x86_64.whl
15
+ oneflow @ https://oneflow-pro.oss-cn-beijing.aliyuncs.com/branch/community/cu121/a0df8f27528ab5d55211b05e809c6ce3e1070f29/oneflow-0.9.1.dev20240104%2Bcu121-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
16
+ git+https://github.com/siliconflow/onediff.git@main#egg=onediff
config.py → server/config.py RENAMED
@@ -7,7 +7,6 @@ class Args(NamedTuple):
7
  host: str
8
  port: int
9
  reload: bool
10
- mode: str
11
  max_queue_size: int
12
  timeout: float
13
  safety_checker: bool
@@ -17,7 +16,7 @@ class Args(NamedTuple):
17
  ssl_certfile: str
18
  ssl_keyfile: str
19
  sfast: bool
20
- oneflow: bool = False
21
  compel: bool = False
22
  debug: bool = False
23
 
@@ -35,15 +34,11 @@ TORCH_COMPILE = os.environ.get("TORCH_COMPILE", None) == "True"
35
  USE_TAESD = os.environ.get("USE_TAESD", "True") == "True"
36
  default_host = os.getenv("HOST", "0.0.0.0")
37
  default_port = int(os.getenv("PORT", "7860"))
38
- default_mode = os.getenv("MODE", "default")
39
 
40
  parser = argparse.ArgumentParser(description="Run the app")
41
  parser.add_argument("--host", type=str, default=default_host, help="Host address")
42
  parser.add_argument("--port", type=int, default=default_port, help="Port number")
43
  parser.add_argument("--reload", action="store_true", help="Reload code on change")
44
- parser.add_argument(
45
- "--mode", type=str, default=default_mode, help="App Inferece Mode: txt2img, img2img"
46
- )
47
  parser.add_argument(
48
  "--max-queue-size",
49
  dest="max_queue_size",
@@ -117,10 +112,10 @@ parser.add_argument(
117
  help="Enable Stable Fast",
118
  )
119
  parser.add_argument(
120
- "--oneflow",
121
  action="store_true",
122
  default=False,
123
- help="Enable OneFlow",
124
  )
125
  parser.set_defaults(taesd=USE_TAESD)
126
 
 
7
  host: str
8
  port: int
9
  reload: bool
 
10
  max_queue_size: int
11
  timeout: float
12
  safety_checker: bool
 
16
  ssl_certfile: str
17
  ssl_keyfile: str
18
  sfast: bool
19
+ onediff: bool = False
20
  compel: bool = False
21
  debug: bool = False
22
 
 
34
  USE_TAESD = os.environ.get("USE_TAESD", "True") == "True"
35
  default_host = os.getenv("HOST", "0.0.0.0")
36
  default_port = int(os.getenv("PORT", "7860"))
 
37
 
38
  parser = argparse.ArgumentParser(description="Run the app")
39
  parser.add_argument("--host", type=str, default=default_host, help="Host address")
40
  parser.add_argument("--port", type=int, default=default_port, help="Port number")
41
  parser.add_argument("--reload", action="store_true", help="Reload code on change")
 
 
 
42
  parser.add_argument(
43
  "--max-queue-size",
44
  dest="max_queue_size",
 
112
  help="Enable Stable Fast",
113
  )
114
  parser.add_argument(
115
+ "--onediff",
116
  action="store_true",
117
  default=False,
118
+ help="Enable OneDiff",
119
  )
120
  parser.set_defaults(taesd=USE_TAESD)
121
 
connection_manager.py → server/connection_manager.py RENAMED
File without changes
device.py → server/device.py RENAMED
File without changes
main.py → server/main.py RENAMED
@@ -155,7 +155,9 @@ class App:
155
  if not os.path.exists("public"):
156
  os.makedirs("public")
157
 
158
- self.app.mount("/", StaticFiles(directory="public", html=True), name="public")
 
 
159
 
160
 
161
  pipeline_class = get_pipeline_class(config.pipeline)
 
155
  if not os.path.exists("public"):
156
  os.makedirs("public")
157
 
158
+ self.app.mount(
159
+ "/", StaticFiles(directory="frontend/public", html=True), name="public"
160
+ )
161
 
162
 
163
  pipeline_class = get_pipeline_class(config.pipeline)
{pipelines → server/pipelines}/__init__.py RENAMED
File without changes
{pipelines → server/pipelines}/controlnet.py RENAMED
@@ -187,8 +187,8 @@ class Pipeline:
187
  config.enable_cuda_graph = True
188
  self.pipe = compile(self.pipe, config=config)
189
 
190
- if args.oneflow:
191
- print("\nRunning oneflow compile\n")
192
  from onediff.infer_compiler import oneflow_compile
193
 
194
  self.pipe.unet = oneflow_compile(self.pipe.unet)
 
187
  config.enable_cuda_graph = True
188
  self.pipe = compile(self.pipe, config=config)
189
 
190
+ if args.onediff:
191
+ print("\nRunning onediff compile\n")
192
  from onediff.infer_compiler import oneflow_compile
193
 
194
  self.pipe.unet = oneflow_compile(self.pipe.unet)
{pipelines → server/pipelines}/controlnetLoraSD15.py RENAMED
File without changes
{pipelines → server/pipelines}/controlnetLoraSDXL.py RENAMED
File without changes
{pipelines → server/pipelines}/controlnetSDTurbo.py RENAMED
@@ -194,8 +194,8 @@ class Pipeline:
194
  config.enable_cuda_graph = True
195
  self.pipe = compile(self.pipe, config=config)
196
 
197
- if args.oneflow:
198
- print("\nRunning oneflow compile\n")
199
  from onediff.infer_compiler import oneflow_compile
200
 
201
  self.pipe.unet = oneflow_compile(self.pipe.unet)
 
194
  config.enable_cuda_graph = True
195
  self.pipe = compile(self.pipe, config=config)
196
 
197
+ if args.onediff:
198
+ print("\nRunning onediff compile\n")
199
  from onediff.infer_compiler import oneflow_compile
200
 
201
  self.pipe.unet = oneflow_compile(self.pipe.unet)
{pipelines → server/pipelines}/controlnetSDXLTurbo.py RENAMED
File without changes
{pipelines → server/pipelines}/controlnetSegmindVegaRT.py RENAMED
File without changes
{pipelines → server/pipelines}/img2img.py RENAMED
File without changes
{pipelines → server/pipelines}/img2imgSDTurbo.py RENAMED
@@ -121,8 +121,8 @@ class Pipeline:
121
  config.enable_cuda_graph = True
122
  self.pipe = compile(self.pipe, config=config)
123
 
124
- if args.oneflow:
125
- print("\nRunning oneflow compile\n")
126
  from onediff.infer_compiler import oneflow_compile
127
 
128
  self.pipe.unet = oneflow_compile(self.pipe.unet)
 
121
  config.enable_cuda_graph = True
122
  self.pipe = compile(self.pipe, config=config)
123
 
124
+ if args.onediff:
125
+ print("\nRunning onediff compile\n")
126
  from onediff.infer_compiler import oneflow_compile
127
 
128
  self.pipe.unet = oneflow_compile(self.pipe.unet)
{pipelines → server/pipelines}/img2imgSDXLTurbo.py RENAMED
File without changes
{pipelines → server/pipelines}/img2imgSegmindVegaRT.py RENAMED
File without changes
{pipelines → server/pipelines}/txt2img.py RENAMED
File without changes
{pipelines → server/pipelines}/txt2imgLora.py RENAMED
File without changes
{pipelines → server/pipelines}/txt2imgLoraSDXL.py RENAMED
File without changes
{pipelines → server/pipelines}/utils/canny_gpu.py RENAMED
File without changes
util.py → server/util.py RENAMED
File without changes