tlwu committed
Commit cd1da00
1 Parent(s): a222c20

add sdxl-turbo example

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.data filter=lfs diff=lfs merge=lfs -text
ORT_CUDA/sdxl-turbo/engine/README.md ADDED
@@ -0,0 +1,103 @@
+ ---
+ license: openrail++
+ base_model: stabilityai/sdxl-turbo
+ language:
+ - en
+ tags:
+ - stable-diffusion
+ - sdxl
+ - onnxruntime
+ - onnx
+ - text-to-image
+ ---
+
+ # Stable Diffusion XL Turbo for ONNX Runtime
+
+ ## Introduction
+
+ This repository hosts the optimized versions of **SDXL Turbo** to accelerate inference with the ONNX Runtime CUDA execution provider.
+
+ See the [usage instructions](#usage-example) for how to run the SDXL pipeline with the ONNX files hosted in this repository.
+
+ ## Model Description
+
+ - **Developed by:** Stability AI
+ - **Model type:** Diffusion-based text-to-image generative model
+ - **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-xl-refiner-1.0/blob/main/LICENSE.md)
+ - **Model Description:** This is a conversion of the [SDXL-Turbo](https://huggingface.co/stabilityai/sdxl-turbo) model for [ONNX Runtime](https://github.com/microsoft/onnxruntime) inference with CUDA execution provider.
+
+ The VAE decoder is converted from [sdxl-vae-fp16-fix](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix). There are slight discrepancies between its output and that of the original VAE, but the decoded images should be [close enough for most purposes](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix/discussions/7#64c5c0f8e2e5c94bd04eaa80).
+
+ ## Performance Comparison
+
+ #### Latency
+
+ Below is the average latency of generating an image of size 512x512 on an NVIDIA A100-SXM4-80GB GPU:
+
+ | Engine | Batch Size | Steps | PyTorch 2.1 | ONNX Runtime CUDA |
+ |--------|------------|-------|-------------|-------------------|
+ | Static | 1          | 1     | 109.4 ms    | 43.9 ms           |
+ | Static | 4          | 1     | 247.0 ms    | 121.1 ms          |
+ | Static | 1          | 4     | 171.1 ms    | 97.5 ms           |
+ | Static | 4          | 4     | 390.5 ms    | 248.0 ms          |
+
+ Static means the engine is built for the given combination of batch size and image size, and a CUDA graph is used to speed up inference.
+
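+ For reference, the same CUDA graph behavior can be requested when creating an ONNX Runtime session directly in Python through the `enable_cuda_graph` provider option. A minimal sketch, run from the root of the cloned model repo (note that actually replaying the captured graph additionally requires I/O binding with fixed input and output buffers):
+
+ ```shell
+ python3 -c "
+ import onnxruntime as ort
+ # 'enable_cuda_graph' asks the CUDA EP to capture and replay a CUDA graph;
+ # it requires static input shapes, matching the 'Static' engines above.
+ sess = ort.InferenceSession(
+     'ORT_CUDA/sdxl-turbo/engine/vae.ort_cuda.fp16/model.onnx',
+     providers=[('CUDAExecutionProvider', {'enable_cuda_graph': '1'}),
+                'CPUExecutionProvider'])
+ print(sess.get_providers())
+ "
+ ```
+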
+ ## Usage Example
+
+ Follow the [demo instructions](https://github.com/microsoft/onnxruntime/blob/main/onnxruntime/python/tools/transformers/models/stable_diffusion/README.md#run-demo-with-docker). Example steps:
+
+ 0. Install nvidia-docker using these [instructions](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
+
+ 1. Clone the onnxruntime repository.
+ ```shell
+ git clone https://github.com/microsoft/onnxruntime
+ cd onnxruntime
+ ```
+
+ 2. Download the SDXL-Turbo ONNX files from this repository.
+ ```shell
+ git lfs install
+ git clone https://huggingface.co/tlwu/sdxl-turbo-onnxruntime
+ ```
+
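+ Optionally, confirm that Git LFS materialized the large model files (including the multi-GB `.data` files) rather than leaving pointer stubs:
+
+ ```shell
+ # Lists LFS-tracked files with their sizes
+ git -C sdxl-turbo-onnxruntime lfs ls-files --size
+ ```
+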
+ 3. Launch the Docker container.
+ ```shell
+ docker run --rm -it --gpus all -v $PWD:/workspace nvcr.io/nvidia/pytorch:23.10-py3 /bin/bash
+ ```
+
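+ Optionally, check inside the container that the GPU is visible and that the CUDA toolkit used in the next step is present:
+
+ ```shell
+ nvidia-smi        # the GPU should be listed
+ nvcc --version    # expected to report CUDA 12.2 in this container
+ ```
+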
+ 4. Build ONNX Runtime from source.
+ ```shell
+ export CUDACXX=/usr/local/cuda-12.2/bin/nvcc
+ git config --global --add safe.directory '*'
+ sh build.sh --config Release --build_shared_lib --parallel --use_cuda --cuda_version 12.2 \
+   --cuda_home /usr/local/cuda-12.2 --cudnn_home /usr/lib/x86_64-linux-gnu/ --build_wheel --skip_tests \
+   --use_tensorrt --tensorrt_home /usr/src/tensorrt \
+   --cmake_extra_defines onnxruntime_BUILD_UNIT_TESTS=OFF \
+   --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=80 \
+   --allow_running_as_root
+ python3 -m pip install build/Linux/Release/dist/onnxruntime_gpu-*-cp310-cp310-linux_x86_64.whl --force-reinstall
+ ```
+
+ If the GPU is not an A100, change CMAKE_CUDA_ARCHITECTURES=80 in the command line according to the GPU's compute capability (for example, 89 for RTX 4090, or 86 for RTX 3090). If your machine has less than 64GB of memory, replace --parallel with --parallel 4 --nvcc_threads 1 to avoid running out of memory.
+
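+ One way to look up the compute capability of the installed GPU (recent drivers; older `nvidia-smi` builds may not support this query field):
+
+ ```shell
+ # Prints e.g. "8.0" on A100; drop the dot (80) for CMAKE_CUDA_ARCHITECTURES
+ nvidia-smi --query-gpu=compute_cap --format=csv,noheader
+ ```
+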
+ 5. Install libraries and requirements.
+ ```shell
+ python3 -m pip install --upgrade pip
+ cd /workspace/onnxruntime/python/tools/transformers/models/stable_diffusion
+ python3 -m pip install -r requirements-cuda12.txt
+ python3 -m pip install --upgrade polygraphy onnx-graphsurgeon --extra-index-url https://pypi.ngc.nvidia.com
+ ```
+
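+ A quick check that the wheel built in step 4 registers the CUDA execution provider:
+
+ ```shell
+ python3 -c "import onnxruntime; print(onnxruntime.__version__, onnxruntime.get_available_providers())"
+ ```
+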
+ 6. Perform ONNX Runtime optimized inference.
+ ```shell
+ python3 demo_txt2img_xl.py \
+   "starry night over Golden Gate Bridge by van gogh" \
+   --version xl-turbo \
+   --width 1024 \
+   --height 1024 \
+   --denoising-steps 8 \
+   --work-dir /workspace/sdxl-turbo-onnxruntime
+ ```
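+
+ Here --work-dir points at the model repository cloned in step 2, so the demo can load the prebuilt engines under ORT_CUDA/sdxl-turbo/engine instead of exporting and optimizing the models from scratch.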
ORT_CUDA/sdxl-turbo/engine/clip.ort_cuda.fp16/model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c444caf5de76cdfc1d631b8df96c8206d90300e44c25c67b9f730dc411917ea2
+ size 246165433
ORT_CUDA/sdxl-turbo/engine/clip2.ort_cuda.fp16/model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e22b14a375aae73df2df5d8b008e1a5d5584df5ac6c37e21f60d06f2c4706407
+ size 134410
ORT_CUDA/sdxl-turbo/engine/clip2.ort_cuda.fp16/model.onnx.data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df8a30895d8eecdc82b8fc36bfd5f18879b6bfaba6eb8ade8b05d04aa6dfdad5
+ size 1389319680
ORT_CUDA/sdxl-turbo/engine/unetxl.ort_cuda.fp16/model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:127ce7791f6b46bd2ea113e6dc68acb6d1829e055bd7f7c11d37019c1fbdc5c7
+ size 704979
ORT_CUDA/sdxl-turbo/engine/unetxl.ort_cuda.fp16/model.onnx.data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e84089ea857d55d40d32306f08aa53aa495dacf877d49eff47193741e3d416c0
+ size 5135092480
ORT_CUDA/sdxl-turbo/engine/unetxl_canny.ort_cuda.fp16/model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:25a443d5b1442edb2669cf80c76733d13e572eb3078abe5b1c84dffb339a0916
+ size 1143058
ORT_CUDA/sdxl-turbo/engine/unetxl_canny.ort_cuda.fp16/model.onnx.data ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a7498566d95287761ac9af9479b6b89c2853181664c135c56e7bac71a92b40f9
+ size 7637183488
ORT_CUDA/sdxl-turbo/engine/vae.ort_cuda.fp16/model.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f475291593170ccb03a4916bf6a92048750471d0c67d55e5424ff93f01ab1f4
+ size 99070385