liuq641968816 committed • Commit 7e57630 • 1 Parent(s): 9221291

Upload README.md

README.md CHANGED

````diff
@@ -1,73 +1,14 @@
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-Our model checkpoints trained on [VITON-HD](https://github.com/shadow2496/VITON-HD) (half-body) and [Dress Code](https://github.com/aimagelab/dress-code) (full-body) have been released
-
-* 🤗 [Hugging Face link](https://huggingface.co/levihsu/OOTDiffusion) for ***checkpoints*** (ootd, humanparsing, and openpose)
-* 📢📢 We support ONNX for [humanparsing](https://github.com/GoGoDuck912/Self-Correction-Human-Parsing) now. Most environmental issues should have been addressed : )
-* Please also download [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) into the ***checkpoints*** folder
-* We've only tested our code and models on Linux (Ubuntu 22.04)
-
-![demo](images/demo.png)
-![workflow](images/workflow.png)
-
-## Installation
-1. Clone the repository
-
-```sh
-git clone https://github.com/levihsu/OOTDiffusion
-```
-
-2. Create a conda environment and install the required packages
-
-```sh
-conda create -n ootd python==3.10
-conda activate ootd
-pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2
-pip install -r requirements.txt
-```
-
-## Inference
-1. Half-body model
-
-```sh
-cd OOTDiffusion/run
-python run_ootd.py --model_path <model-image-path> --cloth_path <cloth-image-path> --scale 2.0 --sample 4
-```
-
-2. Full-body model
-
-> Garment category must be paired: 0 = upperbody; 1 = lowerbody; 2 = dress
-
-```sh
-cd OOTDiffusion/run
-python run_ootd.py --model_path <model-image-path> --cloth_path <cloth-image-path> --model_type dc --category 2 --scale 2.0 --sample 4
-```
-
-## Citation
-```
-@article{xu2024ootdiffusion,
-  title={OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on},
-  author={Xu, Yuhao and Gu, Tao and Chen, Weifeng and Chen, Chengcai},
-  journal={arXiv preprint arXiv:2403.01779},
-  year={2024}
-}
-```
-
-## TODO List
-- [x] Paper
-- [x] Gradio demo
-- [x] Inference code
-- [x] Model weights
-- [ ] Training code
+---
+title: OOTDiffusion
+emoji: 🥼
+colorFrom: yellow
+colorTo: pink
+sdk: gradio
+sdk_version: 4.16.0
+app_file: ./run/gradio_ootd.py
+pinned: false
+license: cc-by-nc-sa-4.0
+short_description: High-quality virtual try-on ~ Your cyber fitting room
+---
+
+Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
````
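
The removed README points at the released weights on Hugging Face (ootd, humanparsing, openpose) plus openai/clip-vit-large-patch14. A minimal sketch of fetching them with plain git-lfs is shown below; `ootd_weights` is a placeholder directory name, and the downloaded files still need to be arranged under `checkpoints/` the way the inference scripts expect.

```sh
# Sketch only: download the weights referenced in the removed README.
# Assumes git-lfs is installed; "ootd_weights" is a placeholder directory.
git lfs install
git clone https://huggingface.co/levihsu/OOTDiffusion ootd_weights
mkdir -p checkpoints
git clone https://huggingface.co/openai/clip-vit-large-patch14 checkpoints/clip-vit-large-patch14
```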
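
With the weights in place, the new front matter's `app_file: ./run/gradio_ootd.py` suggests the Space's demo can also be launched locally from the repository root, roughly as follows (assuming the conda environment from the removed README is active):

```sh
# Minimal sketch: run the Space's app_file locally.
# Assumes the environment and checkpoints described above are already set up.
cd OOTDiffusion
python ./run/gradio_ootd.py
```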