Roopansh committed
Commit dd45176
1 Parent(s): 681ea86

Readme Changes

Files changed (2)
  1. README-2.md +162 -0
  2. README.md +10 -160
README-2.md ADDED
@@ -0,0 +1,162 @@
+
+ <div align="center">
+ <h1>IDM-VTON: Improving Diffusion Models for Authentic Virtual Try-on in the Wild</h1>
+
+ <a href='https://idm-vton.github.io'><img src='https://img.shields.io/badge/Project-Page-green'></a>
+ <a href='https://arxiv.org/abs/2403.05139'><img src='https://img.shields.io/badge/Paper-Arxiv-red'></a>
+ <a href='https://huggingface.co/spaces/yisol/IDM-VTON'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Demo-blue'></a>
+ <a href='https://huggingface.co/yisol/IDM-VTON'><img src='https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Model-blue'></a>
+
+ </div>
+
+ This is the official implementation of the paper ["Improving Diffusion Models for Authentic Virtual Try-on in the Wild"](https://arxiv.org/abs/2403.05139).
+
+ Star ⭐ us if you like it!
+
+ ---
+
+ <!-- ![teaser2](assets/teaser2.png)&nbsp;
+ ![teaser](assets/teaser.png)&nbsp; -->
+
+ ## TODO LIST
+
+ - [x] demo model
+ - [x] inference code
+ - [ ] training code
+
+ ## Requirements
+
+ ```
+ git clone https://github.com/yisol/IDM-VTON.git
+ cd IDM-VTON
+
+ conda env create -f environment.yaml
+ conda activate idm
+ ```
+
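+ After activating the environment, a quick sanity check can confirm that it resolved correctly (a hypothetical snippet, not part of the repo; it assumes the environment ships PyTorch with CUDA support):
+
+ ```
+ # print the PyTorch version and whether a CUDA device is visible
+ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
+ ```
+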
+ ## Data preparation
+
+ ### VITON-HD
+ You can download the VITON-HD dataset from [VITON-HD](https://github.com/shadow2496/VITON-HD).
+
+ After downloading the VITON-HD dataset, move vitonhd_test_tagged.json into the test folder.
+
+ The structure of the dataset directory should be as follows.
+
+ ```
+ train
+ |-- ...
+
+ test
+ |-- image
+ |-- image-densepose
+ |-- agnostic-mask
+ |-- cloth
+ |-- vitonhd_test_tagged.json
+ ```
+
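+ Before running inference, it may be worth verifying that the layout matches the tree above (a hypothetical check; `DATA_DIR` stands in for wherever you unpacked the dataset):
+
+ ```
+ # report any expected test-split entry that is missing
+ for entry in image image-densepose agnostic-mask cloth vitonhd_test_tagged.json; do
+   [ -e "DATA_DIR/test/$entry" ] || echo "missing: $entry"
+ done
+ ```
+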
+ ### DressCode
+ You can download the DressCode dataset from [DressCode](https://github.com/aimagelab/dress-code).
+
+ We provide pre-computed densepose images and captions for garments [here](https://kaistackr-my.sharepoint.com/:u:/g/personal/cpis7_kaist_ac_kr/EaIPRG-aiRRIopz9i002FOwBDa-0-BHUKVZ7Ia5yAVVG3A?e=YxkAip).
+
+ We used [detectron2](https://github.com/facebookresearch/detectron2) to obtain the densepose images; refer [here](https://github.com/sangyun884/HR-VITON/issues/45) for more details.
+
+ After downloading the DressCode dataset, place the image-densepose directories and caption text files as follows.
+
+ ```
+ DressCode
+ |-- dresses
+     |-- images
+     |-- image-densepose
+     |-- dc_caption.txt
+     |-- ...
+ |-- lower_body
+     |-- images
+     |-- image-densepose
+     |-- dc_caption.txt
+     |-- ...
+ |-- upper_body
+     |-- images
+     |-- image-densepose
+     |-- dc_caption.txt
+     |-- ...
+ ```
+
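+ As with VITON-HD, a quick pass over the expected paths can catch misplaced files (a hypothetical check; the category and file names come from the tree above):
+
+ ```
+ # report any missing images/densepose/caption entries per category
+ for cat in dresses lower_body upper_body; do
+   for sub in images image-densepose dc_caption.txt; do
+     [ -e "DressCode/$cat/$sub" ] || echo "missing: DressCode/$cat/$sub"
+   done
+ done
+ ```
+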
+ ## Inference
+
+ ### VITON-HD
+
+ Run inference with the Python script and its arguments:
+
+ ```
+ accelerate launch inference.py \
+     --width 768 --height 1024 --num_inference_steps 30 \
+     --output_dir "result" \
+     --unpaired \
+     --data_dir "DATA_DIR" \
+     --seed 42 \
+     --test_batch_size 2 \
+     --guidance_scale 2.0
+ ```
+
+ Alternatively, you can simply run the script file:
+
+ ```
+ sh inference.sh
+ ```
+
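+ The flags map directly onto the run, so parameter sweeps are simple loops. For example, sweeping the guidance scale (a hypothetical sketch; apart from the per-run output folder name, it reuses only the arguments shown above):
+
+ ```
+ # write each guidance-scale setting to its own output folder
+ for gs in 1.5 2.0 2.5; do
+   accelerate launch inference.py \
+       --width 768 --height 1024 --num_inference_steps 30 \
+       --output_dir "result_gs_${gs}" \
+       --unpaired \
+       --data_dir "DATA_DIR" \
+       --seed 42 \
+       --test_batch_size 2 \
+       --guidance_scale "${gs}"
+ done
+ ```
+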
+ ### DressCode
+
+ For the DressCode dataset, specify the category to generate images for via the category argument:
+
+ ```
+ accelerate launch inference_dc.py \
+     --width 768 --height 1024 --num_inference_steps 30 \
+     --output_dir "result" \
+     --unpaired \
+     --data_dir "DATA_DIR" \
+     --seed 42 \
+     --test_batch_size 2 \
+     --guidance_scale 2.0 \
+     --category "upper_body"
+ ```
+
+ Alternatively, you can simply run the script file:
+
+ ```
+ sh inference.sh
+ ```
+
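+ To cover the whole dataset, the same command can be looped over the three categories (a hypothetical sketch; the per-category output folder is the only addition, and the category names come from the directory tree above):
+
+ ```
+ # generate results for every DressCode category in turn
+ for cat in upper_body lower_body dresses; do
+   accelerate launch inference_dc.py \
+       --width 768 --height 1024 --num_inference_steps 30 \
+       --output_dir "result_${cat}" \
+       --unpaired \
+       --data_dir "DATA_DIR" \
+       --seed 42 \
+       --test_batch_size 2 \
+       --guidance_scale 2.0 \
+       --category "${cat}"
+ done
+ ```
+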
+ ## Acknowledgements
+
+ For the [demo](https://huggingface.co/spaces/yisol/IDM-VTON), GPUs are provided by [ZeroGPU](https://huggingface.co/zero-gpu-explorers), and the mask generation code is based on [OOTDiffusion](https://github.com/levihsu/OOTDiffusion) and [DCI-VTON](https://github.com/bcmi/DCI-VTON-Virtual-Try-On).
+
+ Parts of our code are based on [IP-Adapter](https://github.com/tencent-ailab/IP-Adapter).
+
+ ## Citation
+ ```
+ @article{choi2024improving,
+   title={Improving Diffusion Models for Authentic Virtual Try-on in the Wild},
+   author={Choi, Yisol and Kwak, Sangkyung and Lee, Kyungmin and Choi, Hyungwon and Shin, Jinwoo},
+   journal={arXiv preprint arXiv:2403.05139},
+   year={2024}
+ }
+ ```
+
+ ## License
+ The code and checkpoints in this repository are released under the [CC BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).

README.md CHANGED
@@ -1,162 +1,12 @@
- [old lines 1-16 removed: the project header, badges, and intro, identical to the content added in README-2.md above]
  ---

- [old lines 19-162 removed: the rest of the original README, identical to the content added in README-2.md above]
 
+ ---
+ title: AILUSION VTON DEMO V1
+ emoji: Demo
+ colorFrom: yellow
+ colorTo: green
+ sdk: gradio
+ sdk_version: 4.28.2
+ app_file: app.py
+ pinned: false
  ---

+ AILUSION V1 DEMO Virtual Try-On
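+
+ With `sdk: gradio` and `app_file: app.py`, the Space can also be run locally (a hypothetical invocation; it assumes the dependencies of app.py, including the pinned Gradio version, are installed):
+
+ ```
+ pip install gradio==4.28.2   # match the sdk_version pinned above
+ python app.py                # launches the Gradio app declared in app_file
+ ```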