# ADDP

The official implementation of the [paper](https://arxiv.org/abs/2306.05423) "ADDP: Learning General Representations for Image Recognition and Generation with Alternating Denoising Diffusion Process" (ICLR 2024).

## Abstract

Image recognition and generation have long been developed independently of each other. With the recent trend towards general-purpose representation learning, the development of general representations for both recognition and generation tasks has also been promoted. However, preliminary attempts mainly focus on generation performance and remain inferior on recognition tasks. These methods are modeled in the vector-quantized (VQ) space, whereas leading recognition methods use pixels as inputs. Our key insights are twofold: (1) pixels as inputs are crucial for recognition tasks; (2) VQ tokens as reconstruction targets are beneficial for generation tasks. These observations motivate us to propose an Alternating Denoising Diffusion Process (ADDP) that integrates these two spaces within a single representation learning framework. In each denoising step, our method first decodes pixels from previous VQ tokens, then generates new VQ tokens from the decoded pixels. The diffusion process gradually masks out a portion of VQ tokens to construct the training samples. The learned representations can be used to generate diverse high-fidelity images and also demonstrate excellent transfer performance on recognition tasks. Extensive experiments show that our method achieves competitive performance on unconditional generation, ImageNet classification, COCO detection, and ADE20k segmentation. Importantly, our method represents the first successful development of general representations applicable to both generation and dense recognition tasks.

## Method

<p align="center"><img width="80%" alt="Training pipeline of ADDP" src="./figures/training_pipeline.png"></p>

<p align="center"><img width="50%" alt="Inference pipeline of ADDP" src="./figures/inference_pipeline.png"></p>
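
In symbols, one denoising step works as described in the abstract (the notation here is ours, introduced only for illustration; see the paper for the exact formulation): with a VQGAN decoder $\mathcal{D}$ and a learned token predictor $p_\theta$, the model first decodes pixels $x_t$ from the current VQ tokens $z_t$, then samples new tokens from the decoded pixels,

$$
x_t = \mathcal{D}(z_t), \qquad z_{t-1} \sim p_\theta(\,\cdot \mid x_t\,),
$$

while the forward diffusion process constructs training samples by masking out a growing fraction of the VQ tokens as $t$ increases.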

## Citation

If this work is helpful for your research, please consider citing the following BibTeX entry.

```
@article{tian2023addp,
  title={ADDP: Learning general representations for image recognition and generation with alternating denoising diffusion process},
  author={Tian, Changyao and Tao, Chenxin and Dai, Jifeng and Li, Hao and Li, Ziheng and Lu, Lewei and Wang, Xiaogang and Li, Hongsheng and Huang, Gao and Zhu, Xizhou},
  journal={arXiv preprint arXiv:2306.05423},
  year={2023}
}
```

## Setup

Step 1: download the [ImageNet](http://image-net.org/download) dataset and place it in your `IMAGENET_DIR`.

Step 2: clone the repository and install all required packages with pip.

```bash
git clone https://github.com/ChangyaoTian/ADDP.git
cd ADDP
pip install -r requirements.txt
```
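
If you want to keep the dependencies isolated, a fresh virtual environment works as well; the environment name and Python version below are our own choices, not requirements of the repository:

```bash
# Create and activate a dedicated conda environment (name/version are illustrative).
conda create -n addp python=3.8 -y
conda activate addp
pip install -r requirements.txt
```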

Step 3: download the pre-trained VQGAN tokenizer and token predictors and put them under the `./exp/pretrained_model` directory.

| VQGAN Tokenizer | Token Predictor (ViT-Base) | Token Predictor (ViT-Large) |
| :-: | :-: | :-: |
| <a href="https://drive.google.com/file/d/13S_unB87n6KKuuMdyMnyExW0G1kplTbP/view?usp=sharing">download</a> | <a href="https://drive.google.com/file/d/1Q6tbt3vF0bSrv5sPrjpFu8ksG3vTsVX2/view?usp=sharing">download</a> | <a href="https://drive.google.com/file/d/15xBPa8EIa0IRUiRYtXiYOC9JZVyMIFrB/view?usp=sharing">download</a> |
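
If you prefer the command line, the same Google Drive files can be fetched with [gdown](https://github.com/wkentaro/gdown); the file IDs below come from the links above, while the output filenames are placeholders (keep whatever names the config files expect):

```bash
pip install gdown
mkdir -p ./exp/pretrained_model
# File IDs are taken from the Google Drive links in the table above;
# the output filenames are illustrative.
gdown 13S_unB87n6KKuuMdyMnyExW0G1kplTbP -O ./exp/pretrained_model/vqgan_tokenizer.pth
gdown 1Q6tbt3vF0bSrv5sPrjpFu8ksG3vTsVX2 -O ./exp/pretrained_model/token_predictor_vit_base.pth
gdown 15xBPa8EIa0IRUiRYtXiYOC9JZVyMIFrB -O ./exp/pretrained_model/token_predictor_vit_large.pth
```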

## Usage

The following table provides the performance and weights of the pre-trained checkpoints (ViT-L/16 and ViT-B/16) used in the paper.

|                                    | ViT-Large | ViT-Base |
| ---------------------------------- | :-------: | :------: |
| Checkpoint                         |           |          |
| Class-unconditional generation FID | 7.6       | 8.9      |
| Class-unconditional generation IS  | 105.1     | 95.3     |
| Fine-tuning top-1 accuracy (%)     | 85.9      | 83.9     |
| COCO AP<sup>box</sup>              | 54.6      | 51.7     |
| ADE20k mIoU                        | 54.3      | 48.1     |
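
After downloading a checkpoint, a quick sanity check is to load it on CPU and peek at its top-level keys; the exact state-dict layout is an assumption here, so adjust the path and inspection to what the file actually contains:

```bash
# Hypothetical sanity check: confirm the file deserializes and list a few keys.
python -c "import torch; ckpt = torch.load('exp/release/addp-vit-large-16.pth', map_location='cpu'); print(type(ckpt), list(ckpt.keys())[:5] if isinstance(ckpt, dict) else '')"
```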

### Pre-training, Fine-tuning & Unconditional Generation

By default, all of the following scripts are launched in a Slurm distributed environment; feel free to adapt the launch settings to your own cluster. Please refer to our paper for the detailed configuration of each task.

For fine-tuning and generation, first download the corresponding pre-trained checkpoint (listed in the table above) and place it under the `./exp/release` directory.
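
For reference, a filled-in invocation might look like the following; all values are illustrative, and `QUOTATYPE`/`PARTITION` are cluster-specific Slurm settings:

```bash
# 32 GPUs in total, 8 per node, job name "addp_pretrain",
# quota type "reserved", partition "your_partition" (all illustrative values).
bash configs/release/large/pretrain_addp_large_800ep.sh 32 8 addp_pretrain reserved your_partition
```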

#### ViT-Large

```bash
## pretrain
bash configs/release/large/pretrain_addp_large_800ep.sh ${GPUS} ${GPUS_PER_NODE} ${JOB_NAME} ${QUOTATYPE} ${PARTITION}

## finetune
bash configs/release/large/finetune_addp_large_50ep.sh ${GPUS} ${GPUS_PER_NODE} ${JOB_NAME} ${QUOTATYPE} ${PARTITION} exp/release/addp-vit-large-16.pth

## generate

### cosine schedule (default)
bash configs/release/large/generate_addp_large_steps20.sh ${GPUS} ${GPUS_PER_NODE} ${JOB_NAME} ${QUOTATYPE} ${PARTITION} exp/release/addp-vit-large-16.pth

### linear schedule
bash configs/release/large/generate_addp_large_steps256_linear.sh ${GPUS} ${GPUS_PER_NODE} ${JOB_NAME} ${QUOTATYPE} ${PARTITION} exp/release/addp-vit-large-16.pth
```

#### ViT-Base

```bash
## pretrain
bash configs/release/base/pretrain_addp_base_1600ep.sh ${GPUS} ${GPUS_PER_NODE} ${JOB_NAME} ${QUOTATYPE} ${PARTITION}

## finetune
bash configs/release/base/finetune_addp_base_100ep.sh ${GPUS} ${GPUS_PER_NODE} ${JOB_NAME} ${QUOTATYPE} ${PARTITION} exp/release/addp-vit-base-16.pth

## generate
bash configs/release/base/generate_addp_base_steps20.sh ${GPUS} ${GPUS_PER_NODE} ${JOB_NAME} ${QUOTATYPE} ${PARTITION} exp/release/addp-vit-base-16.pth
```

#### FID/IS Evaluation

We mainly follow [MAGE](https://github.com/LTH14/mage) for FID/IS evaluation. First, generate 256x256 ImageNet validation images with

```bash
python ./util/prepare_imagenet_val.py --data_path ${IMAGENET_DIR} --output_dir ${IMAGENET256X256_DIR}
```

Then install the <a href="https://github.com/toshas/torch-fidelity">torch-fidelity</a> package:

```bash
pip install torch-fidelity
```

Finally, evaluate the FID (`--fid`) and IS (`--isc`) of the images generated by our models against the 256x256 ImageNet validation images:

```bash
fidelity --isc --fid --input1 ${GENERATED_IMAGES_DIR} --input2 ${IMAGENET256X256_DIR}
```

## License

This repository is released under the Apache 2.0 license, as found in the [LICENSE](LICENSE.md) file.

## Contact

If you have any questions, feel free to contact me directly via email (tcyhost@link.cuhk.edu.hk).