zhendongw committed
Commit fa60240
1 Parent(s): 83fc1e6

Update README.md

Files changed (1)
  1. README.md +3 -51
README.md CHANGED
@@ -1,5 +1,5 @@
  ## Prompt-Diffusion: In-Context Learning Unlocked for Diffusion Models
- ### [Project Page](https://zhendong-wang.github.io/prompt-diffusion.github.io/) | [Paper](https://arxiv.org/abs/2305.01115)
+ [Project Page](https://zhendong-wang.github.io/prompt-diffusion.github.io/) | [Paper](https://arxiv.org/abs/2305.01115) | [GitHub](https://github.com/Zhendong-Wang/Prompt-Diffusion)
  ![Illustration](./assets/teaser_img.png)
 
  **In-Context Learning Unlocked for Diffusion Models**<br>
@@ -18,56 +18,8 @@ Our model also shows compelling text-guided image editing results. Our framework
 
  ![Illustration](./assets/illustration.png)
 
- ## ToDos
- - [x] Release pretrained models
- - [x] Release play-around code
-
-
- ## Results
- ### Multi-Task Learning
-
- ![Illustration](./assets/multi_task_results.png)
-
- ### Generalization to New Tasks
-
- ![Illustration](./assets/generalization_results.png)
-
- ### Image Editing Ability
-
- ![Illustration](./assets/edit_results.png)
-
- ## Train Prompt Diffusion
-
- ### Prepare Dataset
-
- We use the public dataset proposed by [InstructPix2Pix](https://github.com/timothybrooks/instruct-pix2pix) as our base dataset,
- which consists of around 310k image-caption pairs. We then apply the [ControlNet](https://github.com/lllyasviel/ControlNet) annotators
- to collect image conditions such as HED, depth, and segmentation maps. The code for collecting image conditions is provided in `annotate_data.py` (illustrated below).
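For a concrete picture of this step, here is a minimal sketch of producing one such condition (a HED edge map). It uses the third-party `controlnet_aux` package and a hypothetical directory layout, not the repo's actual `annotate_data.py`:

```python
# Minimal sketch of collecting one image condition (HED edge maps).
# Uses the third-party `controlnet_aux` package and a hypothetical
# directory layout; the repo's annotate_data.py is the real tool.
from pathlib import Path

from PIL import Image
from controlnet_aux import HEDdetector

hed = HEDdetector.from_pretrained("lllyasviel/Annotators")  # downloads HED weights

src_dir, out_dir = Path("data/images"), Path("data/conditions/hed")
out_dir.mkdir(parents=True, exist_ok=True)

for img_path in sorted(src_dir.glob("*.jpg")):
    image = Image.open(img_path).convert("RGB")
    hed_map = hed(image)  # returns a PIL image of the edge map
    hed_map.save(out_dir / img_path.name)
```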
-
- ### Training
-
- Training a Prompt Diffusion model is as easy as running the following commands:
-
- ```bash
- python tool_add_control.py 'path to your stable diffusion checkpoint, e.g., /.../v1-5-pruned-emaonly.ckpt' ./models/control_sd15_ini.ckpt
-
- python train.py --name 'experiment name' --gpus=8 --num_nodes=1 \
- --logdir 'your logdir path' \
- --data_config './models/dataset.yaml' --base './models/cldm_v15.yaml' \
- --sd_locked
- ```
-
- We also provide the job script in `scripts/train_v1-5.sh` for an easy run.
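For intuition, the first command's job is to graft the pretrained Stable Diffusion weights onto the control-augmented architecture, so that only the genuinely new layers start from fresh initialization. Below is a rough sketch of that idea; it is a conceptual stand-in, not the repo's actual `tool_add_control.py`, and the key-name prefixes are assumptions based on ControlNet's layout:

```python
# Conceptual sketch of building the initialization checkpoint; NOT the
# repo's actual tool_add_control.py. Key-name prefixes ("control_model.",
# "model.diffusion_model.") are assumptions based on ControlNet's layout.
import torch
from torch import nn

def make_init_checkpoint(model: nn.Module, sd_ckpt: str, out_ckpt: str) -> None:
    sd = torch.load(sd_ckpt, map_location="cpu")["state_dict"]
    target = {}
    for name, param in model.state_dict().items():
        # The control branch mirrors the SD encoder, so seed it from the
        # matching pretrained tensor when one exists with the same shape.
        mirrored = name.replace("control_model.", "model.diffusion_model.")
        if name in sd:
            target[name] = sd[name].clone()
        elif mirrored in sd and sd[mirrored].shape == param.shape:
            target[name] = sd[mirrored].clone()
        else:
            target[name] = param.clone()  # new layers keep their fresh init
    torch.save({"state_dict": target}, out_ckpt)
```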
-
- ## Run Prompt Diffusion from our checkpoints
-
- We will release the code for playing with Prompt Diffusion, along with the model checkpoints, soon.
-
- ## More Examples
-
- ![Illustration](./assets/more_example_depth.png)
- ![Illustration](./assets/more_example_hed.png)
- ![Illustration](./assets/more_example_seg.png)
+ ## Note
+ We have made our pretrained model checkpoints available here. For more information on how to use them, please visit our GitHub page at https://github.com/Zhendong-Wang/Prompt-Diffusion.
 
 
  ## Citation
 