---
library_name: pytorch
tags:
- diffusion
- image-to-image
---

# DiffusionCLIP: Text-Guided Diffusion Models for Robust Image Manipulation - Bedrooms

Creators: Gwanghyun Kim, Taesung Kwon, Jong Chul Ye

<img src="https://github.com/submission10095/DiffusionCLIP_temp/raw/master/imgs/main1.png" alt="Excerpt from DiffusionCLIP paper showcasing comparison of DiffusionCLIP versus other methods for image reconstruction, manipulation, and style transfer." style="height: 300px;"/>

DiffusionCLIP is a diffusion model well suited to image manipulation thanks to its near-perfect inversion capability, an important advantage over GAN-based models. This checkpoint was trained on the [CelebA-HQ Dataset](https://arxiv.org/abs/1710.10196), available on the Hugging Face Hub: https://huggingface.co/datasets/huggan/CelebA-HQ.

This checkpoint is best suited for manipulation, reconstruction, and style transfer on images of indoor scenes, such as bedrooms. The weights should be loaded into the [DiffusionCLIP model](https://github.com/gwang-kim/DiffusionCLIP).
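As a minimal sketch, the checkpoint file can be loaded with plain PyTorch before handing the state dict to the model class from the DiffusionCLIP repository. The filename `bedroom.pt` and the `"model"` key are placeholders, not confirmed by this card; check the files in this repository and the repository's own loading code for the actual names. For illustration the snippet first writes a dummy checkpoint so it runs end to end:

```python
import os
import tempfile

import torch

# Placeholder checkpoint name; substitute the file shipped with this repo.
ckpt_path = os.path.join(tempfile.mkdtemp(), "bedroom.pt")

# Write a dummy state dict so this sketch is runnable without the real file.
torch.save({"model": {"layer.weight": torch.zeros(4, 4)}}, ckpt_path)

# map_location="cpu" lets the weights load on machines without a GPU.
ckpt = torch.load(ckpt_path, map_location="cpu")

# The "model" key is an assumption about the checkpoint layout; inspect
# ckpt.keys() on the real file to find the correct one.
state_dict = ckpt["model"]
print(sorted(state_dict.keys()))
```

The resulting `state_dict` would then be passed to `load_state_dict` on the diffusion model defined in the DiffusionCLIP code base.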

### Credits

- Code repository available at: https://github.com/gwang-kim/DiffusionCLIP

### Citation

```bibtex
@article{kim2021diffusionclip,
  title={Diffusionclip: Text-guided image manipulation using diffusion models},
  author={Kim, Gwanghyun and Ye, Jong Chul},
  journal={arXiv preprint arXiv:2110.02711},
  year={2021}
}
```