gwang-kim committed
Commit aea6c8d
Parent: 524e00f

Update README.md

Files changed (1)
1. README.md +9 -4
README.md CHANGED
@@ -6,10 +6,13 @@ license: apache-2.0
 CVPR 2023 <br>
 [gwang-kim.github.io/datid_3d](gwang-kim.github.io/datid_3d/)
 
-**Abstract**: <br>
-Recent 3D generative models have achieved remarkable performance in synthesizing high resolution photorealistic images with view consistency and detailed 3D shapes, but training them for diverse domains is challenging since it requires massive training images and their camera distribution information.
-Text-guided domain adaptation methods have shown impressive performance on converting the 2D generative model on one domain into the models on other domains with different styles by leveraging the CLIP (Contrastive Language-Image Pre-training), rather than collecting massive datasets for those domains. However, one drawback of them is that the sample diversity in the original generative model is not well-preserved in the domain-adapted generative models due to the deterministic nature of the CLIP text encoder. Text-guided domain adaptation will be even more challenging for 3D generative models not only because of catastrophic diversity loss, but also because of inferior text-image correspondence and poor image quality.
-**Here we propose DATID-3D, a novel pipeline of text-guided domain adaptation tailored for 3D generative models using text-to-image diffusion models that can synthesize diverse images per text prompt without collecting additional images and camera information for the target domain.** Unlike 3D extensions of prior text-guided domain adaptation methods, our novel pipeline was able to fine-tune the state-of-the-art 3D generator of the source domain to synthesize high resolution, multi-view consistent images in text-guided targeted domains without additional data, outperforming the existing text-guided domain adaptation methods in diversity and text-image correspondence. Furthermore, we propose and demonstrate diverse 3D image manipulations such as one-shot instance-selected adaptation and single-view manipulated 3D reconstruction to fully enjoy diversity in text.
+
+We propose DATID-3D, a novel pipeline of text-guided domain adaptation tailored for 3D generative models, using text-to-image diffusion models that can synthesize diverse images per text prompt without collecting additional images and camera information for the target domain. Unlike 3D extensions of prior text-guided domain adaptation methods, our pipeline fine-tunes the state-of-the-art 3D generator of the source domain to synthesize high-resolution, multi-view-consistent images in text-guided target domains without additional data, outperforming existing text-guided domain adaptation methods in diversity and text-image correspondence. Furthermore, we propose and demonstrate diverse 3D image manipulations, such as one-shot instance-selected adaptation and single-view manipulated 3D reconstruction, to fully exploit the diversity expressed in text.
+
+## Fine-tuned 3D generative models
+
+3D generative models fine-tuned with the DATID-3D pipeline are stored as `*.pkl` files.
+You can download the models from [our Hugging Face model page](https://huggingface.co/gwang-kim/datid3d-finetuned-eg3d-models/tree/main/finetuned_models).
 
 
 ## Citation
@@ -22,3 +25,5 @@ Text-guided domain adaptation methods have shown impressive performance on conve
 year = {2023}
 }
 ```
+
+========================================================
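For convenience, here is a minimal loading sketch for the checkpoints referenced in this commit. It is not part of the commit itself, and it rests on a few assumptions: that the pickles follow the EG3D/StyleGAN convention of storing the exponential-moving-average generator under the `G_ema` key, that the EG3D codebase (its `dnnlib` and `torch_utils` packages) is importable so the pickle can be deserialized, and that `<model_name>.pkl` is a placeholder you replace with an actual file listed under `finetuned_models/`:

```python
# Minimal sketch (assumptions noted above): fetch one fine-tuned generator
# pickle from the Hub and load it with PyTorch.
import pickle

import torch
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

# Download a checkpoint; "<model_name>.pkl" is a placeholder -- substitute one
# of the files actually listed under finetuned_models/ in the repo.
pkl_path = hf_hub_download(
    repo_id="gwang-kim/datid3d-finetuned-eg3d-models",
    filename="finetuned_models/<model_name>.pkl",
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# EG3D/StyleGAN pickles conventionally hold the EMA generator under "G_ema";
# deserializing requires the EG3D source tree (dnnlib, torch_utils) on sys.path.
with open(pkl_path, "rb") as f:
    G = pickle.load(f)["G_ema"].to(device)
G.eval()
```

Sampling from the loaded generator additionally requires camera conditioning in EG3D's format; see the EG3D repository for the full synthesis loop.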