flamehaze1115 committed on
Commit
4180a79
•
1 Parent(s): ed38ed4

Update README.md

Files changed (1)
  1. README.md +13 -69
README.md CHANGED
@@ -1,69 +1,13 @@
- # Wonder3D
- Single Image to 3D using Cross-Domain Diffusion
- ## [Paper](https://arxiv.org/abs/2310.15008) | [Project page](https://www.xxlong.site/Wonder3D/)
-
- ![](assets/fig_teaser.png)
-
- Wonder3D reconstructs highly detailed textured meshes from a single-view image in only 2–3 minutes. Wonder3D first generates consistent multi-view normal maps with corresponding color images via a cross-domain diffusion model, and then leverages a novel normal fusion method to achieve fast and high-quality reconstruction.
-
- ## Schedule
- - [x] Inference code and pretrained models.
- - [ ] Huggingface demo.
- - [ ] Training code.
- - [ ] Rendering code for data preparation.
-
-
- ### Preparation for inference
- 1. Install the packages in `requirements.txt`:
- ```bash
- conda create -n wonder3d
- conda activate wonder3d
- pip install -r requirements.txt
- ```
- 2. Download the [checkpoints](https://connecthkuhk-my.sharepoint.com/:f:/g/personal/xxlong_connect_hku_hk/EgSHPyJAtaJFpV_BjXM3zXwB-UMIrT4v-sQwGgw-coPtIA) into the root folder.
-
- ### Inference
- 1. Make sure you have the following models:
- ```bash
- Wonder3D
- |-- ckpts
-     |-- unet
-     |-- scheduler.bin
-     ...
- ```
- 2. Predict the foreground mask as the alpha channel. We use [Clipdrop](https://clipdrop.co/remove-background) to segment the foreground object interactively.
- You may also use `rembg` to remove the backgrounds:
- ```python
- # pip install rembg
- import rembg
- from PIL import Image
-
- image = Image.open("input.png")  # path to your input image
- result = rembg.remove(image)
- result.show()
- ```
- 3. Run Wonder3D to produce multiview-consistent normal maps and color images, then check the results in the folder `./outputs`. (We use `rembg` to remove the backgrounds of the results, but the segmentations are not always perfect.)
- ```bash
- accelerate launch --config_file 1gpu.yaml test_mvdiffusion_seq.py \
-     --config mvdiffusion-joint-ortho-6views.yaml
- ```
- or
- ```bash
- bash run_test.sh
- ```
-
- 4. Mesh extraction:
- ```bash
- cd ./instant-nsr-pl
- bash run.sh output_folder_path scene_name
- ```
-
- ## Citation
- If you find this repository useful in your project, please cite the following work. :)
- ```
- @misc{long2023wonder3d,
-     title={Wonder3D: Single Image to 3D using Cross-Domain Diffusion},
-     author={Xiaoxiao Long and Yuan-Chen Guo and Cheng Lin and Yuan Liu and Zhiyang Dou and Lingjie Liu and Yuexin Ma and Song-Hai Zhang and Marc Habermann and Christian Theobalt and Wenping Wang},
-     year={2023},
-     eprint={2310.15008},
-     archivePrefix={arXiv},
-     primaryClass={cs.CV}
- }
- ```
 
+ ---
+ title: Wonder3D
+ emoji: 🚀
+ colorFrom: indigo
+ colorTo: pink
+ sdk: gradio
+ sdk_version: 3.43.2
+ app_file: app.py
+ pinned: false
+ license: cc-by-sa-3.0
+ ---
+
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
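The block this commit adds is plain YAML front matter at the top of `README.md`, which Hugging Face Spaces reads to configure the app. As a rough sketch of how such a flat `key: value` header can be extracted with only the standard library (the `README` string below is a trimmed stand-in for the real file; a deployed Space relies on the Hub's own parser, not code like this):

```python
# Minimal front-matter extraction sketch. Assumes the simple flat
# `key: value` layout shown in the commit above; nested YAML would
# need a real YAML parser.
README = """\
---
title: Wonder3D
sdk: gradio
sdk_version: 3.43.2
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
"""

def parse_front_matter(text: str) -> dict:
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}  # no front-matter block at the top of the file
    config = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # closing delimiter ends the front matter
        key, _, value = line.partition(":")
        config[key.strip()] = value.strip()
    return config

config = parse_front_matter(README)
print(config["sdk"], config["app_file"])  # → gradio app.py
```

This is why the commit can delete the whole project README: for a Space, only the header keys (`sdk`, `sdk_version`, `app_file`, `license`, etc.) drive the deployment.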