Spaces: Work
Commit 1d0838f · 1 Parent(s): ad3d087

update readme for compatability
README.md CHANGED
@@ -1,111 +1,9 @@
-
-
-
-
-
-
-
-
-
-> [!NOTE]
-> Thanks to [@yxymessi](https://github.com/yxymessi) and [@florinshen](https://github.com/florinshen), we have found that there is a **severe bug in rotation normalization** [here](https://github.com/3DTopia/LGM/blob/main/core/models.py#L43).
-> However, the current model is trained with this wrong code and correcting it leads to some quality degradation. We will work on retraining a correct model in the future.
->
-> **If you are going to retrain a model, please correct it to `self.rot_act = lambda x: F.normalize(x, dim=-1)`**!
-
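To make the note above concrete: `F.normalize` defaults to `dim=1`, so on a rotation tensor shaped `[B, N, 4]` it normalizes across the N Gaussians rather than across each quaternion's four components. A minimal sketch follows (an editor's illustration; the `[B, N, 4]` shape is an assumption about the per-Gaussian quaternion head, not taken from the repo):

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 1024, 4)  # assumed [batch, num_gaussians, quaternion]

# Buggy form: dim defaults to 1, so this normalizes across the 1024
# Gaussians instead of across each quaternion's 4 components.
wrong = F.normalize(x)
print(wrong.norm(dim=-1).mean())  # far from 1.0: quaternions are not unit-length

# Corrected activation from the note: unit quaternions along the last axis.
rot_act = lambda x: F.normalize(x, dim=-1)
rot = rot_act(x)
assert torch.allclose(rot.norm(dim=-1), torch.ones(2, 1024), atol=1e-5)
```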
-### Replicate Demo:
-* gaussians: [demo](https://replicate.com/camenduru/lgm) | [code](https://github.com/camenduru/LGM-replicate)
-* mesh: [demo](https://replicate.com/camenduru/lgm-ply-to-glb) | [code](https://github.com/camenduru/LGM-ply-to-glb-replicate)
-
-Thanks to [@camenduru](https://github.com/camenduru)!
-
-### Install
-
-```bash
-# xformers is required! please refer to https://github.com/facebookresearch/xformers for details.
-# for example, we use torch 2.1.0 + cuda 11.8
-pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
-pip install -U xformers --index-url https://download.pytorch.org/whl/cu118
-
-# a modified gaussian splatting (+ depth, alpha rendering)
-git clone --recursive https://github.com/ashawkey/diff-gaussian-rasterization
-pip install ./diff-gaussian-rasterization
-
-# for mesh extraction
-pip install git+https://github.com/NVlabs/nvdiffrast
-
-# other dependencies
-pip install -r requirements.txt
-```
-
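Not part of the original install steps, but a quick post-install sanity check can catch torch/CUDA/xformers mismatches before running anything heavy; a minimal sketch assuming the cu118 wheels above:

```python
# Editor's sketch: verify the freshly installed stack before inference.
import torch

print(torch.__version__, torch.version.cuda)   # expect 2.1.0 and 11.8 here
assert torch.cuda.is_available(), "CUDA device not visible to torch"

import xformers  # fails loudly if the wheel does not match torch/CUDA
print(xformers.__version__)
```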
-### Pretrained Weights
-
-Our pretrained weights can be downloaded from [huggingface](https://huggingface.co/ashawkey/LGM).
-
-For example, to download the fp16 model for inference:
-```bash
-mkdir pretrained && cd pretrained
-wget https://huggingface.co/ashawkey/LGM/resolve/main/model_fp16.safetensors
-cd ..
-```
-
-For [MVDream](https://github.com/bytedance/MVDream) and [ImageDream](https://github.com/bytedance/ImageDream), we use a [diffusers implementation](https://github.com/ashawkey/mvdream_diffusers).
-Their weights will be downloaded automatically.
-
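Once downloaded, the checkpoint can be opened with the standard `safetensors` API to confirm it is intact; a small sketch (an editor's inspection snippet, not a step from the README):

```python
# Editor's sketch: peek inside the downloaded fp16 checkpoint.
from safetensors.torch import load_file

state_dict = load_file("pretrained/model_fp16.safetensors")  # CPU by default
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape), tensor.dtype)  # expect torch.float16
```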
-### Inference
-
-Inference takes about 10GB of GPU memory (it loads ImageDream, MVDream, and our LGM).
-
-```bash
-### gradio app for both text/image to 3D
-python app.py big --resume pretrained/model_fp16.safetensors
-
-### test
-# --workspace: folder to save output (*.ply and *.mp4)
-# --test_path: path to a folder containing images, or a single image
-python infer.py big --resume pretrained/model_fp16.safetensors --workspace workspace_test --test_path data_test
-
-### local gui to visualize saved ply
-python gui.py big --output_size 800 --test_path workspace_test/saved.ply
-
-### mesh conversion
-python convert.py big --test_path workspace_test/saved.ply
-```
-
-For more options, please check [options](./core/options.py).
-
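The `infer.py` command above handles one `--test_path` at a time; a small driver (an editor's sketch, with hypothetical folder names) can batch several input folders through the same documented CLI:

```python
# Editor's sketch: run the documented infer.py CLI over several folders.
import subprocess

for folder in ["data_test", "data_more"]:  # hypothetical input folders
    subprocess.run(
        [
            "python", "infer.py", "big",
            "--resume", "pretrained/model_fp16.safetensors",
            "--workspace", f"workspace_{folder}",
            "--test_path", folder,
        ],
        check=True,  # stop on the first failing run
    )
```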
-### Training
-
-**NOTE**:
-Since our training dataset is hosted on AWS, it cannot be used directly for training in a new environment.
-We provide the necessary training framework; please check and modify the [dataset](./core/provider_objaverse.py) implementation!
-
-We also provide the **~80K subset of [Objaverse](https://objaverse.allenai.org/objaverse-1.0)** used to train LGM in [objaverse_filter](https://github.com/ashawkey/objaverse_filter).
-
-```bash
-# debug training
-accelerate launch --config_file acc_configs/gpu1.yaml main.py big --workspace workspace_debug
-
-# training (use slurm for multi-node training)
-accelerate launch --config_file acc_configs/gpu8.yaml main.py big --workspace workspace
-```
-
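To illustrate what "modify the dataset implementation" might involve, here is a heavily hedged skeleton of a local-disk replacement provider; the class name, field names, and shapes are all illustrative assumptions, so check `core/provider_objaverse.py` for the keys the trainer actually expects:

```python
# Editor's sketch only: a local-disk stand-in for the AWS-backed provider.
# Field names and shapes are assumptions, not the real interface.
import torch
from torch.utils.data import Dataset

class LocalObjaverseDataset(Dataset):  # hypothetical replacement class
    def __init__(self, object_ids, num_views=4, size=256):
        self.object_ids = object_ids
        self.num_views = num_views
        self.size = size

    def __len__(self):
        return len(self.object_ids)

    def __getitem__(self, idx):
        # Replace these placeholders with real multi-view renderings and
        # camera poses loaded from local storage.
        images = torch.zeros(self.num_views, 3, self.size, self.size)
        cam_poses = torch.eye(4).expand(self.num_views, 4, 4).clone()
        return {"images": images, "cam_poses": cam_poses}
```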
-### Acknowledgement
-
-This work is built on many amazing research works and open-source projects, thanks a lot to all the authors for sharing!
-
-- [gaussian-splatting](https://github.com/graphdeco-inria/gaussian-splatting) and [diff-gaussian-rasterization](https://github.com/graphdeco-inria/diff-gaussian-rasterization)
-- [nvdiffrast](https://github.com/NVlabs/nvdiffrast)
-- [dearpygui](https://github.com/hoffstadt/DearPyGui)
-- [tyro](https://github.com/brentyi/tyro)
-
-### Citation
-
-```
-@article{tang2024lgm,
-  title={LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation},
-  author={Tang, Jiaxiang and Chen, Zhaoxi and Chen, Xiaokang and Wang, Tengfei and Zeng, Gang and Liu, Ziwei},
-  journal={arXiv preprint arXiv:2402.05054},
-  year={2024}
-}
-```
+title: LGM
+emoji: 🦀
+colorFrom: red
+colorTo: indigo
+sdk: docker
+sdk_version: 4.20.1
+python_version: 3.10.13
+app_file: app.py
+pinned: false