An official implementation of PRM, a feed-forward framework for high-quality 3D mesh generation with photometric stereo images.
# 🚩 Features
- Release inference and training code.
- Release model weights.
- Release Hugging Face Gradio demo. Please try it at the demo link.
- Release ComfyUI demo.
# ⚙️ Dependencies and Installation
We recommend using `Python>=3.10`, `PyTorch>=2.1.0`, and `CUDA>=12.1`.
```bash
conda create --name PRM python=3.10
conda activate PRM
pip install -U pip

# Ensure Ninja is installed
conda install Ninja

# Install the correct version of CUDA
conda install cuda -c nvidia/label/cuda-12.1.0

# Install PyTorch and xformers
# You may need to install another xformers version if you use a different PyTorch version
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121
pip install xformers==0.0.22.post7

# Install Triton
pip install triton

# Install other requirements
pip install -r requirements.txt
```
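As a quick sanity check (not part of the original instructions), you can confirm that the installed PyTorch build targets CUDA 12.1 and can see a GPU:

```bash
# Print the PyTorch version, the CUDA version it was built with, and GPU visibility
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```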
# 💫 Inference
## Download the pretrained model
The pretrained model can be found on the model card. Our inference script will download the models automatically. Alternatively, you can manually download the models and put them under the `ckpts/` directory.
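If you prefer a manual download, a sketch with the Hugging Face CLI looks like this; the repo id below is a placeholder, so substitute the one listed on the model card:

```bash
# Hypothetical manual download; replace <org>/<repo> with the repo id from the model card
pip install -U "huggingface_hub[cli]"
huggingface-cli download <org>/<repo> --local-dir ckpts/
```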
# 💻 Training
We provide our training code to facilitate future research. For training data, we used a filtered subset of Objaverse. Before training, you need to pre-process the environment maps and GLB files into the formats expected by our dataloader. For preprocessing GLB files, first convert them to OBJ (the script name below is an assumption, mirroring the `obj2mesh.py` call convention):
```bash
# GLB files to OBJ files (glb2obj.py is an assumed script name; substitute the repository's converter)
python glb2obj.py path_to_glb save_path
```
Then run:
```bash
# OBJ files to mesh files that can be read by our dataloader
python obj2mesh.py path_to_obj save_path
```
For preprocessing environment maps, please run:

```bash
# Pre-process environment maps
python light2map.py path_to_env save_path
```
To train the sparse-view reconstruction models, please run:
```bash
# Training on Mesh representation
python train.py --base configs/PRM.yaml --gpus 0,1,2,3,4,5,6,7 --num_nodes 1
```
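If you have fewer devices, the Lightning-style `--gpus` list can be shortened; for example, a hypothetical single-GPU debug run (assuming the entry point accepts a one-element device list) would be:

```bash
# Assumed single-GPU variant of the command above; note the trailing comma in --gpus
python train.py --base configs/PRM.yaml --gpus 0, --num_nodes 1
```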
Note that you need to set `root_dir` and `light_dir` to the paths where you saved the preprocessed GLB files and environment maps.
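For example, assuming `root_dir` and `light_dir` appear as plain keys in `configs/PRM.yaml`, you could point them at your data with placeholder paths like these:

```bash
# Rewrite the dataset paths in the training config (YAML layout assumed; paths are placeholders)
sed -i 's|root_dir:.*|root_dir: /data/preprocessed_meshes|' configs/PRM.yaml
sed -i 's|light_dir:.*|light_dir: /data/env_maps|' configs/PRM.yaml
```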
# :books: Citation
If you find our work useful for your research or applications, please cite using this BibTeX:
```bibtex
@article{xu2024instantmesh,
  title={InstantMesh: Efficient 3D Mesh Generation from a Single Image with Sparse-view Large Reconstruction Models},
  author={Xu, Jiale and Cheng, Weihao and Gao, Yiming and Wang, Xintao and Gao, Shenghua and Shan, Ying},
  journal={arXiv preprint arXiv:2404.07191},
  year={2024}
}
```
# 🤗 Acknowledgements
We thank the authors of the following projects for their excellent contributions to 3D generative AI!