---
pipeline_tag: image-to-3d
---

<p align="center">
  <h3 align="center"><strong>MeshAnything:<br> Artist-Created Mesh Generation<br> with Autoregressive Transformers</strong></h3>
</p>

<p align="center">
  <a href="https://buaacyw.github.io/">Yiwen Chen</a><sup>1,2*</sup>,
  <a href="https://tonghe90.github.io/">Tong He</a><sup>2†</sup>,
  <a href="https://dihuang.me/">Di Huang</a><sup>2</sup>,
  <a href="https://ywcmaike.github.io/">Weicai Ye</a><sup>2</sup>,
  <a href="https://ch3cook-fdu.github.io/">Sijin Chen</a><sup>3</sup>,
  <a href="https://me.kiui.moe/">Jiaxiang Tang</a><sup>4</sup><br>
  <a href="https://chenxin.tech/">Xin Chen</a><sup>5</sup>,
  <a href="https://caizhongang.github.io/">Zhongang Cai</a><sup>6</sup>,
  <a href="https://scholar.google.com.hk/citations?user=jZH2IPYAAAAJ&hl=en">Lei Yang</a><sup>6</sup>,
  <a href="https://www.skicyyu.org/">Gang Yu</a><sup>7</sup>,
  <a href="https://guosheng.github.io/">Guosheng Lin</a><sup>1†</sup>,
  <a href="https://icoz69.github.io/">Chi Zhang</a><sup>8†</sup>
  <br>
  <sup>*</sup>Work done during a research internship at Shanghai AI Lab.
  <br>
  <sup>†</sup>Corresponding authors.
  <br>
  <sup>1</sup>S-Lab, Nanyang Technological University,
  <sup>2</sup>Shanghai AI Lab,
  <br>
  <sup>3</sup>Fudan University,
  <sup>4</sup>Peking University,
  <sup>5</sup>University of Chinese Academy of Sciences,
  <br>
  <sup>6</sup>SenseTime Research,
  <sup>7</sup>Stepfun,
  <sup>8</sup>Westlake University
</p>

## Release
- [6/17] 🔥🔥 We released the 350M version of **MeshAnything**.

## Contents
- [Release](#release)
- [Contents](#contents)
- [Installation](#installation)
- [Usage](#usage)
- [Important Notes](#important-notes)
- [TODO](#todo)
- [Acknowledgement](#acknowledgement)
- [BibTeX](#bibtex)

## Installation
Our environment has been tested on Ubuntu 22, CUDA 11.8 with A100, A800, and A6000 GPUs.

1. Clone our repo and create the conda environment:
```
git clone https://github.com/buaacyw/MeshAnything.git && cd MeshAnything
conda create -n MeshAnything python==3.10.13
conda activate MeshAnything
pip install torch==2.1.1 torchvision==0.16.1 torchaudio==2.1.1 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
pip install flash-attn --no-build-isolation
```
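
Before running inference, it is worth confirming that the CUDA build of PyTorch was actually installed. A quick check (our own snippet, not part of the repo):
```
# Sanity check: confirm the CUDA build of PyTorch is active.
import torch

print(torch.__version__)           # expect 2.1.1+cu118
print(torch.cuda.is_available())   # expect True on a CUDA machine
```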

## Usage
### Local Gradio Demo <a href='https://github.com/gradio-app/gradio'><img src='https://img.shields.io/github/stars/gradio-app/gradio'></a>
```
python app.py
```

### Mesh Command Line Inference
```
# folder input
python main.py --input_dir examples --out_dir mesh_output --input_type mesh

# single file input
python main.py --input_path examples/wand.ply --out_dir mesh_output --input_type mesh

# preprocess with Marching Cubes first
python main.py --input_dir examples --out_dir mesh_output --input_type mesh --mc
```
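
The `--mc` flag remeshes the input via Marching Cubes before generation, which helps when the source mesh is messy or non-watertight. Conceptually, the preprocessing looks something like the sketch below. This is a standalone illustration using trimesh and scikit-image, not the repo's actual code path, and the filenames are placeholders:
```
# Hypothetical sketch of Marching Cubes preprocessing:
# voxelize the mesh, then re-extract a clean surface.
import trimesh
from skimage import measure

mesh = trimesh.load("examples/wand.ply", force="mesh")
voxels = mesh.voxelized(pitch=mesh.scale / 128.0).fill()  # filled occupancy grid
volume = voxels.matrix.astype(float)

# Extract the 0.5 iso-surface from the occupancy volume.
# (Vertices are in voxel units; translation back to the original frame is omitted.)
verts, faces, normals, _ = measure.marching_cubes(volume, level=0.5)
clean = trimesh.Trimesh(vertices=verts * voxels.pitch, faces=faces)
clean.export("wand_mc.ply")
```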

### Point Cloud Command Line Inference
```
# Note: to use your own point cloud, make sure normals are included.
# The file format should be a .npy file with shape (N, 6), where N is the number of points.
# The first 3 columns are the coordinates and the last 3 columns are the normals.

# inference for folder
python main.py --input_dir pc_examples --out_dir pc_output --input_type pc_normal

# inference for single file
python main.py --input_path pc_examples/mouse.npy --out_dir pc_output --input_type pc_normal
```
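
If you need to produce such an (N, 6) file from an existing mesh, a minimal sketch with trimesh and numpy (our own illustration; the filenames and sample count are placeholders):
```
# Build an (N, 6) point cloud with normals from a mesh.
import numpy as np
import trimesh

mesh = trimesh.load("my_shape.ply", force="mesh")

# Sample surface points; look up the normal of the face each point came from.
points, face_idx = trimesh.sample.sample_surface(mesh, count=8192)
normals = mesh.face_normals[face_idx]

pc = np.concatenate([points, normals], axis=1)  # shape (8192, 6)
np.save("pc_examples/my_shape.npy", pc.astype(np.float32))
```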

## Important Notes
- Generating a mesh takes about 7 GB of GPU memory and about 30 seconds on an A6000 GPU.
- The input mesh is normalized to a unit bounding box. For better results, the up vector of the input mesh should be +Y (see the re-orientation sketch after this list).
- Limited by computational resources, MeshAnything is trained on meshes with fewer than 800 faces and cannot generate meshes with more than 800 faces. The shape of the input mesh should be sharp enough; otherwise, it is hard to represent it with only 800 faces. For this reason, outputs of feed-forward image-to-3D methods often make poor inputs due to insufficient shape quality. We suggest using results from 3D reconstruction, scanning, or SDS-based methods (like DreamCraft3D) as input to MeshAnything.
- Please refer to https://huggingface.co/spaces/Yiwen-ntu/MeshAnything/tree/main/examples for more examples.
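
If your asset is +Z-up (common for CAD and some DCC exports), you can rotate it to +Y-up before inference. A minimal sketch with trimesh, assuming the source really is +Z-up; the filenames are placeholders:
```
# Rotate a +Z-up mesh to +Y-up before feeding it to MeshAnything.
import numpy as np
import trimesh

mesh = trimesh.load("my_shape.ply", force="mesh")

# A -90 degree rotation about X maps +Z (old up) onto +Y (new up).
rot = trimesh.transformations.rotation_matrix(-np.pi / 2, [1, 0, 0])
mesh.apply_transform(rot)
mesh.export("my_shape_yup.ply")
```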

## TODO

The repo is still under construction; thanks for your patience.
- [ ] Release of training code.
- [ ] Release of a larger model.

## Acknowledgement

Our code is based on these wonderful repos:

* [MeshGPT](https://nihalsid.github.io/mesh-gpt/)
* [meshgpt-pytorch](https://github.com/lucidrains/meshgpt-pytorch)
* [Michelangelo](https://github.com/NeuralCarver/Michelangelo)
* [transformers](https://github.com/huggingface/transformers)
* [vector-quantize-pytorch](https://github.com/lucidrains/vector-quantize-pytorch)

## BibTeX
```
@misc{chen2024meshanything,
  title={MeshAnything: Artist-Created Mesh Generation with Autoregressive Transformers},
  author={Yiwen Chen and Tong He and Di Huang and Weicai Ye and Sijin Chen and Jiaxiang Tang and Xin Chen and Zhongang Cai and Lei Yang and Gang Yu and Guosheng Lin and Chi Zhang},
  year={2024},
  eprint={2406.10163},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}
```