Recent Diffusion Transformers such as DiT have shown their effectiveness in generating high-quality 2D images. However, it remains unclear whether the Transformer architecture performs equally well in 3D shape generation, as previous methods have mostly adopted the U-Net architecture. To address this, we propose a novel Diffusion Transformer for 3D shape generation called DiT-3D, which performs the denoising process directly on voxelized point clouds using plain Transformers. Compared to U-Net approaches, DiT-3D is more scalable in model size and produces higher-quality generations. DiT-3D incorporates 3D positional and patch embeddings to adaptively aggregate input from voxelized point clouds. To reduce the computational cost of self-attention in 3D shape generation, we employ 3D window attention within the Transformer blocks. Finally, linear and devoxelization layers predict the denoised point clouds. The Transformer architecture also supports efficient fine-tuning from 2D to 3D using a DiT-2D checkpoint pre-trained on ImageNet. Experimental results on the ShapeNet dataset show that DiT-3D achieves state-of-the-art performance in high-fidelity and diverse 3D point cloud generation. Notably, DiT-3D improves the 1-Nearest Neighbor Accuracy and Coverage metrics when evaluated on Chamfer Distance.
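
To make the pipeline concrete, below is a minimal PyTorch sketch of the two 3D-specific components the abstract names: patch embedding over a voxel grid with learnable 3D positional embeddings, and self-attention restricted to local 3D windows. All class names, shapes, and hyperparameters here (voxel resolution 32, patch size 4, window size 2, embedding dimension 384) are illustrative assumptions, not the paper's official implementation.

    # Minimal sketch of two DiT-3D building blocks described above.
    # All names, shapes, and hyperparameters are illustrative assumptions;
    # the official DiT-3D implementation may differ.
    import torch
    import torch.nn as nn

    class PatchEmbed3D(nn.Module):
        """Embed a voxelized point cloud into a sequence of 3D patch tokens."""
        def __init__(self, voxel_size=32, patch_size=4, in_ch=1, dim=384):
            super().__init__()
            # Non-overlapping 3D patches via a strided Conv3d, the 3D
            # analogue of DiT's 2D patchify step.
            self.proj = nn.Conv3d(in_ch, dim, kernel_size=patch_size,
                                  stride=patch_size)
            n_patches = (voxel_size // patch_size) ** 3
            # Learnable 3D positional embedding, one vector per patch token.
            self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, dim))

        def forward(self, vox):               # vox: (B, C, D, H, W) voxel grid
            x = self.proj(vox)                # (B, dim, D', H', W')
            x = x.flatten(2).transpose(1, 2)  # (B, N, dim) token sequence
            return x + self.pos_embed

    class WindowAttention3D(nn.Module):
        """Self-attention within local 3D windows to cut the O(N^2) cost."""
        def __init__(self, dim=384, heads=6, window=2):
            super().__init__()
            self.window = window
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, x, grid):           # x: (B, N, dim), grid: (D', H', W')
            B, N, C = x.shape
            d, h, w = grid
            ws = self.window
            # Partition the flat token sequence into non-overlapping ws^3 windows.
            x = x.reshape(B, d // ws, ws, h // ws, ws, w // ws, ws, C)
            x = x.permute(0, 1, 3, 5, 2, 4, 6, 7).reshape(-1, ws ** 3, C)
            x, _ = self.attn(x, x, x)         # attend only within each window
            # Undo the window partition, restoring the original token order.
            x = x.reshape(B, d // ws, h // ws, w // ws, ws, ws, ws, C)
            x = x.permute(0, 1, 4, 2, 5, 3, 6, 7).reshape(B, N, C)
            return x

    # Usage: a 32^3 voxel grid with patch size 4 gives an 8x8x8 token grid (N=512).
    tokens = PatchEmbed3D()(torch.randn(2, 1, 32, 32, 32))  # (2, 512, 384)
    out = WindowAttention3D()(tokens, grid=(8, 8, 8))       # (2, 512, 384)

Because attention is computed only inside each ws^3 window, its cost grows linearly in the number of windows rather than quadratically in the full token count, which is what makes plain Transformers tractable on dense 3D voxel grids.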