---
license: mit
tags:
- code
size_categories:
- 1B<n<10B
---
Since there is no unified public dataset of mirror-like objects, most related studies rely on self-collected or self-rendered data. Following related work such as 3DGS-DR, GaussianShader, and Ref-NeRF, I built a specular dataset containing about 60,000 images across 23 scenes (4.23 GB in total), covering both real-world captures and objects with specular materials synthesized in Blender. COLMAP is used to reconstruct the camera poses and initial sparse point clouds for each scene.
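
For reference, the sketch below shows the standard COLMAP command-line pipeline that can produce per-scene camera poses and sparse point clouds. The folder layout (`images/`, `sparse/`) and the scene path are illustrative assumptions, not the dataset's actual structure.

```python
import subprocess
from pathlib import Path

def reconstruct_scene(scene_dir: str) -> None:
    """Run the standard COLMAP SfM pipeline on one scene folder.

    Assumes scene_dir contains an `images/` subfolder with the captured
    or rendered views; folder names here are hypothetical.
    """
    scene = Path(scene_dir)
    database = scene / "database.db"
    images = scene / "images"
    sparse = scene / "sparse"
    sparse.mkdir(exist_ok=True)

    # 1. Detect and describe keypoints in every image.
    subprocess.run(["colmap", "feature_extractor",
                    "--database_path", str(database),
                    "--image_path", str(images)], check=True)

    # 2. Match features exhaustively across all image pairs.
    subprocess.run(["colmap", "exhaustive_matcher",
                    "--database_path", str(database)], check=True)

    # 3. Incremental SfM: estimate camera poses and the sparse point cloud.
    subprocess.run(["colmap", "mapper",
                    "--database_path", str(database),
                    "--image_path", str(images),
                    "--output_path", str(sparse)], check=True)

if __name__ == "__main__":
    reconstruct_scene("scenes/example_scene")  # hypothetical scene path
```

The resulting `sparse/` model (cameras, images, and 3D points) is the usual input for Gaussian-splatting-style pipelines such as those cited above.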