# [LumiView: A Synthetic Object Centric Dataset with Multiple View and Lighting](xxxx)

[Zhong Li](https://sites.google.com/site/lizhong19900216)<sup>1</sup>, [Liangchen Song](https://lsongx.github.io/)<sup>2</sup>, [Zhang Chen](https://zhangchen8.github.io/)<sup>1</sup>, Xiangyu Du<sup>1</sup>, [Lele Chen](https://lelechen63.github.io/)<sup>1</sup>, [Junsong Yuan](https://cse.buffalo.edu/~jsyuan/)<sup>2</sup>, [Yi Xu](https://www.linkedin.com/in/yi-xu-42654823/)<sup>1</sup>

<sup>1</sup>OPPO US Research Center, <sup>2</sup>University at Buffalo

# Introduction

We use Blender's physically based path-tracing renderer to render 3 textured objects: `synthetic face`, `wood train`, and `face mask`. We set up 5 × 5 camera views on the front hemisphere and 105 directional light sources around the full sphere, and render at a resolution of 800 × 800 pixels. Adjacent cameras are 10 degrees apart and adjacent light sources are 25 degrees apart on the sphere.

Please download the dataset from this link: [BlenderData](xxxxx), and put it in the folder `data/BlenderData/`.

# Dataset structure

Each object has its own folder, and each folder contains the following files:

- `cam_data_label.npz`: camera and lighting parameters. The archive stores the image names (`label['imgname']`), camera intrinsics (`label['intrinsic']`), world-to-camera rotations (`label['R_bcam']`), world-to-camera translations (`label['T_bcam']`), and light directions (`label['light_dir']`). Please refer to the GitHub code ([link](xxxxx)) for more details; a minimal loading sketch is also given after the citation below.
- `xx_xx_xx_xx.png`: images indexed by view and light.

```
root/
|-- FaceBase/
|   |-- xx_xx_xx_xx.png   # view and light index
|   |-- cam_data_label.npz
|-- facecover/
|-- toytrain/
```

# Citation

```
@inproceedings{li2023relitneulf,
  title={Relit-NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field},
  author={Li, Zhong and Song, Liangchen and Chen, Zhang and Du, Xiangyu and Chen, Lele and Yuan, Junsong and Xu, Yi},
  booktitle={Proceedings of the 31st ACM International Conference on Multimedia},
  year={2023}
}
```
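
# Loading example

A minimal sketch of how the per-object `cam_data_label.npz` could be read with NumPy, using only the keys documented above (`imgname`, `intrinsic`, `R_bcam`, `T_bcam`, `light_dir`). The array shapes in the comments and the `load_object_labels` helper are illustrative assumptions; please refer to the GitHub code linked above for the authoritative loader.

```python
import os
import numpy as np

def load_object_labels(object_dir):
    """Load camera/lighting labels for one object folder,
    e.g. data/BlenderData/FaceBase.

    Assumed (not guaranteed) contents of cam_data_label.npz:
      imgname   : (N,)       image file names, e.g. "xx_xx_xx_xx.png"
      intrinsic : camera intrinsics
      R_bcam    : world-to-camera rotation per image
      T_bcam    : world-to-camera translation per image
      light_dir : directional light direction per image
    """
    label = np.load(os.path.join(object_dir, "cam_data_label.npz"),
                    allow_pickle=True)
    return {
        "imgname": label["imgname"],
        "intrinsic": label["intrinsic"],
        "R_bcam": label["R_bcam"],
        "T_bcam": label["T_bcam"],
        "light_dir": label["light_dir"],
    }

if __name__ == "__main__":
    # Hypothetical usage on one of the object folders.
    labels = load_object_labels("data/BlenderData/FaceBase")
    print(len(labels["imgname"]), "images with camera/light annotations")
```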