Abstract

We present a new deep point cloud rendering pipeline based on multi-plane projection. The input to the network is the raw point cloud of a scene, and the output is an image or image sequence rendered from a novel view or along a novel camera trajectory. Unlike previous approaches that directly project features from 3D points onto the 2D image domain, we propose to project these features into a layered volume within the camera frustum. In this way, the visibility of 3D points is learned automatically by the network, so that both ghosting artifacts caused by incorrect visibility checks and occlusions caused by noisy points are avoided. The 3D feature volume is then fed into a 3D CNN to produce multiple layers of images corresponding to the space division along the depth direction. These layered images are blended with learned weights to produce the final rendering. Experiments show that our network produces more stable renderings than previous methods, especially near object boundaries. Moreover, our pipeline is robust to noisy and relatively sparse point clouds across a variety of challenging scenes.
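
The abstract outlines the pipeline: per-point features are scattered into a depth-layered camera-frustum volume, a 3D CNN turns the volume into per-layer images, and the layers are blended with learned weights. Below is a minimal PyTorch sketch of this idea; the function names, network sizes, and tensor shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): scatter per-point features into a
# layered camera-frustum volume, run a small 3D CNN, then blend the resulting
# depth-layer images with learned weights. All names/shapes are assumptions.
import torch
import torch.nn as nn

def project_to_frustum_volume(points_cam, feats, K, H, W, D, near, far):
    """points_cam: (N, 3) points in camera coordinates, feats: (N, C), K: 3x3 intrinsics tensor."""
    x, y, z = points_cam.unbind(dim=1)
    u = (K[0, 0] * x / z + K[0, 2]).round().long()          # pixel column
    v = (K[1, 1] * y / z + K[1, 2]).round().long()          # pixel row
    d = ((z - near) / (far - near) * D).floor().long()      # depth-layer index
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (d >= 0) & (d < D) & (z > 0)
    u, v, d, feats = u[valid], v[valid], d[valid], feats[valid]
    C = feats.shape[1]
    volume = torch.zeros(C, D, H, W)
    count = torch.zeros(1, D, H, W)
    idx = (d * H + v) * W + u                                # flattened voxel index
    volume.view(C, -1).index_add_(1, idx, feats.t())
    count.view(1, -1).index_add_(1, idx, torch.ones(1, idx.numel()))
    return volume / count.clamp(min=1)                       # average features per voxel

class LayeredRenderer(nn.Module):
    def __init__(self, c_in):
        super().__init__()
        # 3D CNN over the frustum volume; predicts RGB plus a blend weight per depth layer.
        self.net3d = nn.Sequential(
            nn.Conv3d(c_in, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 4, 3, padding=1),
        )

    def forward(self, volume):                               # volume: (B, C, D, H, W)
        out = self.net3d(volume)                             # (B, 4, D, H, W)
        rgb, w = out[:, :3], out[:, 3:]
        w = torch.softmax(w, dim=2)                          # normalize weights across depth layers
        return (rgb * w).sum(dim=2)                          # blended (B, 3, H, W) image
```

Averaging features per voxel is only one possible aggregation; the essential point is that visibility is resolved by the 3D network and the learned softmax blending over depth layers rather than by an explicit z-buffer test.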

Comment: 17 pages


Original document

The different versions of the original document can be found in:

http://dx.doi.org/10.1109/cvpr42600.2020.00785
https://openaccess.thecvf.com/content_CVPR_2020/papers/Dai_Neural_Point_Cloud_Rendering_via_Multi-Plane_Projection_CVPR_2020_paper.pdf
https://arxiv.org/pdf/1912.04645.pdf
https://openaccess.thecvf.com/content_CVPR_2020/html/Dai_Neural_Point_Cloud_Rendering_via_Multi-Plane_Projection_CVPR_2020_paper.html
https://arxiv.org/pdf/1912.04645
http://www.arxiv-vanity.com/papers/1912.04645
https://academic.microsoft.com/#/detail/3035318263

Document information

Published on 01/01/2019

Volume 2019, 2019
DOI: 10.1109/cvpr42600.2020.00785
Licence: CC BY-NC-SA
