VAPCNet: Viewpoint-Aware 3D Point Cloud Completion

Zhiheng Fu, Longguang Wang, Lian Xu, Zhiyong Wang, Hamid Laga, Yulan Guo, Farid Boussaid, Mohammed Bennamoun

Research output: Conference paper (peer-reviewed)



Most existing learning-based 3D point cloud completion methods ignore the fact that the completion process is tightly coupled with the viewpoint of a partial scan. However, the viewpoints of incompletely scanned objects in real-world applications are usually unknown, and directly estimating the viewpoint of each incomplete object is time-consuming and incurs a high annotation cost. In this paper, we therefore propose an unsupervised viewpoint representation learning scheme for 3D point cloud completion that requires no explicit viewpoint estimation. Specifically, we learn abstract representations of partial scans that distinguish viewpoints in the representation space, rather than estimating them explicitly in 3D space. We also introduce a Viewpoint-Aware Point cloud Completion Network (VAPCNet) that flexibly adapts to various viewpoints based on the learned representations. The proposed viewpoint representation learning scheme extracts discriminative representations that capture accurate viewpoint information. Experiments on two popular public datasets show that VAPCNet achieves state-of-the-art performance on the point cloud completion task. Source code is available at https://github.com/FZH92128/VAPCNet.
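The core idea in the abstract is to compare viewpoints in a learned embedding space instead of estimating them in 3D. As a minimal illustrative sketch (not the paper's actual architecture), the encoder below maps a partial scan to a unit-norm embedding via pooled point statistics and a fixed random projection, standing in for a learned point network; embeddings of scans from similar viewpoints would then score high under cosine similarity. All function names and the pooling scheme here are hypothetical.

```python
import numpy as np

def encode_viewpoint(partial_scan, proj=None, dim=8, seed=0):
    """Map an (N, 3) partial scan to a unit-norm viewpoint embedding.

    Hypothetical stand-in for a learned encoder: a real system would use
    a trained point network; here we pool simple per-axis statistics and
    apply a fixed random projection so the example stays self-contained.
    """
    rng = np.random.default_rng(seed)
    # Pool per-axis mean and max as crude global features of the scan.
    feats = np.concatenate([partial_scan.mean(axis=0),
                            partial_scan.max(axis=0)])
    if proj is None:
        proj = rng.standard_normal((dim, feats.size))
    emb = proj @ feats
    # Normalise so viewpoints are compared by direction only.
    return emb / (np.linalg.norm(emb) + 1e-8)

def viewpoint_similarity(emb_a, emb_b):
    """Cosine similarity between two unit-norm viewpoint embeddings."""
    return float(emb_a @ emb_b)
```

In a trained version, embeddings would be optimised (e.g. with a contrastive objective over partial scans of the same object) so that scans captured from nearby viewpoints cluster together, and the completion network would be conditioned on this embedding.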
Original language: English
Title of host publication: Proceedings of the IEEE/CVF International Conference on Computer Vision
Publisher: IEEE (Institute of Electrical and Electronics Engineers)
Number of pages: 11
Publication status: Published - 2023
Event: 2023 International Conference on Computer Vision (ICCV 2023) - Paris Convention Center, Paris, France
Duration: 4 Oct 2023 - 6 Oct 2023


Conference: 2023 International Conference on Computer Vision