SLFNet: A Stereo and LiDAR Fusion Network for Depth Completion

Yongjian Zhang, Longguang Wang, Kunhong Li, Zhiheng Fu, Yulan Guo

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)


Acquiring dense and precise depth information in real time is in high demand for robotic perception and autonomous driving. Motivated by the complementary nature of stereo images and LiDAR point clouds, we propose an efficient stereo-LiDAR fusion network (SLFNet) to predict a dense depth map of a scene. Specifically, the LiDAR point cloud is first projected onto each image plane of the stereo pair to generate sparse RGB-D maps. Then, multi-modal feature fusion is performed between the RGB image and the sparse RGB-D map of the same viewpoint, and the resultant features are used to generate a coarse disparity map for stereo fusion. Next, complementary geometric information from the stereo images and sparse RGB-D maps is incorporated to perform occlusion-aware refinement. Finally, an edge-aware refinement module is applied to encourage depth discontinuities to align with edges in the image. Experimental results demonstrate that our network effectively fuses stereo images and point clouds to produce accurate depth estimates at 6 FPS, 8× faster than existing methods. Comparative results show that our network achieves state-of-the-art performance on the KITTI and Virtual KITTI2 datasets.
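The first step of the pipeline, projecting a LiDAR point cloud onto a camera image plane to form a sparse depth map, can be sketched as follows. This is a generic pinhole-camera projection under assumed names (`K` for the intrinsic matrix, `T` for the LiDAR-to-camera extrinsic), not the authors' implementation:

```python
import numpy as np

def project_lidar_to_depth(points, K, T, h, w):
    """Project LiDAR points (N, 3) into a sparse (h, w) depth map.

    points: 3D points in the LiDAR frame.
    K:      3x3 camera intrinsic matrix.
    T:      4x4 LiDAR-to-camera extrinsic transform.
    """
    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 4)
    cam = (T @ pts_h.T).T[:, :3]                                # (N, 3)

    # Keep only points in front of the camera.
    cam = cam[cam[:, 2] > 0]

    # Perspective projection onto the image plane.
    uv = (K @ cam.T).T
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    z = cam[:, 2]

    # Discard projections that fall outside the image bounds.
    mask = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, z = u[mask], v[mask], z[mask]

    # Write farthest points first so the nearest LiDAR return wins
    # when several points land on the same pixel.
    order = np.argsort(-z)
    depth = np.zeros((h, w), dtype=np.float32)
    depth[v[order], u[order]] = z[order]
    return depth
```

Pairing the resulting sparse depth map with the RGB image of the same viewpoint yields the sparse RGB-D input described in the abstract; the same projection is repeated for the second camera of the stereo pair.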

Original language: English
Pages (from-to): 10605-10612
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Issue number: 4
Publication status: Published - 1 Oct 2022

