Abstract
Indirect methods for visual SLAM are gaining popularity due to their robustness to environmental variations. ORB-SLAM2 [1] is a benchmark method in this domain; however, it spends significant time computing descriptors that are never reused unless a frame is selected as a keyframe. To overcome this problem, we present FastORB-SLAM, which is lightweight and efficient because it tracks keypoints between adjacent frames without computing descriptors. To achieve this, a two-stage descriptor-independent keypoint matching method is proposed based on sparse optical flow. In the first stage, we predict initial keypoint correspondences via a simple but effective motion model and then robustly establish the correspondences via pyramid-based sparse optical flow tracking. In the second stage, we leverage motion smoothness and epipolar geometry constraints to refine the correspondences. In particular, our method computes descriptors only for keyframes. We test FastORB-SLAM on the TUM and ICL-NUIM RGB-D datasets and compare its accuracy and efficiency against nine existing RGB-D SLAM methods. Qualitative and quantitative results show that our method achieves state-of-the-art accuracy and is about twice as fast as ORB-SLAM2.
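The sketch below is an illustrative reading of the two-stage matching idea described in the abstract, not the authors' implementation. It assumes OpenCV's pyramidal Lucas-Kanade tracker for stage one and approximates stage two with a median-flow smoothness check plus RANSAC on the fundamental matrix; the helper `predict_with_motion_model`, the pixel thresholds, and the constant-velocity assumption are all hypothetical choices for illustration.

```python
# Hedged sketch: descriptor-free keypoint tracking between adjacent frames.
import cv2
import numpy as np

def predict_with_motion_model(prev_pts, prev_flow):
    # Hypothetical constant-velocity prediction: shift each keypoint by the
    # displacement observed over the previous frame pair.
    return prev_pts + prev_flow

def track_keypoints(prev_img, next_img, prev_pts, prev_flow):
    prev32 = prev_pts.astype(np.float32)

    # Stage 1: seed the tracker with motion-model predictions, then establish
    # correspondences via pyramid-based sparse optical flow (no descriptors).
    init = predict_with_motion_model(prev32, prev_flow).astype(np.float32)
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_img, next_img, prev32, init.copy(),
        winSize=(21, 21), maxLevel=3,
        flags=cv2.OPTFLOW_USE_INITIAL_FLOW)
    ok = status.ravel() == 1

    # Stage 2: refine correspondences. Motion smoothness is approximated by
    # rejecting flow vectors far from the median flow (threshold assumed);
    # epipolar consistency is enforced with RANSAC on the fundamental matrix.
    flow = next_pts - prev32
    med = np.median(flow[ok], axis=0)
    ok &= np.linalg.norm(flow - med, axis=1) < 30.0

    p0, p1 = prev32[ok], next_pts[ok]
    if len(p0) >= 8:
        _, inliers = cv2.findFundamentalMat(p0, p1, cv2.FM_RANSAC, 1.0, 0.99)
        if inliers is not None:
            keep = inliers.ravel() == 1
            p0, p1 = p0[keep], p1[keep]
    return p0, p1
```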
| Original language | English |
|---|---|
| Pages (from-to) | 1433-1446 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Image Processing |
| Volume | 31 |
| Early online date | 24 Dec 2021 |
| DOIs | |
| Publication status | Published - 2022 |