Fast ORB-SLAM without Keypoint Descriptors

Qiang Fu, Hongshan Yu, Xiaolong Wang, Zhengeng Yang, Yong He, Hong Zhang, Ajmal Mian.

Research output: Contribution to journal › Article › peer-review

7 Citations (Scopus)


Indirect methods for visual SLAM are gaining popularity due to their robustness to environmental variations. ORB-SLAM2 [1] is a benchmark method in this domain; however, it spends significant time computing descriptors that are never reused unless a frame is selected as a keyframe. To overcome this problem, we present FastORB-SLAM, which is lightweight and efficient because it tracks keypoints between adjacent frames without computing descriptors. To achieve this, we propose a two-stage descriptor-independent keypoint matching method based on sparse optical flow. In the first stage, we predict initial keypoint correspondences via a simple but effective motion model and then robustly establish the correspondences via pyramid-based sparse optical-flow tracking. In the second stage, we leverage motion-smoothness and epipolar-geometry constraints to refine the correspondences. In particular, our method computes descriptors only for keyframes. We test FastORB-SLAM on the TUM and ICL-NUIM RGB-D datasets and compare its accuracy and efficiency to nine existing RGB-D SLAM methods. Qualitative and quantitative results show that our method achieves state-of-the-art accuracy and is about twice as fast as ORB-SLAM2.
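The two stages described in the abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' implementation): stage one seeds each keypoint's search position with a constant-velocity motion model standing in for "a simple but effective motion model", and stage two filters the resulting matches with a motion-smoothness check (agreement with the median flow) and the epipolar constraint x2ᵀ F x1 ≈ 0. The tolerances `smooth_tol` and `epi_tol` are illustrative parameters, not values from the paper, and the fundamental matrix `F` is assumed given.

```python
import numpy as np

def predict_keypoints(kpts_prev, motion):
    """Stage 1 (sketch): a constant-velocity motion model predicts where each
    keypoint from the previous frame should appear in the current frame.
    In the paper these predictions seed pyramidal sparse optical-flow
    tracking; here we just return the predicted positions."""
    return kpts_prev + motion

def refine_matches(kpts1, kpts2, F, smooth_tol=3.0, epi_tol=1e-2):
    """Stage 2 (sketch): keep matches whose displacement agrees with the
    median flow (motion smoothness) and that satisfy the epipolar
    constraint |x2^T F x1| < epi_tol. Returns a boolean inlier mask."""
    disp = kpts2 - kpts1
    median_flow = np.median(disp, axis=0)
    smooth = np.linalg.norm(disp - median_flow, axis=1) < smooth_tol
    # Homogeneous coordinates for the epipolar residual x2^T F x1.
    h1 = np.hstack([kpts1, np.ones((len(kpts1), 1))])
    h2 = np.hstack([kpts2, np.ones((len(kpts2), 1))])
    epi = np.abs(np.einsum('ij,jk,ik->i', h2, F, h1)) < epi_tol
    return smooth & epi

# Toy example: camera translates purely along x, so with identity
# intrinsics F = [t]_x and the epipolar residual reduces to v1 - v2.
kpts1 = np.array([[10., 20.], [30., 40.], [50., 60.]])
motion = np.array([5., 0.])
kpts2 = predict_keypoints(kpts1, motion)
kpts2[2] += [0., 10.]  # inject one outlier that drifts vertically
F = np.array([[0., 0., 0.], [0., 0., -1.], [0., 1., 0.]])
mask = refine_matches(kpts1, kpts2, F)
print(mask)  # → [ True  True False]
```

Because descriptors are never computed in this loop, the per-frame cost is just the flow tracking plus these cheap geometric checks, which is the source of the roughly 2x speedup over ORB-SLAM2 reported in the abstract.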

Original language: English
Pages (from-to): 1433-1446
Number of pages: 14
Journal: IEEE Transactions on Image Processing
Early online date: 24 Dec 2021
Publication status: Published - 2022


