Visual-Inertial SLAM Using the Extended Kalman Filter for Autonomous Navigation
DOI: https://doi.org/10.31649/1997-9266-2025-179-2-118-126

Keywords: visual-inertial SLAM, extended Kalman filter, inertial measurement unit, KITTI dataset

Abstract
The study is devoted to the use of visual-inertial SLAM based on the extended Kalman filter (EKF) for autonomous navigation tasks. It uses data from the KITTI dataset, which includes stereo camera images as well as linear and angular velocities obtained from an inertial measurement unit (IMU). To detect visual landmarks, ORB features are used, chosen for their fast computation and their robustness to changes in lighting, rotation, and scale. In addition, an algorithm for selecting relevant landmarks has been developed, which increases both the accuracy and the speed of SLAM.
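As an illustration, the sketch below detects ORB features with OpenCV and keeps only the strongest ones. The paper's actual relevance-selection algorithm is not reproduced here; ranking keypoints by detector response is a simple stand-in assumption, and the `max_landmarks` parameter is hypothetical.

```python
import cv2
import numpy as np

def detect_landmarks(image_gray, max_landmarks=80):
    """Detect ORB features and keep the strongest max_landmarks of them.

    Note: the paper's relevance criterion is not specified here; sorting
    by detector response is only an illustrative stand-in.
    """
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(image_gray, None)
    if not keypoints:
        return [], None
    # Rank keypoints by response (corner strength), strongest first.
    order = np.argsort([-kp.response for kp in keypoints])[:max_landmarks]
    return [keypoints[i] for i in order], descriptors[order]
```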
A series of experiments showed that system performance depends heavily on the tuning of the noise covariances of the motion model (Q) and of the sensor measurements (R). The optimal Q/R ratio for the studied datasets was found to lie in the range 0.001…0.00001.
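A minimal generic EKF step, sketched below, shows where Q and R enter the filter and why their ratio governs how much the estimate trusts the motion model versus the sensors. The state, models, and Jacobians are placeholders, not the paper's exact formulation.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F_jac, h, H_jac, Q, R):
    """One generic EKF predict/update cycle (illustrative only).

    Larger Q means less trust in the motion model; larger R means
    less trust in the sensor measurements.
    """
    # --- Predict with the motion model f and its Jacobian F ---
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q

    # --- Update with the measurement model h and its Jacobian H ---
    H = H_jac(x_pred)
    y = z - h(x_pred)                    # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

Sweeping the Q/R ratio, for instance by scaling Q while holding R fixed, over 0.00001…0.001 corresponds to the tuning described above.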
To balance speed against accuracy, the influence of the number of relevant visual features was investigated over the range 10 to 120. The experiments showed that the optimal number of landmarks is 50…80, with the best results achieved at Q/R = 0.001 or Q/R = 0.00001, depending on the dataset. Trajectory accuracy was assessed with the Absolute Trajectory Error (ATE) metric, using GPS data as the ground truth.
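The ATE figure can be computed, in its common RMSE form, as in the sketch below. It assumes the estimated and GPS trajectories are already time-associated and aligned; a full pipeline would first align them rigidly (e.g., with the Horn/Umeyama method).

```python
import numpy as np

def absolute_trajectory_error(est_xyz, gt_xyz):
    """RMSE form of ATE over time-associated, pre-aligned trajectories.

    est_xyz, gt_xyz: (N, 3) arrays of estimated and ground-truth
    (here GPS-derived) positions. The usual rigid alignment step
    (Horn/Umeyama) is omitted for brevity.
    """
    diffs = est_xyz - gt_xyz
    return float(np.sqrt(np.mean(np.sum(diffs ** 2, axis=1))))
```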
The results confirmed that visual-inertial SLAM is an effective tool for autonomous navigation, especially when GPS is unavailable. Using the EKF in visual-inertial SLAM reduces computational cost compared to other methods, which makes it well suited to resource-constrained devices, in particular mobile robots.
Further research will focus on optimizing landmark selection, improving noise modeling, and adapting the algorithm to monocular cameras in order to improve SLAM accuracy and efficiency. In addition, research on the invariant extended Kalman filter (IEKF) is planned, as it has the potential to improve mapping accuracy and enhance system robustness in complex and dynamic environments.
References
A. Zharkov, R. Maslii, and V. Harmash, "Analysis of Visual SLAM approaches for the autonomous robot navigation task," Herald Khmelnytskyi Nat. Univ. Tech. Sci., vol. 335, no. 3(1), pp. 67-77, 2024 (in Ukrainian). https://doi.org/10.31891/2307-5732-2024-335-3-10.
M. Quan, S. Piao, M. Tan, and S.-S. Huang, "Accurate Monocular Visual-Inertial SLAM Using a Map-Assisted EKF Approach," IEEE Access, vol. 7, pp. 34289-34300, 2019. https://doi.org/10.1109/access.2019.2904512.
Y. Ning, "A Comprehensive Introduction of Visual-Inertial Navigation," arXiv preprint, 2023. https://arxiv.org/abs/2307.11758.
J. A. Castellanos, J. Neira, and J. D. Tardós, "Limits to the consistency of EKF-based SLAM," IFAC Proc., vol. 37, no. 8, pp. 716-721, 2004. https://doi.org/10.1016/s1474-6670(17)32063-3.
S. Konatowski, P. Kaniewski, and J. Matuszewski, "Comparison of estimation accuracy of EKF, UKF and PF filters," Annu. Navig., vol. 23, no. 1, pp. 69-87, 2016. https://doi.org/10.1515/aon-2016-0005.
C. Urrea and R. Agramonte, "Kalman filter: Historical overview and review of its use in robotics 60 years after its creation," J. Sensors, pp. 1-21, 2021. https://doi.org/10.1155/2021/9674015.
G. P. Huang, A. I. Mourikis, and S. I. Roumeliotis, "Observability-based rules for designing consistent EKF SLAM estimators," Int. J. Robot. Res., vol. 29, no. 5, pp. 502-528, 2009. https://doi.org/10.1177/0278364909353640.
A. Barrau and S. Bonnabel, "Invariant Kalman filtering," Annu. Rev. Control, Robot., Auton. Syst., vol. 1, no. 1, pp. 237-257, 2018. https://doi.org/10.1146/annurev-control-060117-105010.
A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," Int. J. Robot. Res., vol. 32, no. 11, pp. 1231-1237, 2013. https://doi.org/10.1177/0278364913491297.
A. Geiger, The KITTI Vision Benchmark Suite. [Online]. Available: https://www.cvlibs.net/datasets/kitti/index.php.
E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An efficient alternative to SIFT or SURF," in 2011 IEEE Int. Conf. Comput. Vis. (ICCV), Barcelona, Spain, 6–13 Nov. 2011. https://doi.org/10.1109/iccv.2011.6126544.