Adaptive Vision-Aided Integrated Navigation for Dynamic Unknown Environments
Abstract
In this research, a novel method for visual odometry (VO) and its integration with multi-sensor navigation systems for vehicular platforms is proposed. The proposed method partitions the field of view of a single camera into regions of interest, each of which is likely to contain a different type of visual feature. By applying computer vision techniques, the pose is estimated up to an ambiguous scale factor. The proposed method uses aiding measurements from the vehicle's odometer to adaptively resolve the scale-factor ambiguity inherent in monocular camera systems. Unlike some state-of-the-art approaches, this work does not depend on offline pre-processing, predefined landmarks, or visual maps. In addition, this work addresses unknown, uncontrolled environments where moving objects are likely to exist. An innovative odometer-aided Local Bundle Adjustment (LBA), together with a fuzzy C-means clustering mechanism, is proposed to reject outliers corresponding to moving objects. A Gaussian mixture approach is also applied to detect visual background regions during stationary periods, which enables further rejection of moving objects. Finally, an empirical scoring method is applied to compute a matching score for the different visual features; this score is used as the measurement noise covariance in a Kalman filter that integrates VO-estimated pose changes within a larger multi-sensor integrated navigation system. Experimental work was performed with a physical vehicular platform equipped with MEMS inertial sensors, GPS, speed measurements, and a GPS-enabled camera. The experiments cover three vehicular test trajectories in downtown Toronto and the surrounding areas, and show significant navigation improvements during long GPS outages when only VO is fused with the inertial sensors and the vehicle's speed measurements.
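As an illustration of the odometer-aided scale resolution described above, the following is a minimal Python sketch: the unit-norm translation recovered from monocular VO (e.g., from essential-matrix decomposition) is rescaled so that its magnitude matches the distance travelled according to the odometer over the frame interval. All function and parameter names are hypothetical and are not taken from the paper.

```python
import numpy as np

def resolve_monocular_scale(t_unit, odometer_speed_mps, frame_dt_s):
    """Rescale the up-to-scale translation from monocular VO using
    the metric distance reported by the vehicle's odometer.

    t_unit             -- 3-vector translation, known only up to scale
    odometer_speed_mps -- vehicle speed from the odometer (m/s)
    frame_dt_s         -- time between the two camera frames (s)
    """
    travelled = odometer_speed_mps * frame_dt_s   # metric distance driven
    direction = t_unit / np.linalg.norm(t_unit)   # keep the VO direction
    return travelled * direction                  # metric translation

# Example: ~12.5 m/s over a 50 ms frame gap
t_metric = resolve_monocular_scale(np.array([0.1, 0.0, 0.99]), 12.5, 0.05)
```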
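The outlier-rejection step can likewise be sketched as fuzzy C-means clustering applied to per-feature LBA reprojection residuals, with features assigned to the higher-residual cluster treated as lying on moving objects. This is an assumed formulation for illustration, not the paper's exact algorithm, and all names are hypothetical.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means; returns cluster centers and the
    membership matrix U of shape (n_samples, c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))      # standard FCM membership
        U /= U.sum(axis=1, keepdims=True)       # normalize over clusters
    return centers, U

def reject_moving_features(residuals, threshold=0.6):
    """Cluster per-feature reprojection residuals into two groups and
    flag features belonging to the higher-residual cluster, which is
    assumed to correspond to moving objects."""
    X = np.asarray(residuals, dtype=float).reshape(-1, 1)
    centers, U = fuzzy_c_means(X, c=2)
    moving = int(np.argmax(centers[:, 0]))      # larger-residual cluster
    return U[:, moving] > threshold             # True -> reject feature
```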
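Finally, a hedged sketch of how a matching score might drive the VO measurement noise covariance in a Kalman filter update; the linear mapping from score to noise level is an assumption for illustration only, not the paper's empirical scoring method.

```python
import numpy as np

def vo_measurement_update(x, P, z_vo, H, match_score, r_min=0.01, r_max=1.0):
    """Standard Kalman measurement update in which the VO measurement
    noise covariance R is scaled by a matching score in [0, 1]
    (1 = well-matched features, 0 = poor matches)."""
    # Hypothetical mapping: better matching -> smaller measurement noise.
    r = r_max - match_score * (r_max - r_min)
    R = r * np.eye(len(z_vo))
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z_vo - H @ x)                # state update
    P = (np.eye(len(x)) - K @ H) @ P          # covariance update
    return x, P
```

In this sketch, a poorly scored VO pose change simply carries less weight in the fused solution rather than being discarded outright.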