Show simple item record

dc.contributor.author: Rahman, Muhammed
dc.date.accessioned: 2017-09-06T23:37:49Z
dc.date.available: 2017-09-06T23:37:49Z
dc.identifier.uri: http://hdl.handle.net/1974/22653
dc.description.abstract: Safe, efficient, and comfortable travel has always been a fundamental human necessity. From reducing commute times to saving lives, autonomous vehicles promise to be an indispensable tool in the age of modern urban transportation. However, implementing such a disruptive technology comes with significant challenges, a major one being accurate vehicle positioning and localization. In environments such as urban cores, parking lots, and dense foliage, positioning information provided by the Global Navigation Satellite System (GNSS) deteriorates significantly. GNSS is therefore increasingly integrated with other systems, such as the inertial navigation system (INS) and Visual Odometry (VO), to bridge these outages. The major drawback of current integration systems is that they are unable to provide a stable navigation estimate during the extended and frequent GNSS outages common to autonomous vehicles. To improve overall system accuracy, this thesis presents a multi-sensor navigation solution that integrates GNSS with a low-cost inertial measurement unit (IMU) and VO. First, a switching architecture is detailed, implementing a loosely coupled extended Kalman filter (EKF) that fuses VO updates only when GPS is unavailable, reducing computational load while limiting the errors inherent in an INS. Second, in a method termed R-AVO (Reduced-Aided Visual Odometry), the orientation and translation of the vehicle, as determined by a low-drift INS algorithm, are used to evaluate which feature matches from the processed image frames should be considered in VO. This method is shown to be both more accurate and less computationally expensive than classic feature outlier rejection schemes, which are iterative in nature. Reducing the computational complexity of VO paves the way for even more sensors to be utilized by a vehicle. The developed algorithms are evaluated on several real road trajectories, including simulated outages as long as 10 minutes. To ensure a generic implementation, two different land vehicles are used in the trajectories and two different IMUs are tested.
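The switching fusion the abstract describes can be sketched in miniature as follows. This is a hypothetical illustration only, with a scalar position state and made-up noise values; the thesis's actual filter operates on a full INS error state. Function names (`ekf_update`, `switching_fusion`) and all parameters are assumptions for the sketch.

```python
# Hypothetical sketch of the switching architecture: propagate with the
# INS, prefer a GNSS update when one is available, and fall back to a VO
# update only during a GNSS outage. Scalar 1-D state, illustrative noise
# values -- not the thesis's implementation.

def ekf_update(x, P, z, R):
    """Scalar Kalman measurement update (observation matrix H = 1)."""
    K = P / (P + R)          # Kalman gain
    x = x + K * (z - x)      # corrected state estimate
    P = (1.0 - K) * P        # corrected covariance
    return x, P

def switching_fusion(x, P, ins_delta, Q, gnss=None, vo=None,
                     R_gnss=1.0, R_vo=4.0):
    """One filter cycle: INS prediction, then a switched update.

    VO is fused only when GNSS is unavailable, which is what lets the
    system avoid paying the VO processing cost the rest of the time.
    """
    # Prediction: dead-reckon with the INS increment, inflate covariance.
    x = x + ins_delta
    P = P + Q

    if gnss is not None:          # GNSS available: normal update
        x, P = ekf_update(x, P, gnss, R_gnss)
    elif vo is not None:          # GNSS outage: fall back to VO
        x, P = ekf_update(x, P, vo, R_vo)
    # else: pure INS prediction (no aiding source this cycle)
    return x, P
```

In a vector-state implementation the same switch would select between GNSS and VO measurement models, with the VO update typically assigned a larger measurement covariance to reflect its drift.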
dc.language.iso: eng
dc.relation.ispartofseries: Canadian theses
dc.rights: Queen's University's Thesis/Dissertation Non-Exclusive License for Deposit to QSpace and Library and Archives Canada
dc.rights: ProQuest PhD and Master's Theses International Dissemination Agreement
dc.rights: Intellectual Property Guidelines at Queen's University
dc.rights: Copying and Preserving Your Thesis
dc.rights: This publication is made available by the authority of the copyright owner solely for the purpose of private study and research and may not be copied or reproduced except as permitted by the copyright laws without written authority from the copyright owner.
dc.subject: Autonomous Vehicle
dc.subject: Visual Odometry
dc.subject: Multi Sensor Fusion
dc.subject: Inertial Navigation System
dc.subject: Global Positioning System
dc.title: Integrated Visual Odometry For Improved Autonomous Vehicle Navigation
dc.type: thesis
dc.description.degree: M.A.Sc.
dc.contributor.supervisor: Noureldin, Aboelmagd
dc.contributor.supervisor: Givigi, Sidney
dc.contributor.department: Electrical and Computer Engineering
dc.degree.grantor: Queen's University at Kingston

