Show simple item record

dc.contributor.author: Hewitt, Robert
dc.contributor.other: Queen's University (Kingston, Ont.). Theses (Queen's University (Kingston, Ont.)) [en]
dc.date.accessioned: 2018-03-27T13:57:43Z
dc.date.available: 2018-03-27T13:57:43Z
dc.identifier.uri: http://hdl.handle.net/1974/23989
dc.description.abstract: Where am I? This question remains one of the fundamental challenges in robotics research. The ability of a robot to localize itself and map its environment has proven to be a difficult and rich research problem. While significant progress has been made, localization and mapping remain difficult in dynamic, three-dimensional environments and over long distances. Stereo cameras are a proven workhorse for Visual Odometry (VO) and three-dimensional Simultaneous Localization and Mapping (SLAM), but they require reliable lighting conditions and matching regions in both images. Light Detection And Ranging (LiDAR) sensors provide an alternative: they are lighting-invariant, provide dense depth information directly, and produce intensity information that resembles grayscale camera images. In many cases where lighting is unavailable or inconsistent, such as underground mining or planetary exploration, LiDAR is particularly well suited to the task of localization. Both VO and SLAM can use a type of nonlinear optimization called bundle adjustment (BA) to solve for the optimal sensor pose and landmark positions given a set of matched observations at two or more separate poses. This thesis develops a version of BA, called IntenseBA, that estimates a map of landmarks, augmenting the standard three-dimensional point landmark with surface-normal and reflectivity states. Because LiDAR intensity observations depend on the sensor pose and these landmark states, the algorithm can use observations of the landmarks to probabilistically determine the most likely estimate of the sensor pose and landmarks. The problem is shown to be observable in all states, and its sensitivity to noise in each observation is analyzed through simulation. A calibration procedure and an analysis of modern keypoint algorithms are presented, which allow the theoretical model to be applied to real data from a SwissRanger SR4000 Time-of-Flight (ToF) camera. Experiments were conducted in the European Space Agency's Planetary Utilisation Testbed, which emulates Martian terrain. These experiments used the IntenseBA algorithm to perform VO and show that it can accurately estimate all landmark and pose states, improving accuracy over traditional and state-of-the-art approaches by incorporating the additional intensity observations. [en_US]
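As a minimal sketch of the kind of augmented bundle-adjustment objective the abstract describes (the symbols and measurement models here are illustrative assumptions, not taken from the thesis itself), the joint estimate over sensor poses and augmented landmarks might be written as:

```latex
% Illustrative augmented BA cost. Sensor poses T_k and landmark states
% (position p_j, surface normal n_j, reflectivity rho_j) are jointly
% estimated from geometric observations y_{kj} (model g, covariance
% Sigma_y) and intensity observations i_{kj} (model h, variance sigma_i^2).
\min_{\{T_k\},\,\{p_j,\, n_j,\, \rho_j\}}
  \sum_{k,j}
    \left\| y_{kj} - g(T_k, p_j) \right\|^2_{\Sigma_y^{-1}}
  + \frac{\left( i_{kj} - h(T_k, p_j, n_j, \rho_j) \right)^2}{\sigma_i^2}
```

The first residual is the standard geometric BA term; the second is the intensity term that, per the abstract, ties the observations to the sensor pose and the added surface-normal and reflectivity states.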
dc.language.iso: en [en_US]
dc.relation.ispartofseries: Canadian theses [en]
dc.rights: Queen's University's Thesis/Dissertation Non-Exclusive License for Deposit to QSpace and Library and Archives Canada [en]
dc.rights: ProQuest PhD and Master's Theses International Dissemination Agreement [en]
dc.rights: Intellectual Property Guidelines at Queen's University [en]
dc.rights: Copying and Preserving Your Thesis [en]
dc.rights: This publication is made available by the authority of the copyright owner solely for the purpose of private study and research and may not be copied or reproduced except as permitted by the copyright laws without written authority from the copyright owner. [en]
dc.subject: Robotics [en_US]
dc.subject: Time-of-Flight [en_US]
dc.subject: Localization [en_US]
dc.subject: Mapping [en_US]
dc.subject: Visual Odometry [en_US]
dc.subject: SLAM [en_US]
dc.subject: LiDAR [en_US]
dc.subject: Navigation [en_US]
dc.title: Intense Navigation: Using Active Sensor Intensity Observations To Improve Localization and Mapping [en_US]
dc.type: Thesis [en]
dc.description.degree: Doctor of Philosophy [en_US]
dc.contributor.supervisor: Marshall, Joshua
dc.contributor.department: Electrical and Computer Engineering [en_US]

