Show simple item record

dc.contributor.author: Jardine, Peter
dc.contributor.other: Queen's University (Kingston, Ont.). Theses (Queen's University (Kingston, Ont.)) [en]
dc.date.accessioned: 2018-05-25T18:56:51Z
dc.date.available: 2018-05-25T18:56:51Z
dc.identifier.uri: http://hdl.handle.net/1974/24245
dc.description.abstract: This research investigates the use of learning techniques to select control parameters in the Model Predictive Control (MPC) of autonomous vehicles. The general problem of having a vehicle track a target while adhering to constraints and minimizing control effort is defined. We further expand the problem to consider a vehicle for which the underlying dynamics are not well known. A game of Finite Action-Set Learning Automata (FALA) is used to select the weighting parameters in the MPC cost function. Fast Orthogonal Search (FOS) is combined with a Kalman filter to identify the model while simultaneously estimating the system states. Planar inequality constraints are used to avoid spherical obstacles. The performance of these techniques is assessed for applications involving ground and aerial vehicles. Simulation and experimental results demonstrate that the combined FOS-FALA architecture reduces the overall number of design parameters that must be selected; the amount of reduction depends on the specific application. For the differential-drive robot case considered here, the number of parameters was reduced from six to one. Furthermore, the learning strategy links the selection of these parameters to the desired performance, a significant improvement over the typical approach of trial and error. [en_US]
dc.language.iso: en [en_US]
dc.relation.ispartofseries: Canadian theses [en]
dc.rights: CC0 1.0 Universal
dc.rights: Queen's University's Thesis/Dissertation Non-Exclusive License for Deposit to QSpace and Library and Archives Canada [en]
dc.rights: ProQuest PhD and Master's Theses International Dissemination Agreement [en]
dc.rights: Intellectual Property Guidelines at Queen's University [en]
dc.rights: Copying and Preserving Your Thesis [en]
dc.rights: This publication is made available by the authority of the copyright owner solely for the purpose of private study and research and may not be copied or reproduced except as permitted by the copyright laws without written authority from the copyright owner. [en]
dc.rights.uri: http://creativecommons.org/publicdomain/zero/1.0/
dc.subject: Unmanned Aerial Vehicles [en_US]
dc.subject: Model Predictive Control [en_US]
dc.subject: Learning Automata [en_US]
dc.subject: Machine Learning [en_US]
dc.subject: Reinforcement Learning [en_US]
dc.subject: Fast Orthogonal Search [en_US]
dc.title: A Reinforcement Learning Approach to Predictive Control Design: Autonomous Vehicle Applications [en_US]
dc.type: thesis [en]
dc.description.degree: Doctor of Philosophy [en_US]
dc.contributor.supervisor: Yousefi, Shahram
dc.contributor.supervisor: Givigi, Sidney
dc.contributor.department: Electrical and Computer Engineering [en_US]
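
The abstract's use of planar inequality constraints to avoid spherical obstacles can be sketched as follows. This is a minimal illustration only, not the thesis's exact formulation: the function name and the specific linearization (a half-space tangent to the sphere at the point nearest the vehicle) are assumptions for this example.

```python
import numpy as np

def planar_obstacle_constraint(p, c, r):
    """Approximate a spherical obstacle (center c, radius r) by the
    half-space constraint a @ x >= b, linearized about the current
    vehicle position p. Illustrative sketch, not the thesis's method."""
    n = p - c
    n = n / np.linalg.norm(n)   # outward unit normal pointing toward the vehicle
    a = n                       # constraint normal
    b = n @ (c + r * n)         # plane tangent to the sphere's surface
    return a, b

# Usage: positions x with a @ x >= b lie on the vehicle's side of the plane,
# so the MPC optimizer cannot steer through the obstacle at this linearization.
a, b = planar_obstacle_constraint(np.array([3.0, 0.0, 0.0]),
                                  np.array([0.0, 0.0, 0.0]), 1.0)
print(a @ np.array([3.0, 0.0, 0.0]) >= b)   # the vehicle's own position is feasible
```

Because the half-space depends on the current vehicle position, it would be recomputed at each MPC step; the plane conservatively excludes the whole sphere while keeping the constraint linear.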


