A Reinforcement Learning Approach to Predictive Control Design: Autonomous Vehicle Applications
This research investigates the use of learning techniques to select control parameters in the Model Predictive Control (MPC) of autonomous vehicles. The general problem of having a vehicle track a target while adhering to constraints and minimizing control effort is defined. We further expand the problem to consider a vehicle whose underlying dynamics are not well known. A game of Finite Action-Set Learning Automata (FALA) is used to select the weighting parameters in the MPC cost function. Fast Orthogonal Search (FOS) is combined with a Kalman Filter to identify the model while simultaneously estimating the system states. Planar inequality constraints are used to avoid spherical obstacles. The performance of these techniques is assessed for applications involving ground and aerial vehicles. Simulation and experimental results demonstrate that the combined FOS-FALA architecture reduces the overall number of design parameters that must be selected; the amount of reduction depends on the specific application. For the differential drive robot case considered here, the number of parameters was reduced from six to one. Furthermore, the learning strategy links the selection of these parameters to the desired performance, a significant improvement over the typical trial-and-error approach.
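To make the FALA idea concrete, the sketch below shows the standard linear reward-inaction (L_RI) update for a finite action-set learning automaton, where each discrete action could index a candidate MPC weighting. The function name, learning rate, and four-action setup are illustrative assumptions, not the configuration used in the thesis.

```python
import numpy as np

def fala_update(p, a, beta, lam=0.1):
    """Linear reward-inaction (L_RI) update for a finite action-set
    learning automaton.

    p    : probability vector over the finite action set
    a    : index of the action just taken
    beta : environment feedback in {0, 1} (1 = favourable)
    lam  : learning rate (assumed value, for illustration only)
    """
    p = p.copy()
    if beta == 1:              # reward: shift probability mass toward action a
        p = (1.0 - lam) * p
        p[a] += lam
    return p                   # inaction: p is unchanged when beta == 0

# Usage: repeatedly rewarding one action (e.g. the MPC weighting that
# yields the best closed-loop tracking) concentrates probability on it.
probs = np.full(4, 0.25)
for _ in range(50):
    probs = fala_update(probs, a=2, beta=1)
print(np.argmax(probs))  # → 2
```

Because the update rescales the whole vector by (1 - lam) before adding lam to the chosen entry, the probabilities always sum to one, which keeps the automaton's action-selection distribution valid at every step.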
URI for this record: http://hdl.handle.net/1974/24245