ARTIFICIAL NEURAL NETWORK BASED TRANSFER LEARNING FOR ROBOT KINEMATIC MODELLING
This thesis discusses the feasibility of applying transfer learning to the modelling of robot manipulators. Artificial neural networks (ANNs) are selected as the tool for transfer learning. For data generation, two types of datasets based on a SCARA-type robot are produced: a workspace dataset and a path dataset. ANNs with random initialization are trained on the source dataset, and the trained parameters (weights and biases) are then transferred to implement transfer learning. The investigation starts with the workspace datasets: transfer learning is conducted first between different robot configurations and then between manipulators with different geometric layouts. All trained weights and biases from the source neural network are transferred as the initialization of the target neural network; because the entire pretrained model is transferred to the target ANN, this method is named the fully pretrained ANN. Improved final training performance can be observed for most ANN configurations.

To verify the positive impact on fitting data governed by the same nonlinear relations, two different spiral paths are used to generate the target and source path datasets. Besides the fully pretrained ANN, three further transfer learning methods are proposed and simulated on the path datasets. In the second method, only part of the initial parameters are transferred from the trained ANN, while the remaining initial parameters are generated randomly. The third method freezes part of the ANN and initializes the remaining weights and biases randomly; the frozen weights and biases cannot be adjusted during training. The fourth method, the ANN with frozen and pretrained initialization, combines the two forms of initialization: initial guess and frozen initialization. For an extensive investigation, diverse ANN configurations are tested in the transfer process.
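The four initialization schemes above can be sketched in a minimal NumPy form. The thesis does not specify an implementation framework, so the layer sizes, the choice of which layer is transferred or frozen, and all variable names here are illustrative assumptions; freezing is represented by a per-layer trainability flag that a training loop would use to zero the corresponding gradient updates.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_params(layer_sizes):
    """Randomly initialise (weight, bias) pairs for an MLP (hypothetical sizes)."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

# Stand-in for a source network already trained on the source dataset.
source = random_params([2, 10, 10, 2])

# 1) Fully pretrained ANN: copy every weight and bias as the target's init.
fully = [(W.copy(), b.copy()) for W, b in source]

# 2) Partially pretrained: transfer the first layer, randomise the rest.
partial = [(source[0][0].copy(), source[0][1].copy())] + random_params([10, 10, 2])

# 3) Frozen partial: transfer the first layer and mark it untrainable;
#    the training loop skips updates where the flag is False.
frozen = [(source[0][0].copy(), source[0][1].copy())] + random_params([10, 10, 2])
trainable = [False] + [True] * (len(frozen) - 1)

# 4) Frozen and pretrained: copy everything, but freeze the first layer.
frozen_pretrained = [(W.copy(), b.copy()) for W, b in source]
trainable_fp = [False] + [True] * (len(frozen_pretrained) - 1)
```

In this representation the only difference between schemes 1 and 4 is the trainability mask, which matches the abstract's description of the fourth method as a combination of pretrained and frozen initialization.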
Training is performed both on the ANNs with pretrained initial parameters and on the ANNs with random initialization. To compare the rate of convergence, different performance targets are defined, and the training results of the randomly initialized and transfer-initialized ANNs are compared. It is concluded that, depending on the ANN configuration, the proposed transfer learning methods improve final performance to different degrees. Additionally, higher computing efficiency in reaching a given performance target can be observed for most ANNs with transfer learning, and training time can be shortened further by applying the transfer learning methods that contain frozen initialization.
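The convergence comparison above amounts to counting how many training epochs each network needs to first drop below a performance target. A minimal sketch of that measurement, with purely synthetic loss curves (the actual values are not taken from the thesis):

```python
import numpy as np

def epochs_to_target(loss_curve, target):
    """Return the first epoch at which the loss drops below the
    performance target, or None if the target is never reached."""
    hits = np.flatnonzero(np.asarray(loss_curve) < target)
    return int(hits[0]) if hits.size else None

# Illustrative curves only: a transferred initialization typically
# starts from a lower loss than a random one.
random_init   = [0.9, 0.5, 0.2, 0.09, 0.04]
transfer_init = [0.3, 0.12, 0.05, 0.02, 0.01]

print(epochs_to_target(random_init, 0.05))    # 4
print(epochs_to_target(transfer_init, 0.05))  # 3
```

Repeating this measurement for several target values, as the thesis does, gives a per-configuration picture of how much computation transfer learning saves.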