Theoretical and Practical Considerations of Learning Algorithms in Image-guided Interventions
Image-guided Interventions (IGI) have revolutionized medical practice and improved patients' quality of life by enabling less invasive surgeries and treatments. Recent advances in computing power and learning algorithms, e.g., Deep Learning (DL), have allowed automated learning-based systems to identify diseases with accuracies comparable to those of clinicians, adding further capabilities and potential to IGI. In a typical IGI, image registration methods are used to incorporate pre-operatively identified lesions and related information into the intra-operative procedure, making image registration a fundamental part of IGI. Learning algorithms are also utilized both to improve the integration of pre-operative and intra-operative information and to enhance the interpretation of intra-operative images, procedures, and guidance. The overall objective of this thesis is to identify and address several important considerations in each of these elements of IGI. In particular, for image registration, we propose solutions to enhance the explainability and interpretability of registration methods through a novel theoretical framework that connects conventional information-theoretic algorithms to newer deep metric-based methods; furthermore, we propose a new image registration approach based on deep probabilistic multi-class classifiers that can be used to estimate registration uncertainty. We test and evaluate our registration solutions on simulated neurosurgical data and in a study using radiation therapy data from our local health sciences centre. To enhance the interpretation of intra-operative imaging, we propose an application of unsupervised representation learning that accounts for the vagaries and noisy nature of gold-standard labeling in biopsy data.
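As a concrete illustration of the information-theoretic registration metrics mentioned above, the sketch below estimates the mutual information between a fixed and a moving image from their joint intensity histogram. This is a minimal, generic example for illustration only, not the thesis's actual implementation; the function name and histogram-based estimator are assumptions.

```python
import numpy as np

def mutual_information(fixed, moving, bins=32):
    """Plug-in estimate of mutual information between two images,
    computed from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)  # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)  # marginal p(y)
    nz = pxy > 0                         # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A registration method of this family searches over candidate spatial
# transforms T and keeps the one maximizing mutual_information(fixed, T(moving)).
```

An aligned image pair concentrates the joint histogram and yields a high score; misalignment spreads the histogram out and lowers it, which is what makes the quantity usable as a similarity measure.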
We demonstrate that by mapping biopsy samples with similar properties close to one another on a hyper-lattice, we can successfully cluster regions of malignant and benign tissue in the prostate. We also investigate different deep-learning techniques for fusing pre- and intra-operative information, and demonstrate the significant potential of multi-modal integration of MRI and ultrasound information to improve prostate cancer detection. Our solutions enable the generation of cancer probability maps that can augment prostate biopsies and improve the targeting of cancer foci. Finally, we develop an open-source, browser-based framework that makes the deployment of learning algorithms in IGI more accessible to practitioners and for educational purposes.
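The hyper-lattice mapping described above is in the spirit of a self-organizing map (SOM), where each lattice node holds a weight vector pulled toward the samples nearest to it, so similar samples land on neighboring nodes. Whether the thesis uses a SOM specifically is an assumption here; the following is a minimal generic sketch, not the thesis's method.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Fit a small self-organizing map on `data` (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    # Lattice coordinates, used by the Gaussian neighborhood function.
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in data[rng.permutation(len(data))]:
            t = step / n_steps                    # linear decay schedule
            lr = lr0 * (1 - t)
            sigma = sigma0 * (1 - t) + 1e-3
            # Best-matching unit: node whose weight vector is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Pull the BMU and its lattice neighbors toward the sample.
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1) / (2 * sigma**2))
            weights += lr * g[..., None] * (x - weights)
            step += 1
    return weights

def map_to_grid(weights, x):
    """Lattice coordinate of the best-matching node for sample x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

After training, samples with similar feature vectors map to nearby lattice nodes, so clusters (e.g., malignant vs. benign tissue signatures) occupy distinct regions of the grid.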
URI for this record: http://hdl.handle.net/1974/27722