Neuro-Symbolic Methods for Natural Language Inference and Question Answering

dc.contributor.author: Feng, Yufei
dc.contributor.department: Electrical and Computer Engineering
dc.contributor.supervisor: Zhu, Xiaodan
dc.contributor.supervisor: Greenspan, Michael
dc.date.accessioned: 2022-10-04T23:56:12Z
dc.date.available: 2022-10-04T23:56:12Z
dc.degree.grantor: Queen's University at Kingston
dc.description.abstract: One of the fundamental problems in deep learning research is how to design neural network models to incorporate logic and symbolic operations. Although deep neural network models have achieved state-of-the-art performance on multiple natural language processing benchmarks, those black-box models can hardly provide explanations for their inner mechanisms. They still lack the ability to perform systematic reasoning like human beings and generalize poorly to out-of-distribution samples. In this thesis, we attempt to overcome these limitations by designing neuro-symbolic models that combine neural network models with natural logic theory. At the lower level, we use neural networks to model the text representation and produce intermediate predictions, while at the higher level, we leverage symbolic operations to perform reasoning, which leads to the final prediction. We apply our neuro-symbolic models to solve the natural language inference (NLI) task, and we also explore ways to extend our method to question answering (QA). This thesis offers a set of contributions that address the problem of effectively combining deep neural networks with symbolic methods. The first contribution is a novel end-to-end differentiable natural logic model for NLI. Our proposed model achieves empirically competitive results on the Stanford NLI benchmark and multiple stress-test datasets. Our model also provides faithful explanations for its decisions based on natural logic. The second contribution is a novel neuro-symbolic NLI model, which overcomes the limitations of its predecessor by leveraging a well-designed natural logic program and reinforcement learning. We also propose an introspective revision algorithm that incorporates commonsense knowledge bases to alleviate the spurious reasoning problem and improve training efficiency. The third contribution is an extension of the neuro-symbolic method to multi-hop QA applications. We propose a model that accurately locates chains of useful evidence, which can be trained without direct supervision, and a neuro-symbolic QA model that performs natural-logic-style reasoning on the chains of evidence.
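To make the abstract's two-level architecture concrete: in natural logic (in the style of MacCartney and Manning), reasoning proceeds by composing semantic relations along a sequence of premise-to-hypothesis edits. The Python sketch below is only an illustration under that assumption; it uses a simplified, partial join table (unlisted relation pairs default to independence), omits the neural layer that would predict the per-edit relations, and is not the thesis's actual implementation.

# Illustrative sketch (not the thesis's code): composing MacCartney-style
# natural logic relations along an edit sequence. Relations:
#   eq (equivalence), fe (forward entailment), re (reverse entailment),
#   neg (negation), alt (alternation), cov (cover), ind (independence).
# This join table is partial; pairs not listed default to 'ind'.
JOIN = {
    ('fe', 'fe'): 'fe',   ('re', 're'): 're',
    ('neg', 'neg'): 'eq',                       # double negation
    ('fe', 'neg'): 'alt', ('re', 'neg'): 'cov',
    ('neg', 'fe'): 'cov', ('neg', 're'): 'alt',
    ('alt', 'neg'): 'fe', ('neg', 'alt'): 're',
}

def join(r1, r2):
    """Compose two relations; 'eq' is the identity, unknown pairs give 'ind'."""
    if r1 == 'eq':
        return r2
    if r2 == 'eq':
        return r1
    return JOIN.get((r1, r2), 'ind')

def infer_label(edit_relations):
    """Fold the per-edit relations into one relation, then map it to an NLI label."""
    state = 'eq'
    for r in edit_relations:
        state = join(state, r)
    if state in ('eq', 'fe'):
        return 'entailment'
    if state in ('neg', 'alt'):
        return 'contradiction'
    return 'neutral'

# Example: premise "A dog is running" vs. hypothesis "An animal is moving".
# In the thesis's setting, a neural module would predict one relation per
# aligned edit (dog -> animal: fe, running -> moving: fe); here they are given.
print(infer_label(['fe', 'fe']))  # entailment

In the models described in the abstract, the lower-level neural networks supply these intermediate per-edit predictions, and a symbolic composition step of this kind produces the final, explainable decision.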
dc.description.degree: PhD
dc.identifier.uri: http://hdl.handle.net/1974/30459
dc.language.iso: eng
dc.relation.ispartofseries: Canadian theses
dc.rights: Queen's University's Thesis/Dissertation Non-Exclusive License for Deposit to QSpace and Library and Archives Canada
dc.rights: ProQuest PhD and Master's Theses International Dissemination Agreement
dc.rights: Intellectual Property Guidelines at Queen's University
dc.rights: Copying and Preserving Your Thesis
dc.rights: This publication is made available by the authority of the copyright owner solely for the purpose of private study and research and may not be copied or reproduced except as permitted by the copyright laws without written authority from the copyright owner.
dc.rights: Attribution 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by/3.0/us/
dc.subject: Deep Learning
dc.subject: Neuro-Symbolic Method
dc.subject: Natural Language Processing
dc.subject: Machine Learning
dc.subject: Natural Language Inference
dc.subject: Question Answering
dc.subject: Explainable AI
dc.title: Neuro-Symbolic Methods for Natural Language Inference and Question Answering
dc.type: thesis
Files
Original bundle
Name: Yufei_Feng_202210_PhD.pdf.pdf
Size: 1.28 MB
Format: Adobe Portable Document Format