Learning to Reason over Natural Language via Accessing Diverse Knowledge Resources

Date
Authors
Yang, Xiaoyu
Keyword
Natural Language Processing, Artificial Intelligence, Deep Learning
Abstract
Reasoning is the capability of drawing new conclusions from existing information or knowledge, and it has long been a central topic in artificial intelligence. Intelligent systems that can understand natural language input and reason over it to produce conclusions are highly desirable and can benefit a wide range of real-life applications. This dissertation centers on knowledge-grounded natural language reasoning, in which a reasoning system aims to verify whether a natural language hypothesis is supported by a premise. The premise can be a natural language sentence, a passage of text, or a structured table. In general, systems that succeed at these tasks must understand human language and possess certain reasoning skills, including but not limited to performing numerical reasoning, acquiring essential world knowledge, and composing multiple pieces of evidence to arrive at conclusions.

Recent years have seen significant progress by deep neural networks across many natural language processing tasks. Deep neural networks are effective at learning from training data, and they can benefit further from pre-training on large-scale text corpora. This dissertation focuses on neural-network approaches and targets several key challenges that existing deep neural networks face on natural language reasoning tasks: (1) How can they handle natural language reasoning that involves numerical inference? (2) How can they effectively solve compositionally complex problems that require multi-step information aggregation or symbolic operations? (3) Can pre-trained deep neural models further benefit from human-curated knowledge bases on reasoning tasks, and if so, how?

To this end, we propose that incorporating symbolic evidence obtained from symbolic program executions into neural networks can benefit numerical reasoning. Symbolic programs are not only formal meaning representations of natural language sentences but also reveal the compositional structure of those sentences; thus, the programs can guide the automatic decomposition of complex sentences. We also propose novel approaches showing that injecting additional lexical knowledge into pre-trained networks brings extra benefits on reasoning tasks. Finally, we propose to evaluate advanced neural networks through the lens of causal reasoning and construct a counterfactual reasoning benchmark to support future research.
External DOI