Building Interpretable Learning Models With Evolutionary Algorithms

Authors

Rochwerg, Ron

Date

2024-10-02

Type

thesis

Language

eng

Keyword

XAI, Evolutionary Computing, Linear Genetic Programming, Interpretable Models

Abstract

The field of explainable artificial intelligence has gained substantial attention in response to concerns about trust and accountability in machine learning and artificial intelligence models. Much research focuses on explaining black-box models with post hoc explanations, but these are only approximations and can be incorrect. Intrinsically interpretable models admit faithful explanations, but they typically suffer from poor predictive performance and therefore model data with multiple underlying generative rules poorly. We propose an evolutionary algorithm that uses a multi-objective approach to partition the decision boundary of classification problems, which makes it possible to train multiple interpretable prediction models on disjoint sections of the data space. The multi-objective boundary partitioning algorithm was evaluated on synthetic data with various interpretable models. The results show an increase in predictive performance while preserving the interpretability of the individual models.
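
To make the idea concrete, below is a minimal sketch of boundary partitioning with per-partition interpretable models. Every specific choice in it is a hypothetical stand-in, not the thesis's actual method: the nearest-centroid partition encoding, the depth-1 decision trees used as the interpretable base models, and the weighted-sum scalarization of the two objectives are illustrative simplifications (the keywords indicate the thesis itself builds on Linear Genetic Programming and a true multi-objective search).

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)

    # Synthetic two-class data with multiple generative rules: an XOR-style
    # boundary that no single depth-1 tree (stump) can capture globally,
    # but that per-partition stumps can.
    X = rng.normal(size=(400, 2))
    y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)

    def assign(X, centroids):
        # Partition points by nearest centroid (a hypothetical encoding of
        # a data-space partition; the thesis's encoding may differ).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        return d.argmin(axis=1)

    def objectives(X, y, centroids):
        # Objective 1: training accuracy of one shallow interpretable model
        # (here a stump) fitted independently on each partition.
        # Objective 2: number of partitions, a proxy for overall complexity.
        labels = assign(X, centroids)
        correct = 0
        for k in range(len(centroids)):
            mask = labels == k
            if mask.sum() < 5:  # near-empty partitions earn no credit
                continue
            model = DecisionTreeClassifier(max_depth=1).fit(X[mask], y[mask])
            correct += (model.predict(X[mask]) == y[mask]).sum()
        return correct / len(y), len(centroids)

    # A tiny evolutionary loop over variable-size centroid sets.
    pop = [rng.normal(size=(rng.integers(2, 6), 2)) for _ in range(20)]
    for gen in range(30):
        scored = sorted(
            ((objectives(X, y, c), c) for c in pop),
            # The thesis uses a genuine multi-objective approach; this
            # weighted-sum scalarization is a simplification for brevity.
            key=lambda t: t[0][0] - 0.01 * t[0][1],
            reverse=True,
        )
        parents = [c for _, c in scored[:5]]
        # Mutate parent centroid sets with Gaussian noise to refill the population.
        pop = parents + [p + rng.normal(scale=0.3, size=p.shape)
                         for p in parents for _ in range(3)]

    acc, n_parts = objectives(X, y, scored[0][1])
    print(f"best training accuracy {acc:.2f} using {n_parts} partitions")

Run as written, the search typically recovers a small set of partitions on which each stump is accurate, illustrating the abstract's claim: simple models that fail globally can perform well once the space is partitioned along the decision boundary.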

License

Queen's University's Thesis/Dissertation Non-Exclusive License for Deposit to QSpace and Library and Archives Canada
ProQuest PhD and Master's Theses International Dissemination Agreement
This publication is made available by the authority of the copyright owner solely for the purpose of private study and research and may not be copied or reproduced except as permitted by the copyright laws without written authority from the copyright owner.
Attribution-ShareAlike 4.0 International
