Evaluating Explanation Correctness in Legal Decision Making

Authors

Luo, Chu Fei
Bhambhoria, Rohan
Dahan, Samuel
Zhu, Xiaodan

Date

2022

Type

journal article

Language

en

Abstract

As machine learning models are extensively deployed across many applications, concerns are rising about their trustworthiness. Explainable models have become an important topic of interest for high-stakes decision making, but their evaluation in the legal domain remains seriously understudied; existing work lacks thorough feedback from subject matter experts to inform its evaluation. Our work aims to quantify the faithfulness and plausibility of explainable AI methods on several legal tasks, using computational evaluation and user studies that directly involve lawyers. The computational evaluation measures faithfulness, i.e., how close the explanation is to the model's true reasoning, while the user studies measure plausibility, i.e., how reasonable the explanation is to a subject matter expert. The overall goal of this evaluation is to provide a more accurate indication of whether machine learning methods can adequately satisfy legal requirements.
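The abstract does not specify which faithfulness metric is computed, but a common computational approach is deletion-based evaluation, as in the ERASER benchmark's comprehensiveness and sufficiency scores. The Python sketch below illustrates that idea under stated assumptions: predict_proba is a hypothetical callable mapping a token list to a vector of class probabilities, and rationale_idx is the set of token positions the explanation method highlights; neither name comes from the paper.

def comprehensiveness(predict_proba, tokens, rationale_idx, label):
    # Probability of the predicted label on the full input.
    full = predict_proba(tokens)[label]
    # Remove the tokens the explanation marks as important.
    without_rationale = [t for i, t in enumerate(tokens) if i not in rationale_idx]
    # A large probability drop suggests the model truly relied on
    # those tokens (higher = more faithful).
    return full - predict_proba(without_rationale)[label]

def sufficiency(predict_proba, tokens, rationale_idx, label):
    # Probability of the predicted label on the full input.
    full = predict_proba(tokens)[label]
    # Keep only the highlighted tokens.
    only_rationale = [t for i, t in enumerate(tokens) if i in rationale_idx]
    # A small probability drop suggests the rationale alone supports
    # the prediction (lower = more faithful).
    return full - predict_proba(only_rationale)[label]

In use, one would run the same classifier on the full and reduced inputs and compare: a faithful explanation yields a high comprehensiveness score and a low sufficiency score.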

Citation

Luo, Chu Fei et al. "Evaluating Explanation Correctness in Legal Decision Making" (2022). 35th Canadian Conference on Artificial Intelligence.

Publisher

Canadian Artificial Intelligence Association
