Technological Moral Proxies and the Ethical Limits of Automating Decision-Making in Robotics and Artificial Intelligence

Authors

Millar, Jason

Date

2015-10-29

Type

thesis

Language

eng

Keyword

human-robot interaction, science and technology studies, autonomous cars, internal cardiac defibrillators, bioethics, social robotics, anthropomorphism, applied ethics, robotics governance, design ethics, robot ethics, engineering ethics, machine ethics

Abstract

Humans have been automating increasingly sophisticated behaviours for some time now. We have machines that lift and move things, machines that perform complex calculations, and even machines that build other machines. More recently, technological advances have allowed us to automate increasingly sophisticated decision-making processes. We have machines that decide when and how to use lethal force, when and how to perform life-saving medical interventions, and when and how to manoeuvre cars in chaotic traffic. Clearly, some automated decision-making processes have serious ethical implications: we are now automating ethical decision-making. In this dissertation, I identify and explore a novel set of ethical implications that stem from automating ethical decision-making in emerging and existing technologies. I begin the argument by setting up an analogy between moral proxies in healthcare, who are tasked with making ethical decisions on behalf of those who cannot, and technological moral proxies, those machines (artefacts) to which we delegate ethical decision-making. The technological moral proxy is a useful metaphor: it allows us to import the many established norms surrounding moral proxies in bioethics to inform the ethics, design, and governance of certain machines. As is the case with moral proxies in healthcare, we need norms to guide the design and governance of machines that make ethical decisions on behalf of their users. Importantly, I argue that we need a methodology and framework to ensure we are not subjecting technology users to unacceptable forms of paternalism. Having set up the analogy, I situate the discussion within various theories in science and technology studies (STS). STS provides a rich philosophical vocabulary that grounds the particular ethical implications raised by machines that automate ethical decision-making, including implications for designers and engineers. STS thus helps to frame the issues I raise as design and governance issues. I then provide a rudimentary taxonomy of delegation to help distinguish between automated decision-making that is ethically permissible and that which is impermissible. Finally, I combine the various pieces of the argument into a practical “tool” that could be used in design or governance contexts to guide an ethical analysis of robots that automate ethical decision-making.

Description

Thesis (Ph.D., Philosophy) -- Queen's University, 2015-10-29

License

Queen's University's Thesis/Dissertation Non-Exclusive License for Deposit to QSpace and Library and Archives Canada
ProQuest PhD and Master's Theses International Dissemination Agreement
This publication is made available by the authority of the copyright owner solely for the purpose of private study and research and may not be copied or reproduced except as permitted by the copyright laws without written authority from the copyright owner.
