Technological Moral Proxies and the Ethical Limits of Automating Decision-Making in Robotics and Artificial Intelligence
Humans have been automating increasingly sophisticated behaviours for some time now. We have machines that lift and move things, that perform complex calculations, and even machines that build other machines. More recently, technological advances have allowed us to automate increasingly sophisticated decision-making processes. We have machines that decide when and how to use lethal force, when and how to perform life-saving medical interventions, and when and how to maneuver cars in chaotic traffic. Clearly, some automated decision-making processes have serious ethical implications—we are now automating ethical decision-making. In this dissertation, I identify and explore a novel set of ethical implications that stem from automating ethical decision-making in emerging and existing technologies.

I begin the argument by setting up an analogy between moral proxies in healthcare, who are tasked with making ethical decisions on behalf of those who cannot, and technological moral proxies: those machines (artefacts) to which we delegate ethical decision-making. The technological moral proxy is a useful metaphor. It allows us to import the many established norms surrounding moral proxies in bioethics to inform the ethics, design, and governance of certain machines. As is the case with moral proxies in healthcare, we need norms to help guide the design and governance of machines that make ethical decisions on behalf of their users. Importantly, I argue that we need a methodology and framework to ensure we aren't subjecting technology users to unacceptable forms of paternalism.

Having set up the analogy, I situate the discussion within various theories in science and technology studies (STS). STS provides a rich philosophical vocabulary to help ground the particular ethical implications, including implications for designers and engineers, raised by machines that automate ethical decision-making. Thus, STS helps to frame the issues I raise as design and governance issues.
I then provide a rudimentary taxonomy of delegation to help distinguish between automated decision-making that is ethically permissible and that which is impermissible. Finally, I combine the various pieces of the argument into a practical "tool" that could be used in design or governance contexts to help guide an ethical analysis of robots that automate ethical decision-making.
Showing items related by title, author, creator and subject.

Robinson, DENYS (2015-10-09): Contemporary business ethics asks the question: what moral responsibilities do actors in a market economy have? Specifically, what obligations do corporate managers have? In this paper I consider a new method for answering ...

Taylor, MATTHEW (2015-10-03): This thesis argues that the strongest account of moral rights entails that animals and other marginal cases hold rights. The thesis contends that mutual advantage social contract theories offer the strongest account of ...

Wayne, Katherine (2013-09-17): When it comes to potential children, is to love them to leave them be (nonexistent)? I examine the possibility of virtuous reproduction, as well as some more basic theoretical issues surrounding the nature of moral goodness ...