Sci Eng Ethics. 2017 Aug;23(4):951-967. doi: 10.1007/s11948-016-9833-7. Epub 2016 Nov 30.

Who Should Decide How Machines Make Morally Laden Decisions?

Author information

1. John Molson School of Business, Concordia University, Montréal, Canada. dominic.martin@concordia.ca.
2. Department of Philosophy, McGill University, Montréal, Canada. dominic.martin@concordia.ca.

Abstract

Who should decide how a machine will decide what to do when it is driving a car, performing a medical procedure, or, more generally, when it is facing any kind of morally laden decision? More and more, machines are making complex decisions with a considerable level of autonomy. We should be much more preoccupied by this problem than we currently are. After a series of preliminary remarks, this paper will go over four possible answers to the question raised above. First, we may claim that it is the maker of a machine that gets to decide how it will behave in morally laden scenarios. Second, we may claim that the users of a machine should decide. Third, that decision may have to be made collectively or, fourth, by other machines built for this special purpose. The paper argues that each of these approaches suffers from its own shortcomings, and it concludes by showing, among other things, which approaches should be emphasized for different types of machines, situations, and/or morally laden decisions.

KEYWORDS:

Artificial intelligence; Collective decision-making; Economic efficiency; Ethics; Market freedom; Moral agency; Public policies; Regulation; Self-driving car

PMID: 27905083
DOI: 10.1007/s11948-016-9833-7
[Indexed for MEDLINE]
