Nature. 2018 Nov;563(7729):59-64. doi: 10.1038/s41586-018-0637-6. Epub 2018 Oct 24.

The Moral Machine experiment.

Author information

1. The Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA.
2. Department of Human Evolutionary Biology, Harvard University, Cambridge, MA, USA.
3. Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada. shariff@psych.ubc.ca.
4. Toulouse School of Economics (TSM-R), CNRS, Université Toulouse Capitole, Toulouse, France. jean-francois.bonnefon@tse-fr.eu.
5. The Media Lab, Massachusetts Institute of Technology, Cambridge, MA, USA. irahwan@mit.edu.
6. Institute for Data, Systems & Society, Massachusetts Institute of Technology, Cambridge, MA, USA. irahwan@mit.edu.

Abstract

The rapid development of artificial intelligence has raised concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour. To address this challenge, we deployed the Moral Machine, an online experimental platform designed to explore the moral dilemmas faced by autonomous vehicles. The platform gathered 40 million decisions in ten languages from millions of people in 233 countries and territories. Here we describe the results of this experiment. First, we summarize global moral preferences. Second, we document individual variations in preferences based on respondents' demographics. Third, we report cross-cultural ethical variation and uncover three major clusters of countries. Fourth, we show that these differences correlate with modern institutions and deep cultural traits. We discuss how these preferences can contribute to developing global, socially acceptable principles for machine ethics. All data used in this article are publicly available.

PMID: 30356211
DOI: 10.1038/s41586-018-0637-6
