Using crowdsourcing to teach machines?

In a summer course on Intelligence and Security in Cyberspace, which I attended last week, one of the speakers (Enrique Ávila, director of INCIBE) talked about the conflicts that arise around the decisions of autonomous cars. If an autonomous car has to choose between two options, both of which have physically harmful consequences for someone, what should it do?

Example: the brakes of one of these cars fail and there are two options: A) crash into a wall, so that all the occupants of the car die, or B) swerve into the opposite lane, running over three people. What should the car do? This situation is called a “social dilemma”, and the first approach to it was made in the nineteenth century by the economist William Forster Lloyd.

To answer this question, MIT developed the “Moral Machine” (M.M.). The M.M. is a crowdsourcing experiment that aims to study the ethics of autonomous machines: different scenarios like the one described above are presented to the crowd, and an option must be chosen. Do you crash the car, causing the driver to die, or run over a doctor, a woman and a baby crossing the street? Do you run over two women and a man out exercising, accompanied by two doctors, or run over three elderly women?

Example of a moral dilemma in the MIT Moral Machine

Through this crowdsourcing initiative, in which the task is to choose the least bad option (since in the vast majority of cases there is no morally clean solution, such as crashing a car with no occupants), MIT pursues two objectives. On the one hand, to obtain an overview of how people think machines should decide in these moral dilemmas. On the other hand, through a section of their website, to compile and discuss possible scenarios in which these dilemmas appear; in other words, MIT lets you create your own custom dilemmas.
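As a rough illustration of what aggregating the crowd's answers could look like, here is a minimal sketch in Python. It simply counts votes per scenario; the option names, the responses and the tallying rule are my own illustrative assumptions, not the Moral Machine's actual analysis.

```python
from collections import Counter

def crowd_preference(choices: list[str]) -> tuple[str, float]:
    """Return the most frequently chosen option and its share of the votes."""
    counts = Counter(choices)
    option, votes = counts.most_common(1)[0]
    return option, votes / len(choices)

if __name__ == "__main__":
    # Hypothetical responses to the wall-vs-pedestrians dilemma described above.
    responses = [
        "crash_into_wall", "run_over_pedestrians", "crash_into_wall",
        "crash_into_wall", "run_over_pedestrians",
    ]
    option, share = crowd_preference(responses)
    print(f"{option}: chosen by {share:.0%} of participants")
```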

When “resolving” these dilemmas, several approaches can be taken: from doing nothing (if you weren’t there, someone would die anyway and it wouldn’t be your fault) to a purely utilitarian view, in which the right decisions are those that bring the greatest good to the greatest number of people (the least bad ones).
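To make the utilitarian extreme concrete, here is a minimal sketch that reduces each outcome to the number of lives lost and picks the option with the smallest loss. The class, the scenario descriptions and the occupant count are illustrative assumptions for this example, not anything prescribed by the Moral Machine.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    lives_lost: int

def least_bad(options: list[Outcome]) -> Outcome:
    """Pick the option that costs the fewest lives (a purely utilitarian rule)."""
    return min(options, key=lambda o: o.lives_lost)

if __name__ == "__main__":
    # Assuming, for the sake of the example, a car with four occupants.
    dilemma = [
        Outcome("crash into the wall, killing the four occupants", lives_lost=4),
        Outcome("swerve into the opposite lane, running over three pedestrians", lives_lost=3),
    ]
    print(least_bad(dilemma).description)
```

Of course, reducing each outcome to a single number is exactly the kind of simplification the rest of this post argues against.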

From my point of view, this crowdsourcing experiment is interesting for two reasons.

First of all, placing yourself in a situation where you know there is no correct solution makes you realize how complicated it is to design the behavior of an autonomous device that can affect people’s safety.

On the other hand, at the end of the experiment there is a small survey that, together with the solutions chosen in the dilemmas, gives a profile of your “pattern” of choice: whether you preferred to save adults rather than children, or people with greater social prestige (doctors) rather than those with less (thieves), and so on. This reveals the criteria you have followed.

Obviously, each person who does this experiment will have different criteria and different answers. Therefore, this experiment serves, as they say, to know the opinion of the crowd in a global way. But in no case should it be the basis for programming a device of this type, even in situations where there is widespread consensus: the crowd can also be wrong. I am clear about what I would choose between a cat and a person, for anthropological reasons … but the crowd could decide something different.

The crowd is able to discover solutions to complicated problems, provide innovative ideas, and so on. But its criteria cannot be used for moral issues. Morality, whether something is right or not, should not and cannot, as I said before, depend on general consensus.

In this sense, the experiment never says that the results will be used to program an autonomous device. They only drop a question in the final survey: “Do you think your answers will be used to teach intelligent machines?” I hope not.
