If self-driving vehicles become widespread, society will have to grapple with a new burden: programming cars with preferences about whose lives to prioritize in a crash. A human driver makes this choice instinctively, but an algorithm can make it in advance. So should automakers and governments choose to save the old or the young? The many or the few?
A new paper published today by MIT researchers probes public opinion on these questions, collating data from an online quiz launched in 2016 called the Moral Machine. It asked users to make a series of ethical decisions about fictional car crashes, similar to the famous trolley problem. Nine separate factors were tested, including preferences for crashing into men or women, sparing more lives or fewer, killing the young or the old, hitting pedestrians crossing legally or jaywalkers, and even choosing between individuals of low or high status.
Millions of users from 233 countries and territories took the quiz, making 40 million ethical decisions in total. From this data, the study's authors found certain consistent global preferences: sparing humans over animals, more lives over fewer, and children over adults. They suggest these preferences could serve as "building blocks" for policymakers when creating laws for self-driving vehicles. But the researchers stressed that the study's results are not a template for algorithmic decision-making.
"What we are trying to show here is descriptive ethics: people's preferences in ethical decisions," Edmond Awad, a co-author of the paper, told The Verge. "But when it comes to what should be done, that should be left to experts in normative ethics."
The data also revealed significant variation in ethical preferences from country to country. These variations correlated with a number of factors, including geography (differences between European and Asian nations, for example) and culture (individualist versus collectivist societies).
These decisions will need to be made at some point, but self-driving technology still has a long way to go. Autonomy is in its infancy, and despite public perception, self-driving cars remain prototypes, not products. Experts also say it's unclear how these decisions would even be programmed in the future, but open consultation and debate are clearly needed.
"What happens with autonomous vehicles may set the tone for other AI and robotics, since they are the first to be integrated into society at scale," said Patrick Lin, director of the Ethics + Emerging Sciences Group at Cal Poly. "Lives are literally at stake, so it's important to have this conversation."
How does culture affect ethical preferences?
The results of the Moral Machine suggest there are a few shared principles for these ethical dilemmas. But the paper's authors also found variations in preferences that followed certain divides. None of these reversed the core principles (such as sparing the many over the few), but there were differences of degree.
The researchers found that respondents in Asian and Middle Eastern countries such as China, Japan, and Saudi Arabia were "far less" inclined to favor younger characters over older ones. Compared with respondents in Europe and North America, they also cared relatively less about sparing high-net-worth individuals.
The study's authors suggest this may be due to differences between individualist and collectivist cultures. In the former, where the distinct value of each individual is emphasized, respondents showed a stronger preference for sparing greater numbers of people. The weaker preference for sparing the young, by contrast, may stem from collectivist cultures, which emphasize respect for older members of the community.
This variation suggests that geographical and cultural proximity may allow groups of territories to converge on shared preferences for machine ethics.
However, other factors correlated with variations that weren't necessarily geographic: countries with lower per-capita income and weaker civic institutions, for example, showed distinct preferences of their own.
The authors stress that the results of the Moral Machine are by no means a definitive assessment of different countries' ethical preferences. For a start, the quiz is self-selecting: only those who are relatively tech-savvy were likely to take it. It also framed each scenario as a binary choice with certain outcomes: either one person dies or another does. In real life, these decisions are probabilistic, involving choices between outcomes of differing likelihood and severity. ("There's a small chance of hitting this pedestrian at low speed if you swerve around this truck," for example.)
Nonetheless, experts say this doesn't make such quizzes irrelevant. The artificial nature of these dilemmas is "a feature, not a bug," says Lin, "because they remove messy variables so we can focus on the particular factors we're interested in."
Even if cars will only rarely face a literal choice between colliding with object X or object Y, these stripped-down scenarios isolate the preferences at stake.
Turning ethics into legislation
But how close are these questions to actual legislation? And when will companies start programming ethical decisions into their vehicles?
The short answer to the second question is: they already do. This is true in the trivial sense that every algorithm makes decisions of some kind, and some of those decisions are ethical. But in more concrete terms, rough preferences are likely being coded in already, even if the companies involved are reluctant to discuss them openly.
Back in 2014, for example, Google X founder Sebastian Thrun said the company's prototype self-driving cars would choose to hit the smaller of two objects in the event of a crash. And in 2016, Google's Chris Urmson said its cars would "try hardest to avoid hitting unprotected road users: cyclists and pedestrians." The same year, a Mercedes-Benz executive reportedly said the company's autonomous vehicles would prioritize the lives of their passengers in a crash, though the company later denied this, saying he was misquoted.
It's understandable that companies are unwilling to be open about these decisions. For one thing, self-driving systems aren't yet sophisticated enough to distinguish the young from the old. State-of-the-art algorithms and sensors can make clear-cut distinctions, such as between a squirrel and a cyclist, but they can't manage much subtler ones. So whatever a company said it prioritized, whether humans or animals, passengers or pedestrians, someone would be upset. That's what makes these ethical dilemmas: there's no easy answer.
Andrea Renda, a senior research fellow at the Centre for European Policy Studies, says private companies are making most of these decisions themselves. "The private sector is taking action on this, but governments may find it's not enough," Renda told The Verge. In Europe, the EU is drafting ethical guidelines, which he says it will implement through "command-and-control measures, or through certification and co-regulation." In the US, Congress has published bipartisan principles for potential regulation, but it's unclear whether federal regulators even want to wade into ethical preferences for car crashes.
Renda says it's essential for the public to be involved in these debates, but he warns against relying solely on bottom-up consultation: governments and experts, he says, will need to make choices that reaffirm human rights.
In Germany, the only country so far to have proposed official guidelines on these questions, the problem is already evident. Lawmakers there have said that all human lives should be valued equally, and that any distinction based on personal characteristics such as age or gender should be prohibited. But as the MIT researchers point out, implementing this choice would run against the public's strong preference for sparing the young over the old. If a government actually enacts the policy, they ask, how will it handle the backlash the day an autonomous vehicle sacrifices children in a dilemma situation?
Awad argues that this kind of conflict is inevitable, but part of the process. "What is important is to make these decisions transparent," he says. "If this all happens behind the scenes and companies just say, 'Trust us,' I don't think people will accept that. Everyone needs to be involved in these decisions."