Self-Driving Cars: Who Is Responsible for Safety and Liability?

In this blog post, we examine the safety and ethical issues surrounding self-driving cars, as well as the question of liability in the event of an accident, from various perspectives.


Minority Report is a film directed by Steven Spielberg, based on the 1956 short story of the same name by Philip K. Dick. Set in Washington, D.C., in 2054, the film depicts a society where the PreCrime system has been implemented to arrest criminals before they commit murder. For this science fiction film portraying a cold, bleak future society, Spielberg imagined and depicted various cutting-edge IT devices that seemed plausible at the time. Interestingly, the technologies featured in the film are gradually becoming a reality.
Minority Report features many intriguing scenes, but the most eye-catching is the one in which Tom Cruise's character, unable to drive because he is evading his pursuers, lets his car steer itself down the road.
When the film was released, autonomous driving was considered a technology of the distant future. However, since Google officially announced its self-driving car project in 2010, automotive and IT companies have actively invested in research on autonomous vehicles, and commercialization is gradually becoming a reality. As self-driving cars have become a trend, manufacturers are paying close attention to this field.
However, self-driving cars are not welcomed by everyone, and some question whether consumers can trust these vehicles. For example, in May 2016, a driver using Tesla's Autopilot feature died when his car collided with a tractor-trailer crossing the highway; the system had failed to distinguish the trailer's white side from the bright sky. The U.S. government later concluded that Tesla's Autopilot was not defective and that the driver bore significant fault for failing to take evasive action before the collision. As a result, Tesla avoided legal liability for the first fatal accident involving an autonomous vehicle.
However, accidents involving Tesla's autonomous driving systems have continued to occur since that incident. According to statistics from the U.S. National Highway Traffic Safety Administration (NHTSA), 736 traffic accidents related to autonomous driving systems occurred in the U.S. over the four-year period starting in 2019, and Tesla's Autopilot and Full Self-Driving systems accounted for 91% of them. Consequently, consumers are questioning the safety of autonomous driving, and a survey by the U.S. market research firm J.D. Power found that distrust of autonomous driving has increased.
Tesla's accidents have affected more than consumer sentiment. Although these accidents have given momentum to regulatory reform in various countries, controversy continues over liability, insurance claims, and legal regulation of autonomous vehicle accidents. Consider, for example, a fully autonomous vehicle in which the occupant simply sets the destination and the car performs every driving task. Should the occupant bear no responsibility for an accident? If so, who is responsible: the vehicle owner, the manufacturer, or the government, which bears a duty of oversight? The insurance industry argues that manufacturers should bear liability for accidents because they are in the best position to control the risks involved. The automotive industry, on the other hand, maintains that holding manufacturers 100% responsible for traffic accidents is excessively burdensome.

In Germany, the Autopilot feature in Tesla electric vehicles is prohibited because it is considered an incomplete test version. Meanwhile, South Korea, Japan, and countries in Europe are planning to establish standards defining the conditions under which autonomous vehicles may overtake or change lanes without the driver operating the steering wheel.
The issue with autonomous vehicles lies not only in legal liability; ethical judgment also comes into play. The ethical issues surrounding autonomous driving systems can be illustrated by a thought experiment presented on the TED-Ed YouTube channel. Suppose an autonomous vehicle must avoid an object that has fallen from a truck ahead. It faces three choices: continue straight and collide with the object; swerve right and collide with a motorcycle; or swerve left and collide with an SUV. In this scenario, a human driver might decide on reflex, but an autonomous vehicle acts according to judgments programmed in advance by its developers, as the sketch below illustrates. On what basis, then, do developers program these decisions? Could such a decision be viewed as premeditated murder? If that premise seems too extreme, consider instead a vehicle programmed to follow the ethical judgment of its passenger. Would that be a better choice than programming designed to minimize harm?
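To make the dilemma concrete, here is a minimal, purely hypothetical sketch of such a pre-programmed choice. The maneuver names, harm scores, and passenger weighting are all invented for illustration; no real autonomous-driving stack reduces the decision to a few hand-tuned numbers.

```python
# A minimal, hypothetical sketch of the dilemma above. The maneuvers,
# harm scores, and weighting are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    harm_to_passenger: float  # assumed scale: 0.0 (no harm) to 1.0 (fatal)
    harm_to_others: float     # same assumed scale

def choose_maneuver(options, passenger_weight=1.0):
    """Pick the option with the smallest weighted total harm.

    passenger_weight == 1.0 treats everyone's harm equally;
    passenger_weight > 1.0 encodes "protect the passenger first".
    This one number is the ethical choice made in advance.
    """
    return min(options, key=lambda m: passenger_weight * m.harm_to_passenger
                                      + m.harm_to_others)

options = [
    Maneuver("continue straight into the fallen object", 0.8, 0.0),
    Maneuver("swerve right into the motorcycle", 0.1, 0.9),
    Maneuver("swerve left into the SUV", 0.3, 0.4),
]

print(choose_maneuver(options, passenger_weight=1.0).name)  # -> swerve left into the SUV
print(choose_maneuver(options, passenger_weight=3.0).name)  # -> swerve right into the motorcycle
```

Note that the entire ethical stance lives in a single parameter: with equal weighting the car swerves into the SUV to minimize total harm, while a passenger-first weighting sends it into the motorcycle. The developer must commit to one of these values long before any accident occurs.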
To explore such ethical dilemmas, MIT has been running a public opinion survey game called the Moral Machine.
The Moral Machine is a platform designed to gather public perceptions of the ethical decisions made by artificial intelligence such as self-driving cars. It presents scenarios in which a driverless car must make an unavoidable ethical choice, sacrificing either a passenger or a pedestrian, and asks participants which outcome they find more acceptable. Attributes such as the social status, physical condition, and age of the passenger and the pedestrian are randomly assigned, and participants judge each case as outside observers (a toy version of this random assignment appears below). If self-driving cars were programmed on the basis of such survey results, would it be right to configure them to save as many lives as possible in an unavoidable accident, or is it more desirable to prioritize the lives of the passengers?
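As a rough illustration, here is a toy scenario generator in the spirit of the Moral Machine. The attribute lists are invented for this sketch; the real platform defines its own, much richer set of characters and situations.

```python
# A toy generator in the spirit of the Moral Machine survey.
# The attribute lists below are invented for illustration; the
# real platform defines its own characters and scenarios.
import random

ATTRIBUTES = {
    "age": ["child", "adult", "elderly"],
    "condition": ["athletic", "average", "frail"],
    "status": ["doctor", "office worker", "unemployed"],
}

def random_person():
    """Randomly assign the kinds of attributes the survey varies."""
    return {key: random.choice(values) for key, values in ATTRIBUTES.items()}

def make_scenario():
    """One forced choice: the car can save only one of the two."""
    return {"passenger": random_person(), "pedestrian": random_person()}

scenario = make_scenario()
print("An unavoidable accident. Whom should the car save?")
print("  passenger :", scenario["passenger"])
print("  pedestrian:", scenario["pedestrian"])
```

Aggregated over thousands of such judgments, the answers reveal whose lives respondents implicitly value more, which is exactly what makes the idea of programming cars from survey data so uncomfortable.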
And if programming based on such value judgments leads to an actual accident, can those who wrote the program be held free of liability for it?
I believe the legal issues surrounding self-driving cars can be resolved, at least to some extent, through agreements between individuals or between individuals and society. Ethical issues are different. Rapid technological advancement has consistently raised ethical questions: while science and technology enrich our lives, they often create situations that conflict with human ethics. Once self-driving cars are commercialized, the likelihood of life-threatening accidents caused by drowsy driving, drunk driving, reckless driving, or road rage will decrease, and smoother traffic flow will shorten travel times and free up leisure time. Yet self-driving cars cannot sidestep the ethical questions, and these questions extend beyond autonomous driving to artificial intelligence, robotics, and humanity as a whole. Whether we can weigh the value of one human life against another, and whether the value of animal life differs from that of human life, are questions that must be addressed before autonomous vehicles are commercialized.


About the author

Writer

I'm a "Cat Detective": I help reunite lost cats with their families.
I recharge over a cup of café latte, enjoy walking and traveling, and expand my thoughts through writing. By observing the world closely and following my intellectual curiosity as a blog writer, I hope my words can offer help and comfort to others.