A new study says that while autonomous vehicle technology has great promise to reduce crashes, it may not be able to prevent all mishaps caused by human error.
Auto safety experts say humans cause about 94% of U.S. crashes, but the Insurance Institute for Highway Safety study says computer-controlled robocars will only stop about one-third of them.
The group says that while autonomous vehicles eventually will identify hazards and react faster than humans, and they won’t become distracted or drive drunk, stopping the rest of the crashes will be a lot harder.
“We’re still going to see some issues even if autonomous vehicles might react more quickly than humans do. They’re not going to always be able to react instantaneously,” said Jessica Cicchino, an institute vice president of research and co-author of the study.
The IIHS studied more than 5,000 crashes with detailed causes collected by the National Highway Traffic Safety Administration, separating out those caused by “sensing and perceiving” errors such as driver distraction, impaired visibility or failure to detect hazards until it was too late. Researchers also separated crashes caused by human “incapacitation,” including drivers impaired by alcohol or drugs, those who fell asleep and drivers with medical problems. The study found that self-driving vehicles can prevent those.
However, the robocars will not be able to avoid the others, including prediction errors such as misjudging how quickly another vehicle is going, planning errors such as driving too fast for road conditions, and execution errors such as incorrect evasive maneuvers or other mistakes in controlling the vehicle.
For instance, if a cyclist or another vehicle suddenly veers into an autonomous vehicle’s path, it may not be able to stop or steer out of the way in time, Cicchino said. “Autonomous vehicles need to not only perceive the world around them perfectly, they need to respond to what’s around them as well,” she said.
Just how many crashes are prevented, Cicchino said, depends a lot on how autonomous vehicles are programmed. If the robocars obey all traffic laws, including speed limits, additional crashes would be prevented. But if artificial intelligence allows them to drive and react more like humans do, then fewer crashes will be stopped, she said.
“Building self-driving cars that drive as well as people do is a big challenge in itself,” IIHS Research Scientist Alexandra Mueller said in a statement. “But they’d actually need to be better than that to deliver on the promises we’ve all heard.”
Partners for Automated Vehicle Education, a group that includes many self-driving vehicle companies as members, said Thursday that the study mistakenly assumes superior perception and lack of distraction are the only ways autonomous vehicles can drive better than humans. For example, autonomous vehicles can be programmed to never break the traffic laws that the study blames for 38% of crashes.
“The assumption that these behaviors could be altered by passengers in ways that so dramatically reduce safety is inconsistent with what our members tell us about the culture they bring to AV development,” the group, whose members include Ford, General Motors, Waymo, Lyft, Daimler, Volkswagen and others, said in a statement.
The study’s own numbers show that autonomous vehicles would prevent 72% of crashes, the group said, but the vehicles are so complex that their ultimate impact is only a guess.
Yet Missy Cummings, a Duke University professor of robotics and human factors who is familiar with the study, said preventing even one-third of human-caused crashes gives the technology too much credit. She said even vehicles with laser, radar and camera sensors don’t always perform flawlessly in all conditions.
“There is a probability that even when all three sensor systems come to bear, that obstacles can be missed,” Cummings said. “No driverless car company has been able to do that reliably. They know that, too.”
Researchers and people in the autonomous vehicle business never expected the technology to be capable of preventing all crashes now caused by humans, she said, calling that “layman’s conventional wisdom that somehow this technology is going to be a panacea that is going to prevent all death.” IIHS researchers reviewed the crash causes and determined which ones could be prevented, assuming all vehicles on the road were self-driving. Even fewer crashes will be prevented while self-driving cars share the road with human-driven vehicles, she added.
The Virginia-based IIHS is a nonprofit research and education organization funded by auto insurers.
In California alone, more than 60 companies have applied to test autonomous vehicles, but none has yet started a fully driverless, large-scale ride-hailing service without human backup drivers. Several companies, including Alphabet Inc.’s Waymo and General Motors’ Cruise, had pledged to do so over the past two years, but those plans were delayed when the industry pulled back after an Uber automated test vehicle hit and killed a pedestrian in Tempe, Arizona, in March 2018. Last year Tesla Inc. CEO Elon Musk vowed to have a fleet of autonomous robotaxis running by 2020, but recently he said he hopes to deploy the system, with humans monitoring it, in early 2021, depending on regulatory approval.