It’s a question that scientists, insurance brokers and auto industry executives have posed time and time again since the dawn of self-driving cars: what happens when a self-driving car gets into an accident?
Depending on whom you ask, the answer varies, though for now it’s usually a hypothetical, sometimes ethically charged one. For some, the thought that there’s any chance an autonomous vehicle could be involved in a crash is simply proof that autonomous vehicles aren’t ready for prime time. For others, the inevitability of a self-driving car being in an accident does not mean that the car itself is flawed, but rather that it should do everything it can to protect its occupants and other road users, sacrificing itself if necessary in accordance with the Three Laws of Robotics.
Yet as the Associated Press reports, autonomous drive vehicles being tested on the roads of California under its autonomous vehicle test program are getting involved in accidents just like any other car on the road, with three accidents reported involving Google’s fleet of self-driving cars and a fourth involving one of Delphi Automotive’s self-driving cars.
To date, all of the accidents involving Google autonomous vehicles have been “caused by human error and inattention.”
These reported accidents — mandatory under California law — might be enough to justify the suspicions of autonomous-drive skeptics. Indeed, tell anyone that Google openly confirms that its fleet of 23 self-driving Lexus SUVs has amassed “a handful of minor fender-benders,” and it’s easy to see their imagination run amok.
We’d think the same too, save for one simple fact: in each and every accident reported in California to date involving an autonomous vehicle, the car and its autonomous drive software have not been at fault.
As the Associated Press details, the breakdown for accidents is pretty telling. Of the four accidents recorded — three involving Google vehicles and one involving a Delphi Automotive self-driving car — two occurred when a human was driving. Two occurred when the vehicles were in autonomous drive mode.
Although Google would not speak in-depth about the accidents, an anonymous source familiar with the circumstances of all four collisions said that in each case the autonomous vehicles were travelling at speeds of less than 10 mph. In addition, an official Google statement, short and sweet, confirmed that all of the accidents involving its vehicles thus far were “caused by human error and inattention.” Similarly, Delphi Automotive, more forthcoming about its single autonomous vehicle accident, says that one of its two autonomous Audi SQ5 SUVs was “moderately damaged” when it was broadsided by another car while waiting to make a left turn.
In the case of Delphi Automotive, its car was not in autonomous drive mode at the time of the accident, and since it was broadsided while waiting to turn, its human driver was not at fault either.
It isn’t clear if the other non-autonomous accident — which logic dictates involved a Google car — was also no fault of the human behind the wheel. But since the two remaining accidents occurred with the car in charge rather than the person, and Google has cleared the car of any wrongdoing, we can conclude that the autonomous-mode accidents were also no-fault.
In each case, the autonomous vehicle hasn’t careened off course. It hasn’t caused a rampage of destruction. And it hasn’t caused an accident, only been involved in one.
Google’s own data suggests it has suffered three accidents in its fleet over the course of around 140,000 miles. That’s higher than the national rate for reported accidents of around 0.3 accidents per 100,000 miles as per the National Highway Traffic Safety Administration.
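A quick back-of-the-envelope calculation, using only the figures cited above, shows what that comparison looks like in per-100,000-mile terms (the ~140,000-mile figure and 0.3 rate are the approximations reported here, not precise statistics):

```python
# Compare Google's fleet accident rate with the national rate for
# *reported* accidents, normalized to accidents per 100,000 miles.
google_accidents = 3
google_miles = 140_000  # approximate fleet mileage cited above

national_rate_per_100k = 0.3  # NHTSA figure for reported accidents, as cited

google_rate_per_100k = google_accidents / google_miles * 100_000

print(f"Google fleet: ~{google_rate_per_100k:.1f} reported accidents per 100,000 miles")
print(f"National average: ~{national_rate_per_100k} reported accidents per 100,000 miles")
```

That works out to roughly 2.1 reported accidents per 100,000 miles for Google’s fleet, several times the national figure — though, as the next point explains, the two numbers aren’t directly comparable.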
But as Google points out, it must notify the relevant agencies every time one of its self-driving cars is involved in a collision. Thousands of accidents — especially minor ones — go unreported every day in the busiest U.S. cities, settled without involving either the long arm of the law or the complexity of an insurance claim.
Questions over the safety of autonomous cars will continue for many years to come, and some skeptics won’t be appeased by the data thus far.
For those who are eagerly following the path of autonomous vehicles to market, however, the signs are promising: for now, the only person at fault is the human, suggesting that Tesla CEO Elon Musk’s prediction for the future of autonomous vehicles may be correct.
In the future, it will be ourselves — not our cars — that we don’t trust driving.