NHTSA Confirms Google’s Self-Driving Software Can Be Considered The Driver Under Federal Law

It’s official. After months of conjecture, hypothetical questioning and what-ifs, the artificial intelligence inside an autonomous vehicle can be considered the driver of said vehicle under federal law.

As Reuters explained earlier this afternoon, the National Highway Traffic Safety Administration — the U.S. federal agency responsible for writing and enforcing Federal Motor Vehicle Safety Standards and ultimately the gatekeeper for which cars can and can’t drive on the nation’s roads — has decreed that the software algorithms inside a Google self-driving car can be considered its ‘driver’ for legal purposes.

Google’s autonomous car software could be considered the driver, says NHTSA.

Back on November 12 last year, the Google Self-Driving Car project, part of software company Alphabet Inc. (formerly Google), submitted a proposal to NHTSA describing a self-driving car that had no need for a human driver. The plans detailed the hardware and software that would replace a human driver in such a car, as well as the safety protocols designed to protect occupants and other road users from vehicle malfunction.

It also highlighted the biggest challenge of autonomous vehicles: humans undermining autonomous driving systems by suddenly grasping the steering wheel or reaching for the brake pedal. In short, it suggested, humans behind the wheel are the weak link in an autonomous drive setup.

After careful consideration, NHTSA responded on February 4 with a letter addressed to Google in which it laid out the legal framework by which it would refer to such a vehicle.

“NHTSA will interpret ‘driver’ in the context of Google’s described motor vehicle design as referring to the (self-driving system), and not to any of the vehicle components,” wrote NHTSA’s Chief Counsel Paul Hemmersbaugh. “We agree with Google. Its (self-driving car) will not have a ‘driver’ in the traditional sense that vehicles have had drivers during the last more than one hundred years.”

It’s hard to convey how important this decision is…

“Google expresses concern that providing human occupants of the vehicle with mechanisms to control things like steering, acceleration, braking… could be detrimental to safety because the human occupants could attempt to override the (self-driving system’s) decisions,” it continued.

This landmark decision not only validates Google’s hard work on autonomous vehicles but also speaks to the future rights and duties of artificial intelligence. It means society is one step closer to handing over control of a fast-moving vehicle to a mass of computer code. But while the decision clarifies the position of an autonomous vehicle and its software in terms of motor vehicle regulations, it is only the first of many steps toward fully autonomous mass-produced vehicles.

It also opens up a whole new set of questions concerning vehicle insurance and liability, not to mention a host of philosophical ones.

Prior to today’s announcement, the expectation was that an autonomous vehicle operated on the public highway still required a fully licensed and alert human driver to supervise its operation. That person, seated behind the wheel, would ultimately be considered at fault in the event of an accident or traffic violation, even if the car was in autonomous drive mode at the time.

Now, those lines are more blurred. If the software within the car can be considered a driver under law, then we think it follows that the vehicle’s software — and thus the company behind it — will be liable for any accidents or violations which occur during autonomous vehicle operation. Essentially, for the first time in autonomous vehicle history, an artificial intelligence could be considered accountable for its actions.

As NHTSA notes, “the next question is whether and how Google could certify that the (self-driving system) meets a standard developed and designed to apply to a vehicle with a human driver.” Although it has considered waiving some safety rules to allow more driverless cars to be developed and tested on the public highway, the agency said that for now at least, autonomous vehicles would need to offer the same control surfaces found in any other motor vehicle.

We think it will only be a matter of time before Tesla’s autopilot is considered in the same light.

Only when existing legislation and regulation have been carefully rewritten to account for the requirements of autonomous vehicles will companies like Google be exempt from including steering wheels and pedals in their vehicles.

Even then, sufficient checks and balances will still need to be satisfied before NHTSA allows such vehicles on the road. The agency expects to write a first draft of the guidelines by which it will make such decisions over the next six months, with larger deployments of prototype autonomous vehicles expected as soon as it has satisfied itself that the vehicles in question are safe.

As with any governmental agency, the process of certifying Google’s and other autonomous vehicles for true driverless operation may take years or even decades to fully mature. But with humans removed from the process of driving a car for the first time — at least at a legal level — the days of fully autonomous Johnny Cabs are closer than they were.

