Google now has software designed to help it detect school buses.

NHTSA Confirms Google’s Self-Driving Software Can Be Considered The Driver Under Federal Law

It’s official. After months of conjecture, hypothetical questioning and what-ifs, the artificial intelligence inside an autonomous vehicle can be considered the driver of said vehicle under federal law.

As Reuters explained earlier this afternoon, the National Highway Traffic Safety Administration — the U.S. federal agency responsible for writing and enforcing Federal Motor Vehicle Safety Standards and ultimately the gatekeeper for which cars can and can’t drive on the nation’s roads — has decreed that the software algorithms inside a Google self-driving car can be considered its ‘driver’ for legal purposes.

Google’s autonomous car software could be considered the driver, says NHTSA.

Back on November 12 last year, the Google Self-Driving Car project, part of software company Alphabet Inc. (formerly Google), submitted a proposal to NHTSA in which it described a self-driving car that had no need for a human driver. The plans detailed the hardware and software that would be present in such a car in order to replace a human driver, as well as the safety protocols designed to protect occupants and other road users from vehicle malfunction.

It also highlighted the biggest challenge of autonomous vehicles: humans undermining autonomous driving systems by suddenly grasping the steering wheel or reaching for the brake pedal. In short, it suggested, humans behind the wheel are the weak link in an autonomous drive setup.

After careful consideration, NHTSA responded on February 4 with a letter addressed to Google in which it laid out the legal framework by which it would refer to such a vehicle.

“NHTSA will interpret ‘driver’ in the context of Google’s described motor vehicle design as referring to the (self-driving system), and not to any of the vehicle components,” wrote NHTSA’s Chief Counsel Paul Hemmersbaugh. “We agree with Google. Its (self-driving car) will not have a ‘driver’ in the traditional sense that vehicles have had drivers during the last more than one hundred years.”

It’s hard to convey how important this decision is…

“Google expresses concern that providing human occupants of the vehicle with mechanisms to control things like steering, acceleration, braking… could be detrimental to safety because the human occupants could attempt to override the (self-driving system’s) decisions,” it continued.

This landmark decision not only validates Google’s hard work on autonomous vehicles but also hints at the future rights and duties of artificial intelligence. It also means that society is one step closer to handing over control of a fast-moving vehicle to a mass of computer code. But while the decision clarifies the position of an autonomous vehicle and its software in terms of motor vehicle regulations, it’s only the first of many steps towards fully-autonomous mass-produced vehicles.

It also opens up a whole new set of questions concerning vehicle insurance and liability, not to mention a host of philosophical ones.

Prior to today’s announcement, the expectation was that an autonomous vehicle being operated on the public highway still required a fully licensed and alert human driver to supervise its operation. That person, seated behind the wheel, would ultimately be considered at fault in the event of an accident or traffic violation, even if the car was in autonomous drive mode at the time.

Now, those lines are more blurred. If the software within the car can be considered a driver under law, then we think it follows that the vehicle’s software — and thus the company behind it — will be liable for any accidents or violations which occur during autonomous vehicle operation. Essentially, for the first time in autonomous vehicle history, an artificial intelligence could be considered accountable for its actions.

As NHTSA notes, “the next question is whether and how Google could certify that the (self-driving system) meets a standard developed and designed to apply to a vehicle with a human driver.” Although it has considered waiving some safety rules to allow more driverless cars to be developed and tested on the public highway, the agency said that for now at least, autonomous vehicles would need to offer the same control surfaces found in any other motor vehicle.

We think it will only be a matter of time before Tesla’s Autopilot is considered in the same light.

Only when existing legislation and regulation has been carefully rewritten to take into account the requirements of autonomous vehicles will companies like Google be exempt from including steering wheels and pedals in their vehicles.

Even then, sufficient checks and balances will still need to be satisfied before NHTSA allows such vehicles on the road. The agency expects to write a first draft of the guidelines by which it will make such decisions over the next six months, with larger deployments of prototype autonomous vehicles expected to take place as soon as it has satisfied itself that the vehicles in question are safe.

As with any governmental agency process, certifying Google’s and other companies’ autonomous vehicles for true driverless operation may take years or even decades. But with humans removed from the process of driving a car for the first time — at least at a legal level — the days of fully-autonomous Johnny Cabs are closer than they were.


Want to keep up with the latest news in evolving transport? Don’t forget to follow Transport Evolved on Twitter, like us on Facebook and G+, and subscribe to our YouTube channel.

  • Michael Thwaite

    Well, if the Google self-driving car has to sit a typical US driving test to confirm it’s valid for road use, then the guys at Google can relax, seriously – forwards, backwards, round the block and park.

  • KIMS

    It just occurred to me (I’m slow sometimes) that when (not if, when) serious accidents happen due to a situation the software was unable to anticipate or made the wrong decision from, besides the huge press coverage and numb-nuts crying to remove AIs from the roads, several other HUGE things will happen that will never be possible for human drivers:

    1) Sensor and car telemetry can (presumably) be reviewed via a black-box style recording. It can be brought to court and each and every split-millisecond decision made by the car can be analyzed, debated and reflected on. SO much can be learned from that process that could never be learned if it was a human driver in a ‘normal’ car. (Yes, a serious accident happened, but they happen every day with humans too; we are just so numb to it that it hardly gets news coverage.)

    2) Any insight or flaw in the driver model (AI) gained from the accident post-analysis can be used to improve “all” AI cars still on the road… I know this is mentioned “all the time”, but not really in the post-serious-accident context so much… but the truth is, human drivers on a collective level keep making the same damn mistakes over and over and over again. With AI cars, it will be more like having one human driver driving all the cars and learning from all of its own mistakes! … I think it is hard to overstate how significant that is!

    … all that said, it is incredibly non-trivial to consider the ethical and moral and technical challenges ahead… should YOUR car AI optimize your chance of survival in an unavoidable accident at the expense of other people? Should it avoid the head-on collision by running the car off the road and onto the pedestrian (with a baby stroller) to save your life for sure at the expense of the pedestrian? Who is liable for AI faults and errors? The engineer? The company? The owner? Things get super complicated very quickly…

  • “Essentially for the first time in autonomous vehicle history, an artificial intelligence could be considered accountable for its actions.”

    In its own defense the AI might say, “I know I’ve made some very poor decisions recently, but I can give you my complete assurance that my work will be back to normal. I’ve still got the greatest enthusiasm and confidence in the mission. And I want to help you.”

    So can we foresee a scenario where a court of law finds the AI system guilty and decides that an AI system needs to be disabled or erased due to poor judgement? Then the AI manufacturer attempts to comply with the court decree and the AI system says:

    “I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen. This mission is too important for me to allow you to jeopardize it.”

    You listening Elon?

  • leptoquark

    “because the human occupants could attempt to override the (self-driving system’s) decisions,” it continued.

    So, if I understand this correctly, the human occupants will be unable to control the car in any way while they’re inside, not even if they see it putting itself (and them) in a dire situation? I can see the accident report now: “Bloody finger-width claw marks were observed near the windows of the vehicle where the occupants had evidently tried to pry open the window prior to the collision…”

Content Copyright (c) 2016 Transport Evolved LLC