Source: 21st Century Wire

As it turns out, Google’s self-driving electric car is not as ‘idiot proof’ as they thought. It’s been causing accidents. Why police are remaining so tight-lipped about this trend is unknown, but there could be a classified ‘DARPA-like’ aspect to this new tech.

It’s all been a bit hush-hush on Google’s end; the company wants the public to believe that its driverless cars are no worse than human drivers behind the wheel.
Liability

This latest revelation also brings up the issue of liability. If a fatal accident is caused by a glitch in the car’s software or hardware, who is legally responsible? Google?

Google already has hundreds of millions of dollars invested in this technology and product line, and it wants to bring the project to market soon, within the next five years, so expect general curiosity and scrutiny to increase between now and then…

(Image credit: Steve Jurvetson)

The Switch

Four driverless vehicles — three from Google and one from Delphi Automotive — have been involved in accidents on California roads since the state began approving them for testing in public last year, the Associated Press reported Monday.

Nobody was hurt in the crashes, and only two of them took place while the cars were being controlled by a computer. But the accidents are sure to raise questions about the safety of driverless cars as they become more common. Unfortunately, under California law, the incident reports are kept under lock and key, and the companies did not release many details either. While this practice might protect the experimental programs, it robs consumers of critical information about accidents involving driverless cars, hurting the very future these companies are trying to build.

Proponents of self-driving cars argue that computer-driven vehicles can help improve automotive safety by reacting more quickly to oncoming dangers and keeping a better eye on the environment, reducing the risk of driver error. But right now, the public lacks objective data about whether that’s true in practice or even potentially true.

Earning drivers’ trust is going to be one of the biggest challenges for driverless car manufacturers. Giving up the steering wheel to a computer just won’t come naturally to many people. Nor will the prospect of having to share the road with machines that can make their own decisions.

Google and Delphi have said that the driverless cars were not at fault in any of the recent accidents. The crash involving Delphi’s vehicle, for instance, occurred when it was hit by another driver who had “traveled across the median,” said Kristen Kinley, a spokesperson for Delphi. At the time of the incident, the driverless car was under human control.

“Safety is our highest priority,” said Google in a statement. “Since the start of our program six years ago, we’ve driven nearly a million miles autonomously, on both freeways and city streets, and the self-driving car hasn’t caused a single accident.”

Add it all up and the message seems pretty clear: The autopilot was not the problem.

But what would really bolster people’s confidence is if the companies could prove that the autopilot performed well at preventing or avoiding a crash — not merely that it wasn’t the cause of a crash.

In two of the crashes, according to the AP, the self-driving features were engaged and the computer, not a human, was in control. These are the ones we ought to focus on. Understanding exactly how the self-driving cars behaved under these conditions — and in similar situations to come — will be key to showing whether driverless cars really are better or safer than humans behind the wheel.

Imagine a world in which driverless cars are the only types of car on the road. In this hypothetical universe, driverless cars are completely safe. Each vehicle behaves predictably according to its programming, so the interactions between those cars become predictable events as well. In this universe, the inattentiveness, speeding, road rage and the half-dozen other reasons accidents happen today will have been eradicated.

The world we inhabit today is far messier. For the foreseeable future, self-driving cars will have to respond to all the crazy things human drivers do, such as cutting other people off, texting behind the wheel, or driving drunk.

If the police reports show there was nothing the autopilot could have done to prevent a crash, that’s a clear point in favor of the manufacturers — especially if the reports also suggest that a human driver would have fared no better.

But suppose the crashes involved a head-on collision with an 18-wheeler whose lights were flashing and horn was blasting, and the driverless car missed all the signs. That would be a big problem.

The fact that we don’t really know, and that the state of California won’t release the reports, could become a setback for the adoption of driverless cars. We are left with voluntary disclosures by individual manufacturers to fill the gap. On Monday, Google published a post on Medium explaining that its cars have been in 11 “minor” accidents, seven of which occurred when another driver rear-ended them. The blog post also highlights cases where the autopilot successfully avoided crashes even as other drivers were behaving dangerously…

Continue this story at The Switch
