The Age of the Driverless Car Will Require the Robot Brain to Make Life and Death Decisions

July 29, 2015 – Today’s guest contributor is new to 21st Century Tech blog. His name is Milton Herman. Milton is a writer who lives in Phoenix, Arizona. He also calls himself a content strategist, a recent web term whose definition includes the planning, development and management of Internet content. Milton’s background is sports journalism but his treatment of driverless cars and the future bears no resemblance to a baseball report. Enjoy and please feel free to send us feedback on this and any other guest writer contribution.


Chris Urmson, director of Google’s Driverless Car program, said in a recent TED Talk that engineers have been working to eliminate the biggest “bug” in cars since the first model was crashed into a wall. The bug? Drivers. We know the safety statistics on autonomous cars: Google’s fleet has navigated more than a million miles and been involved in only 11 minor accidents (light damage, no injuries).

But what do we not know? For starters, we don’t know how laws, regulations and critical decisions will be handled as driverless cars begin to replace our current driving infrastructure. The U.S. National Highway Traffic Safety Administration (NHTSA), the agency responsible for ensuring safety on the nation’s roads, has admitted it is years away from creating driverless car regulations. Yet from California to Britain, autonomous vehicles are already legally on the street. Here’s a look at the status quo as well as at what we believe the future of laws and morals will be in this new era of autonomous driving.

Current Laws

There are more than 75 bills in preparation for driverless cars floating around Congress and the 50 state legislatures. These are loosely tracked in a database maintained by Stanford’s Center for Internet and Society. The bills, which range in status from indefinitely postponed to enacted, vary greatly. Here are a few examples:

  • Michigan has exempted car manufacturers from liability if a vehicle is modified in any way.
  • Tennessee has prohibited local governments from restricting the use of a vehicle solely on the basis of it being equipped with autonomous technology.
  • Louisiana has allowed for driverless vehicles to be on the road and permitted testing and research.
  • Nevada permits the use of a wireless device in any lawful autonomous car: a reverse texting-and-driving law, if you will.

This desultory patchwork of laws shows the disorganized nature of politics. It also shows just how far policy lags behind the technology. More importantly, these laws come nowhere close to answering the moral questions our society will have to face. Whichever way you look at it, relying on robots that drive creates issues only sci-fi films and novels have addressed.

Should a Robot Decide Who Lives or Dies?

Let’s get hypothetical in the name of science. The year is 2225 and autonomous cars are the norm. On a rainy, icy day an autonomous vehicle heading down a hill begins to slide on a patch of black ice. Normally the car would have accounted for this, but on this day a group of kids playing around has altered the ice patch before the car can recalibrate. As the car slides toward traffic and pedestrians it faces a decision: slam into a car carrying a family of four (with its crash system predicting a fatal collision) or swerve into a pedestrian. The moral implications make everyone uneasy. Now imagine the pedestrian is a world-renowned doctor working on a vaccine that could cure a deadly disease.
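The discomfort becomes concrete if you write the choice out as the kind of expected-harm calculation a planner would actually run. The sketch below is a toy illustration only; the probabilities, the option names and the very idea of scoring lives numerically are invented here, and that reduction is precisely what makes the scenario so fraught:

```python
# Toy sketch (not any real vehicle's logic): choosing the maneuver
# that minimizes expected fatalities. Every number is invented.
options = {
    "continue_into_car": {"people_at_risk": 4, "p_fatal": 0.3},
    "swerve_into_pedestrian": {"people_at_risk": 1, "p_fatal": 0.9},
}

def expected_harm(opt):
    """Expected number of fatalities for one maneuver."""
    return opt["people_at_risk"] * opt["p_fatal"]

# The planner coldly picks the smaller expected harm.
choice = min(options, key=lambda name: expected_harm(options[name]))
print(choice)  # swerve_into_pedestrian (0.9 expected deaths vs. 1.2)
```

The arithmetic is trivial; the hard part is that someone has to decide, in advance and in code, what counts as harm and whether a renowned doctor weighs more than anyone else.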

Many would recite Isaac Asimov’s first law of robotics. Asimov, a science fiction writer who began publishing in the 1940s, did far more critical thinking on the issue than our stagnant policymakers have. His first law states that a robot may not injure a human being or, through inaction, allow a human being to come to harm.

The situation above is hypothetical, but something like it is inevitable. Human error will cause driverless car error, which may cause human harm. Do we sacrifice human lives in creating a dependence on autonomously driven cars on our roadways? Google and car manufacturers are hiring leading ethicists and philosophers to answer this question. Dr. Chris Gerdes, who runs a driverless car testing lab for companies like Toyota, was recently quoted in Bloomberg: “There’s no sensor that’s yet been designed that’s as good as the human eye and the human brain.”

This means the following situations will likely occur:

  1. In all life-and-death, high-consequence situations, control will be handed back to the operator. With people working on laptops, texting or sleeping during drives, though, this could be a risky solution. It also raises the question: will we even know how to drive? The days of studying for a driving test online could be replaced with studying autonomous car manuals and rules.
  2. A high-risk operations center might be established, staffed by experts responsible for making these decisions and using technology to identify deadly situations before they happen. It sounds far-fetched, but it might be closer than you think.

Gerdes is currently testing automated vehicles programmed to follow ethical rules. The autonomous cars are learning to make split-second decisions, such as disobeying traffic laws when it makes sense: when passing a cyclist, for example, or when coming upon a double-parked car partially blocking the right of way. Continuous testing like this is essential to establishing the laws and regulations for the future of driving.
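The kind of rule Gerdes describes, where cyclist safety outranks strict lane compliance, can be sketched as a simple decision function. This is a minimal illustration, not code from any real vehicle; the function name, thresholds and maneuver labels are all invented for the example:

```python
# Toy sketch: a rule that permits a minor traffic-law violation
# (crossing the center line) when it reduces risk to a cyclist.
def choose_maneuver(cyclist_ahead, oncoming_gap_m, min_safe_gap_m=30.0):
    """Decide how to handle a cyclist partially blocking the lane.

    cyclist_ahead:  True if a cyclist is in the vehicle's path.
    oncoming_gap_m: distance in meters to the nearest oncoming vehicle.
    """
    if not cyclist_ahead:
        return "stay_in_lane"
    if oncoming_gap_m >= min_safe_gap_m:
        # Technically illegal, but the rule ranks giving the cyclist
        # clearance above strict compliance with lane markings.
        return "cross_center_line_to_pass"
    # No safe gap: slow down and wait rather than squeeze past.
    return "slow_and_follow"

print(choose_maneuver(True, 50.0))  # cross_center_line_to_pass
print(choose_maneuver(True, 10.0))  # slow_and_follow
```

Even this three-branch toy shows the design question regulators will face: someone must choose the safe-gap threshold and decide which laws a car is allowed to bend, and under what conditions.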


Len Rosen lives in Toronto, Ontario, Canada. He is a researcher and writer who has a fascination with science and technology. He is married with a daughter who works in radio, and a miniature red poodle who is his daily companion on walks of discovery.