Will self driving cars endanger human lives?
Self driving cars are coming, and they are coming fast (but not exceeding the speed limit, of course!). This has provoked a frenzy of concern about how they will behave. More specifically, will self driving cars endanger human lives? I find this ludicrous, given how many people are killed by human drivers. But that said, there is still a compelling argument that we need a code of ethics for robotic cars.

And that was true even before millions of people decided it would be a great use of their time to text-while-driving.
For example, 30-year-old Christopher Gard killed 48-year-old Lee Martin in August 2015. Martin was competing in a ten-mile cycling time trial event in Bentley, Hants, England. Gard killed him by mowing him down at 65 mph with his van, while texting. And it gets worse: Gard had EIGHT previous convictions for driving while texting. Why did he still have a driver’s license? Well gee, six weeks before killing Martin, Gard had “promised” the magistrate that he would never again text while driving. Well, he promised, right – what else could the magistrate have done?

That was a terrible and unnecessary tragedy for that family, but Martin’s family is not the only one that is mourning. Last year alone, 35,092 people were killed on US highways (up 7.2% from the previous year). Self driving cars will have much faster reflexes, will obey the law, and will never drink and drive, or text and drive. I would stake my life on a bet that if all human-driven cars were replaced by self driving cars tomorrow, the death rate on the roads would instantly plummet.
Self driving cars will dramatically decrease the carnage on our roads
Replacing human drivers with self driving cars would likely take us to the point where a traffic fatality would make international headlines. Oh wait, that already happened. On 7 May 2016, Joshua Brown became the first person to be killed in a robotic car. He died when his Tesla Model S, in autopilot mode, failed to distinguish a white tractor-trailer crossing the highway against a bright sky.
This tragic collision made global headlines, with many lamenting that we just cannot trust the complex job of driving to mere robots. Sadly, in the five minutes it took anyone to read the article and start to worry, thousands more people were being killed by human motorists. Of course, that did not make the headlines, because those deaths are just normal, to the point where we actually expect them.
In fact, if there were a day when nobody in the world was killed by a car, we would all be utterly astonished. Such a day would make global headlines, and would be spoken of in hushed, reverential tones for decades to come.
In the meantime, Google self driving cars completed one million miles in June 2015, with only 12 minor collisions and no fatalities. Eight of those 12 collisions were caused by other cars rear ending the Google cars.
The first motorist who killed a pedestrian
It might come as a surprise to know that there was a time when a motorist killing a pedestrian could make international headlines. This happened the very first time a motorist – Arthur James Edsall – killed a pedestrian. Edsall said he was driving at 4 miles per hour when he struck 44-year-old Bridget Driscoll, the mother of three young children, in London on 26th August, 1896. Witnesses said he was driving recklessly, “like a fire engine.” There were also allegations that Edsall had jerry-rigged the vehicle to drive at 6 or even 8 miles per hour, but these shocking allegations were never proven.

At the time, coroner Percy Morrison said that he hoped that “such a thing would never happen again.” Of course, his hopes were cruelly dashed, given that by 2010, more than half a million more people had been killed on UK roads.
It will come as no surprise to hear that Edsall faced no sanction, as the death was deemed to be “accidental.” This remains the default assumption in all car-related deaths today. That’s why we call them “accidents,” not “crashes.”
People were outraged by that first death, but we didn’t stay that way. We just got used to the death and carnage.
We have come to accept that the leading cause of death for our children is death-by-car, rather than smallpox. After all, cars are so convenient, right?
Not everyone in the world accepted death-by-car as the price of convenience
In the Netherlands, the people rose up in the 1970s, objecting to the slaughter of their children, and demanding safe routes for people to walk and cycle without being in mortal danger of motorists killing them. The pressure group “Stop de Kindermoord” (“Stop the Child Murder”) highlighted the number of deaths caused to children by motorists, and campaigned to reduce these deaths.

The outraged citizens influenced the Dutch government to re-emphasize building of segregated cycle paths, and it was achieved at astonishing speed. Unfortunately, all of us non-Dutch humans just accepted the child slaughter (and all the other deaths) – and anyone who says people-not-in-cars deserve better is accused of being a moronic utopian.
Given all this, it is very hard to believe that robot cars will be worse than human drivers, so death-by-car is almost certain to decrease with the advent of more self driving cars.
Ethics for robotic self driving cars
However, there is one issue we do need to examine, and that is robotic ethics for self driving cars. Bill Ford is the great-grandson of Henry Ford, the man who first mass-produced these ever-so-convenient vehicles with those unfortunate side effects (traumatic injuries and violent death). Ford is now executive chairman of Ford Motor Co., which is working on getting self driving taxis on the road within five years. He’s keenly aware of the ethical issues (which is a relief), and has pointed out that society at large is going to have to agree on a set of ethics for self driving cars.
“How do you want these vehicles to behave? Whose lives are they going to save?” Ford asks.
Those are excellent and important questions. Anyone who is a sci fi fan (or a Will Smith fan) knows that these are the three classic laws for robots:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Isaac Asimov came up with these laws for fictional robots. As it happens, these three laws would go a long way towards constituting a code of ethics for robot cars, too. Especially law no. 1.
But with great power comes great responsibility. Self driving cars have the same terrible power to inflict death and carnage as regular cars, which means they will need to be held to a much higher standard of ethics than robotic butlers, for example. A self driving car taking a short cut through a Sunday school picnic is much more serious than your robot butler spilling the tea or burning the toast.
Take a situation in which a self driving car is about to be rear-ended by a semi-truck (driven perhaps by a human being who has fallen asleep at the wheel after driving for 14 hours – something that would not happen to a robot). In the car is an entire family, including three children. The only way for the robot car to evade the truck and save the family in the car is to veer onto the sidewalk – which unfortunately is occupied by a crocodile of preschoolers on an outing. Which choice should the self driving car make? Who should it kill, and who should it save?
It is going to require very advanced programming to bring robots up to speed with humans on this one. Despite our appalling record, we human beings are usually very good at these split-second decisions. I’ve faced one myself, and recall saving my family from a high-speed head-on collision by taking lightning-fast evasive action (faster than I knew I was capable of reacting). Luckily, there were no preschoolers on the side of that highway.
We humans (usually) understand the value of human life at a very deep, primal level. Faced with a choice of hitting a horse or a stroller with our car, we are always going to instantly choose the horse (even though it’s going to do a whole lot more damage to our car, and very likely injure ourselves).
Of course, humans are not quite as logical as computers, and our ethical choices can get fuzzy at times. For example, some people’s judgment is sometimes clouded by “us” and “them” hatred, so that, for example, a cyclist is regarded not as somebody’s son, but rather as something that is “holding me up” – and hence it is not so necessary to avoid hitting him. In extreme cases, say for example where the cyclist has the temerity to be in the middle of the road, the cyclist can actually become the target of an enraged motorist.
This situation is not helped by the fact that our legal system sometimes seems to have a similar inability to fully grasp that cyclists are human beings. How else do you explain a $750 fine for paralyzing a young man? 32-year-old Jared Fenstermacher of Iowa was cycling cross-country to raise money to fight cancer. A distracted driver driving an uninsured pickup truck mowed him down from behind. As Iowa apparently has no laws against running down a cyclist, the only thing the driver can be charged with is following a cyclist too closely, which attracts a maximum fine of $750. Jared’s parents are understandably outraged.
We have grown to accept human motorists killing thousands of people every single day. But are we going to let self driving cars kill even ONE human being? Hell no, we must have a code of ethics to make them behave the way we wish WE would behave.
Before we get around to making laws about ethics for robotic cars, we might want to think about passing a few laws reflecting the fundamental ethic that cyclists are human beings too, and that motorists are therefore not allowed to run them down.