Adjustable ethics at the wheel of a self-driving car


The age of the self-driving car may be nigh. The South Australian government has recently announced it will allow trials of self-driving cars on its roads. Numerous US states are planning to do, or are already doing, the same.

Self-driving cars could save many lives. The Australian road toll remains substantial: 1193 deaths in 2013. Many deaths are caused by driver inattention, tiredness, being under the influence of drugs or alcohol, recklessness, and so on.

Computers, as we know, don't get bored, sleepy, drunk, or angry. They don't show off to passengers. They don't send texts on a freeway. Self-driving cars are, if we believe the advocates (and manufacturers), level-headed, infallible driving, well, machines.

But driving involves decision-making, and only some of those decisions are purely mechanical. Others are ethical. How exactly will self-driving cars make those decisions?

Imagine, for example, you're in your self-driving car, travelling at speed on a highway. Suddenly an oncoming road train swerves into your lane and thunders towards you. You may just be able to swerve, but unfortunately five men are standing on the side of the road, and you will surely hit them. Should the self-driving car swerve, and probably kill five people, or stay the course and likely kill you (and maybe the road train driver)?

If you were driving, it seems likely your instincts would kick in and you would swerve, killing the five men but saving your own life. But you aren't driving. The car is driving, powered by algorithms written somewhere in California.

Those programmers will have to decide how the car's 'instincts' will work. Should it drive like a human, and protect the driver even where it harms others? Or should it be utilitarian, and sacrifice the driver for the common good where necessary?

There's undoubtedly something unsettling about computers deciding to sacrifice our lives for others. But maybe that's just too many viewings of 2001: A Space Odyssey speaking. Surely if we now have the power to overcome our base, selfish instinct for self-preservation and replace it with more noble values, we should do it.

But is it that simple? Self-driving cars will save many lives if they catch on, but convincing people to buy them may be difficult. One sure way to ensure self-driving cars never become popular is for it to become known that they are programmed to kill you and your loved ones if circumstances require. That isn't a feature any dealer is likely to mention alongside the Bluetooth-compatible in-car entertainment system and cup holders. Perhaps self-driving cars should protect their drivers at the expense of others simply so that they sell well, because widespread adoption would save more lives overall.

And besides, is it always wrong to value your own life and the lives of your loved ones above those of strangers? People often regard parents as monstrous if they don't value their children's lives above everything else, including other adults' lives.

These questions don't have easy answers. And there are further moral dilemmas — should self-driving cars preference young lives? Should cars behave differently if there's a baby on board? How great a risk should be taken to save an animal, or property?

Regardless of what you think about these questions, it's a little troubling that their answers may soon be mandated by a bunch of programmers in another country. That's led some to suggest self-driving cars should have 'adjustable ethics settings'. While this sounds suspiciously like it could be a gag from The Hitchhiker's Guide to the Galaxy, it isn't.

The idea is that people have a say in what ethical decisions their car makes on their behalf. A user could tell the car whether it should assume she is worth more than her fellow human, whether it should protect the young over those who've already had a good innings, and so on. One imagines adjusting the settings might involve a rather tiresome and difficult questionnaire.

Patrick Lin, writing in Wired, calls this a 'terrible idea'. He says it 'merely punts responsibility' for resolving ethical quandaries to the consumer rather than the manufacturer. Lin seems to think manufacturers simply need to 'get it right'.

I don't want to bring down this brave new world with pessimism, but what if there's no answer to these questions we can all agree on? While we mostly agree on the basics (the Golden Rule and so on), philosophers and theologians have had quite some trouble over the centuries agreeing on detailed moral codes that provide certain and palatable answers to every dilemma.

Pending the discovery of universally accepted moral legislation, perhaps the best we can do is put individual human beings in control, for better or worse. Self-driving cars might allow some people to overcome their instincts and more freely exercise their agency by making noble, selfless moral choices.

Others will probably make less noble, but understandable, choices. Either way, I'd rather live in a world where I'm beholden to the moral choices of my neighbours than to inscrutable edicts made in far-off corporate ivory towers and executed by automatons.


Patrick McCabe is an Adelaide writer.

Self-driving car image: Marc van der Chijs, Flickr CC

Topic tags: Patrick McCabe, self-driving cars


 


Existing comments

Early last year, I was involved in a frightening road accident. There were two of us in the car, and I was the passenger. I learned that adverse events happen very, very quickly when driving, with survival instincts naturally taking control. No time to think about ethical considerations. Thankfully, neither of us were seriously injured. Our car was written off and our new 4WD has lots of safety features. And chastened owners.
Pam | 20 November 2015