Driverless cars. It’s the stuff of fiction made real. And, all going to plan, they’ll be on the roads of the UK within the next two years – thanks to plans for a comprehensive trial of autonomous driving on the British road network, the success of which could see a full launch to market by 2020. It follows Chancellor Philip Hammond’s inclusion of driverless cars in his £270 million investment announcement for new ‘disruptive technology’ (yes, that phrase again!).
So, they’re real, and they’re happening (soon).
Let’s project forward, for a moment. You’ve taken the plunge and decided to embrace the future in all its technological glory; hanging up the keys to your old Cortina in favour of a shiny new driverless car. Ready to sit back and enjoy the view as your vehicle, powered by data and software coding, quietly takes control of the wheel to ferry you to your destination.
On your maiden voyage, the car heads merrily along a picturesque A road when suddenly, as you pass some road works, a child runs out in front of you, too close to brake to a halt. Evasive action is required, and the scenario presents three choices:
- Swerve left, and crash into the trees
- Swerve right and plough into the three labourers in the road works
- Continue on and hit the child
Not great options, I grant you. A basic choice between death or serious injury to you, three workers, or a child.
Of course, such a split-second decision would be tough for anyone driving a car, the outcome likely to come down to instinct. But what about the car that’s controlled by computer?
What data will be used to make such a decision?
Is the car programmed to value the life of the passenger above all else? Or is it programmed to identify a child, and protect the younger person in the scenario? Maybe it’s designed to make decisions based on cold, hard, logical numbers, risking the life of one person to save four others.
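If a car really were programmed on cold, hard numbers alone, the logic might reduce to something as blunt as the sketch below. To be clear, this is purely illustrative: the actions, harm scores, and tie-breaking are all invented for this example, and no real autonomous vehicle makes decisions this way.

```python
# A crude "minimise expected harm" chooser, for illustration only.
# The harm scores below are invented stand-ins for the number of
# people put at risk by each option.

def choose_action(options):
    """Pick the option with the lowest expected-harm score.

    On a tie, Python's min() keeps the first option listed, so the
    ordering of the list quietly becomes a moral choice too.
    """
    return min(options, key=lambda o: o["expected_harm"])

options = [
    {"action": "swerve_left",  "expected_harm": 1.0},  # risks the passenger
    {"action": "swerve_right", "expected_harm": 3.0},  # risks three workers
    {"action": "continue",     "expected_harm": 1.0},  # risks the child
]

print(choose_action(options)["action"])
```

Even this toy version exposes the problem: someone has to decide what the numbers mean, and whether a passenger, a worker, and a child all score the same.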
It certainly presents something of a moral dilemma, and an issue to be grappled with as we enter a new era of ever more autonomous technology.
OK, so this kind of ‘what if?’ scenario is something of an extreme example to make a point. It’s not lost on the programmers at the cutting edge of this tech, who argue that such scenarios would only occur because of a mistake made earlier in the decision-making process. Their job is to ensure those mistakes don’t happen, eliminating the risk.
Fair enough, but as any risk assessor will undoubtedly say, risk can certainly be reduced, but it can’t be eliminated.
Accidents will happen. Maybe not played out as per the example above, but something will go wrong, at some point. Somehow.
And when an accident does occur, who pays? Who bears the responsibility and liability, in the eyes of the law, and the minds of the insurers?
The Vehicle Technology and Aviation Bill
In the face of such ethical, technical, and legal complications, new guidelines have been put forward as part of the Vehicle Technology and Aviation Bill, which has recently gone before our soon-to-be-dissolved Parliament.
In it, there’s a degree of clarity regarding situations where an autonomously driven vehicle is deemed to be the party responsible for an accident.
Namely, that the insurer of that vehicle pays out.
On a simplified level, the vehicle owner would be required to have vehicle insurance just as they would for one of those old-fashioned cars that you have to drive yourself.
And while the bill has been backed by the industry, one cannot help but see potential complexities that will need to be ironed out along the way.
For instance, there’s a stipulation that liability may not fall upon the insurer if the vehicle is deemed to have been driving autonomously when it was unsafe or inappropriate to do so.
That seems a little vague. Who decides when it is, and is not, safe to allow autonomous driving?
Furthermore, we’re talking about vehicles powered by smart technology and computer programming, which presumably means they share the same kinds of vulnerability to software failure, hacking, and cyber-crime as other Internet of Things (IoT) devices.
Does that mean a policy is void if the owner fails to update the software? And what if the car is hacked, resulting in an accident-causing malfunction?
The age of the driverless car is approaching, and in many ways (not least for those who remember Knight Rider) that’s an exciting breakthrough in futuristic technology, presenting an opportunity for safer, calmer roads, more productive use of travel time, and fewer instances of fatigue at the wheel.
But their development poses questions of an ethical and legal nature – questions that are slowly being identified and addressed by insurers and legislators, but with complexities and charcoal-grey areas that may yet prove problematic by the time we see that self-driving fleet car cruising past us on the M6.