Science fiction is full of stories of killer machines and the rise of the robots, which often gives us the overblown idea that the technology of the future will be malicious and out to end the human race. We’ve even had people warning about the dangers of the rise of artificial intelligence and how it will come to be mankind’s undoing. The fantastical aside, there is something we do need to consider as we advance our gadgets and devices: should a machine ever be ‘allowed’ to kill a human?
If you’ve read anything by Isaac Asimov you’ll be familiar with the Laws of Robotics. These are a list of unbreakable rules, albeit fictional ones, that prevent an AI from harming a human or, through inaction, allowing a human to come to harm. Seems straightforward enough, and very logical, but I want to go a bit simpler for this post. Let’s take examples that might come up in the very near future.
Take, for example, self-driving cars, a topic I’ve discussed previously and one I’m hugely in favour of. There have been some high-profile incidents in the last few months where these cars have been involved in accidents. Sometimes this has been human error, other times it wasn’t so clear-cut, and once in a while the technology gets the blame. When this happens there is an inevitable outcry from people who are against this technology, arguing that we shouldn’t take control away from humans. Without getting into my usual argument about the number of crashes caused by human error, or the potential for a person to make mistakes, let’s just assume for the moment that I firmly believe autonomous cars becoming the norm will, in the long run, lead to much safer roads, eliminating fatigue, drunk driving and just plain carelessness. Instead, let’s just decide that we’re not going to stop the march of technological progress and that they will end up being the norm in 10 years’ time. What then?
Well, here’s my question. In a no-win situation, do we permit the machine to make the decision to actually kill someone if it’s called for? Why might this happen? Let’s take an example I discussed with someone recently. Suppose a situation arises where a self-driving car turns a blind corner onto a street where people are walking illegally on the road. Its speed is too great to stop before hitting anyone, and its on-board computer works out that if it swerves to the left it will hit a young man, if it goes to the right it will hit a group of three, and if it does nothing it will kill all four. You may have heard this example before in one form or another, but in this case it’s a logical, mathematical decision as opposed to a philosophical one, as the machine has no concept of morality.
Most people, on first hearing this example, argue that a machine should never be allowed to make a decision that would take a life, but in this instance that means all four people are killed. The lesser of the evils is to make the turn in the direction that minimizes the loss of life. If this seems callous, remember that this is the job of triage nurses and field medics on a daily basis. Wouldn’t we want the machine to at least try for the best option of a bad lot? Or do we, wanting to keep our control over our creation, allow four to die rather than grant the machine the ability to override the primary directive: do not allow a person to be harmed?
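Stripped of the philosophy, the rule described above is just a minimization. Here is a minimal sketch of that idea; the option names and casualty counts are purely illustrative, taken from the blind-corner example, and are not drawn from any real autonomous-driving system:

```python
# Hypothetical sketch: pick the action with the fewest predicted fatalities.
# The options and counts below mirror the blind-corner example only.

def choose_action(options):
    """Return the action whose predicted fatality count is lowest."""
    return min(options, key=lambda action: options[action])

options = {
    "swerve_left": 1,   # hits the young man
    "swerve_right": 3,  # hits the group of three
    "do_nothing": 4,    # hits all four
}

print(choose_action(options))  # -> swerve_left
```

Of course, a real system would be reasoning over probabilities rather than certain outcomes, but the core decision is this simple comparison.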
All of a sudden I’m leaning towards letting the machine make that decision. After all, the computer can logically and dispassionately run thousands of calculations and make judgments at a speed the human brain may not be capable of in the moment. It could become the ultimate form of triage. By taking the emotion out and using every piece of available information, the machine could potentially make the best decision in an impossible situation. I have no doubt there’d still be major criticism afterwards, and the obvious question of ‘why couldn’t it do something else?’ will come up, but let’s just assume these were the only options. Would you want a split-second decision like this made emotionally or logically?
Now a counterpoint. What about those times when people believe the decision should take the emotional factor into account? If it was a child on one side there might be a very different decision process on the part of a person that a machine may not account for: someone at the beginning of their life, with untold potential, against, say, two older people in the sunset of theirs. The possibilities and permutations are endless, but the bottom line is, I’d still rather the machine be allowed to try the ‘least worst’ option than do nothing and cause the death of everyone in this case. There’s nothing to be gained by restricting its ability to use a logic gate to solve this problem.
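The emotional factor could, in principle, be folded into the same minimization by weighting each predicted casualty rather than simply counting them. The weights below are entirely made up for illustration; deciding what, if anything, such weights should encode is precisely the ethical question this post is wrestling with:

```python
# Hypothetical extension: weight each predicted casualty instead of counting
# them. All names and weight values here are illustrative assumptions.

def weighted_harm(casualties, weights):
    """Sum the weight assigned to each predicted casualty."""
    return sum(weights[person] for person in casualties)

def choose_action(options, weights):
    """Pick the action whose predicted casualties carry the least total weight."""
    return min(options, key=lambda action: weighted_harm(options[action], weights))

# A child on one side, two older adults on the other (illustrative only).
weights = {"child": 2.5, "adult_a": 1.0, "adult_b": 1.0}
options = {
    "swerve_left": ["child"],
    "swerve_right": ["adult_a", "adult_b"],
    "do_nothing": ["child", "adult_a", "adult_b"],
}

print(choose_action(options, weights))  # -> swerve_right
```

Note that a plain headcount would choose the single child; the weighting flips the decision towards the pair, which is exactly the kind of value judgment a person might make and a naive machine would not.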
So many books, films and television shows have touched on this topic in one form or another, but there was always the sense that this idea was years off or would remain in the realms of fiction. As AI, deep learning and technology in general have advanced, this is something that needs to be thought about today, and no longer just in the abstract.
There are those out there who push back on scientific progress removing the human factor, and even more who believe that this progress should not be made at the risk of human life, but we need to remember that, historically, progress has always had risk attached. From getting man to the moon to medical advances, there is always the chance that someone could be hurt or killed, yet these are not looked upon in the same way. Of course we need to minimize the chances that humans will be affected, but there can be no progress without risk, no reward without adversity, and while examples like driverless cars may seem now like a luxury, the goal is ultimately safer roads and less risk to life.