Stephen Hawking thinks that Artificial Intelligence could spell ‘the end of the human race’. Well, that’s not really that hard, is it? It’s only six words, and pretty easy ones to spell. However, he is making a serious point that will become an issue eventually as our computers get more advanced and robots become a big part of our culture.
You may have seen the ads where a Ford Focus parks itself without human help. However, that is nothing compared to what is coming in the next fifteen years or so. I predict we will soon have electrically driven, driverless automatons: you simply say where you want to go and the vehicle does the rest, just as in sci-fi films like Total Recall. Too far-fetched? Not at all. The prototypes are already being made and, with a few tweaks, will be available within a few years.
This will create significant problems for many people. Will everyone have to have one to make sure our roads are accident-free? Some people don’t like automatic transmission cars now, so they will be apoplectic at the thought of not being able to control the vehicle at all. Personally, I love automatic cars and not having the hassle of clutch control and changing gear manually. There is nothing manly or macho about doing that, although some men try to pretend there is.
Can you imagine autonomous cars, presumably electrically powered, competing with each other to race away from the traffic lights? Or shouting at the one in front that is keeping meticulously to the speed limit?
Will governments be able to force electric automated cars upon us? Will manufacturers go along with it, or have ‘sweeteners’ to induce them? Well, the way some governments are now forcing undemocratic measures upon us in schools and health care does not augur well. If car manufacturers cease producing petrol-driven manual vehicles, then what choice would we have?
There would be many other spin-offs too, as I see it. Without the need for petrol engines, the Middle East will no longer control economies and will see its wealth diminish. There will be less political involvement in those areas, and fanatical groups will not get funds so easily. America would undergo vast change too as the oil fields become less important. However, it’s easy to imagine what corruption and chicanery would take place to stop oil companies and related businesses from losing their power and profit.
On the plus side, you would not need companies like confused.com, as motor insurance would be practically redundant. Think of the money saved! You might need robotic insurance, though. So, there are many problems ahead before fully automated electric cars are the norm.
Let us imagine that people who love the environment and want the best for our planet have their way and all vehicles are electric, fully automated and driver-less. Accidents will be rare because all speed limits will be obeyed and computers will use sensors to make the correct decisions. Or will they?
Completely automated cars pose a moral dilemma, and I am not certain that everyone involved has thought this through thoroughly. In an emergency, with limited options, how will the car’s computer choose whom to protect or save (passengers, pedestrians or other road users), and in what order of precedence? For example, on a narrow cliff pass, where a person, say a child, falls into the road from another vehicle, giving no time to brake sufficiently, does the computer decide to swerve, thereby going over the cliff and killing the passengers, or carry on braking, knowing the child will be run over?
Or imagine a robot car faced with a pedestrian running out into the road, with no time to brake sufficiently. The car has to swerve or continue on. On the offside of the road is an oncoming car, and on the nearside a group of pedestrians stands on the pavement waiting for, let’s call it, an auto-bus. Whom does it choose to save? In a fraction of a second it will have to compute the degree of injury or death for each pedestrian or passenger. Maybe it takes into account the life expectancy of each person, whether they are pregnant, or other family considerations. Does it decide that the person running into the road is negligent and run them over? See the problem? In certain situations, whom does the computer decide to save? How will a programmer create a programme to do this? Who will be responsible for building morality into the computer brain, and whose morality will it be?
There is a school of thought that ‘the moral action is that which causes the maximum happiness to the maximum number of people’: a form of utilitarianism. Therefore, should the car be programmed simply to save the greatest number of people, whether passenger or pedestrian, child or adult?
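Reduced to its crudest form, that utilitarian rule is alarmingly easy to write down, which is rather the point. The sketch below is a toy illustration only, not a real control system; the manoeuvre names and survivor counts are invented for this hypothetical emergency:

```python
# A toy illustration of the utilitarian rule: among the available
# manoeuvres, pick whichever one spares the greatest number of people,
# regardless of who they are.

def choose_manoeuvre(options):
    """options maps a manoeuvre name to the number of people expected
    to survive if it is taken. Returns the manoeuvre with the most
    survivors."""
    return max(options, key=options.get)

# Hypothetical emergency from the scenario above: brake and hit the
# person who ran into the road, swerve into the oncoming car, or
# swerve onto the pavement full of waiting pedestrians.
scenario = {
    "brake_and_hit_runner": 6,
    "swerve_into_oncoming": 4,
    "swerve_onto_pavement": 2,
}
print(choose_manoeuvre(scenario))  # -> brake_and_hit_runner
```

Notice what the six-line rule quietly ignores: negligence, age, pregnancy, life expectancy, and every other consideration raised above. Each one the programmer adds is a moral judgement smuggled into code.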
And what would be the effect on case law for claims of compensation? When I studied law, we were taught that an error of judgement is not necessarily negligence. So, could a computer decision be immune from such claims because its programmer made an error in the code that helped it make a decision? Again, going back to principles of English law, there is the principle of volenti non fit injuria (a willing man suffers no injury). Will passengers in robot cars be deemed to be automatically running the risk? If you purchase such a car, or ride in one, would you be deemed to have waived any rights to compensation?
In his 1942 short story Runaround, Isaac Asimov proposed three Laws of Robotics: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings except where such orders would conflict with the first law; and a robot must protect its own existence as long as such protection does not conflict with the first or second law. Later, Asimov added a fourth, or zeroth, law that preceded the others in priority: a robot may not harm humanity or, by inaction, allow humanity to come to harm.
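Asimov’s laws are, in effect, a strict priority ordering, and that much at least can be sketched in code. The sketch below assumes a drastically simplified world in which every candidate action carries three ready-made true/false labels; the actions themselves are invented examples, and of course the hard part, deciding whether an action really harms a human, is exactly what the trolley-style dilemmas above show to be unsolved:

```python
# A sketch of Asimov's priority ordering, assuming each candidate
# action comes pre-labelled with three boolean judgements.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool     # would this injure a human? (First Law)
    obeys_order: bool     # does it follow the human's order? (Second Law)
    preserves_self: bool  # does the robot survive it? (Third Law)

def permitted(actions):
    """Discard any action that harms a human, then rank the rest:
    obedience first, self-preservation second."""
    safe = [a for a in actions if not a.harms_human]
    return sorted(safe, key=lambda a: (not a.obeys_order, not a.preserves_self))

actions = [
    Action("strike the intruder", harms_human=True, obeys_order=True, preserves_self=True),
    Action("fetch tools as ordered", harms_human=False, obeys_order=True, preserves_self=True),
    Action("hide from danger", harms_human=False, obeys_order=False, preserves_self=True),
]
print(permitted(actions)[0].name)  # -> fetch tools as ordered
```

Even this toy version omits the ‘through inaction’ clause of the First Law, which is precisely the clause Asimov’s own stories exploited to generate dilemmas.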
I think there is another law which should be incorporated into any programming of AI: it should not be allowed to be self-aware or conscious, i.e. it should not be sentient. To ignore this would probably mean the slave becoming our master. For a robot or AI endowed with super-human intelligence, how difficult would it be to figure out how to access and revise its own core programming? As the scientist Tim Helm implied, isn’t it dangerous to create what philosophers might label ‘beings of moral significance’? We have been created with a consciousness that instils ethical responsibility. Can programmers really replicate a human brain accurately enough to do that? We have the free will to choose which decisions to make. That is why we are not perfect; to be perfect we would have no free will. Can a computer really be altruistic?
If robots and A.I. proliferate, could humans become a subspecies? I don’t think so. If our programmers are ethical people then surely they will find a way to prevent this… or maybe get an A.I. robot to solve it for us? On the bright side, A.I. could mean an end to wars, disease and world poverty. We could revert to ancient classical Greece, with robots as our slaves doing everything needed to produce food and products while we sit around philosophising and playing musical instruments. That world might be safe, but a bit boring?
And I wonder what the media would make of it all? What virtues would they find to ‘celebrate’ in a cyborg or a robot? Would they make ‘celebrities’ out of some of them? In a world that makes celebrities out of people merely for appearing on television in so-called reality programmes, where people of limited talent are made famous and soap operas are alleged to reflect real life (they don’t, by the way), would cyborgs and robots have to have a programmed personality? How would they act? A bit better than some so-called stars? Actors wouldn’t be called ‘wooden’ any more. They might be described as metallic, or would that be taking the Metal Mickey? Maybe talent shows where all the contestants sound like Stephen Hawking? Which is where we came in.
So no, I don’t really think the human race will cease because of Artificial Intelligence. It might cease because of Human ‘Intelligence’, though, if warning signs about the ecology of our planet are not heeded or political morality declines even further.