By Harry Keller
Editor, Science Education
Artificial intelligence has appeared in a great many movies over the years, often in the form of robots. The latest is Chappie, a movie that has been panned by a majority of critics but apparently enjoyed by quite a few moviegoers.
Movie robots (or AIs) have been both good and bad. The first I recall was Robby in Forbidden Planet, the first science fiction (SF) movie to adhere to the scientific ideas of its time. This 1956 movie starred Leslie Nielsen when he was still playing romantic leads. The character of Robby created quite a stir at the time. He was definitely a benevolent robot, one unable to harm humans. An immense computer system, the hidden evil element of the movie, served as a foil.
Most people remember HAL, the AI embedded in the spaceship of 2001: A Space Odyssey. That movie debuted twelve years later and showed how AI could be a force for evil. Few who saw it will forget HAL's creepy voice (and the name, notably, sits one letter apiece short of IBM alphabetically).
I probably will not see Chappie for several reasons based on the reviews and my viewing of the trailers. The concept of artificial intelligence rising to the level of human consciousness bothers me, not for religious but for scientific reasons. However, many students probably will see it if only because of its themes involving street gangs and defiance of authority.
The movie may not truly be exploring AI at all. It might be using the robot as a mirror for us. Like many such movies, it may be investigating whether a machine can have a soul in the religious sense. Can man play God and create creatures with souls? Because such speculations are so metaphysical, I will confine myself to the mere physical aspects of these portrayals.
Movie makers like to give their AI creations human personalities. By so doing, they advance the plot and engage the viewers. Can a machine have a human personality? Can a machine be completely human? These are questions to ask students of many ages. The obvious answer, “Anything’s possible,” dodges the question. Instead, put a time frame on it.
Will machines be capable of human thought in ten years? In twenty? By 2100? How will we even know if they reach that level? Is the Turing test sufficient? And for teachers, a personal one: will a computer ever replace a teacher?
The last is a trick question because computers already can replace poor teaching but don't come close to mimicking good teachers. That is one point of reference you might choose. You also could ask about reproducing the genius of Newton or Beethoven or Picasso. Computers and robots already have replaced people in some very repetitive and monotonous tasks, as well as some very dangerous ones. Will the trend continue? For how long, and to what end?
I don’t see it happening within twenty years. The counterargument is that we are building ever faster CPUs and ever larger, more compact memories. Eventually, the argument goes, we’ll surpass the speed and capacity of the human brain. This analysis has one problem: without software, those technological wonders are useless. Humans write software. Humans are very fallible. Writing a program that really can duplicate human thought is an extreme exercise in bootstrapping. We mere humans will be unable to do it, in my opinion.
The rebuttal that many have seen is that computers will write their own software. It’s a sort of positive feedback loop: the computer writes a better software-writing program, which then writes an even better one, and so it goes with great rapidity. Once begun, only a few days would produce a program that could outthink any human who ever lived, indeed the combined intelligence of the billions of humans on the planet. The typical doomsday plot then has the AI deciding that humans are a scourge on the planet and determining to exterminate them.
Someone still has to write that original program. Sure, machine learning, as a discipline, has come a long way, but it still is not capable of this sort of creative thought. It can adapt, but it cannot build itself up to higher planes of functionality.
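One way to make this distinction concrete for students is a tiny sketch of what "learning" actually means in today's machine learning. The example below (a hypothetical classroom illustration, not drawn from any real system) fits a straight line to a few data points by gradient descent. The two numbers w and b adapt to the data, but the program's structure, a straight line chosen by its human author, never changes; the machine cannot decide to become something more.

```python
# A minimal sketch of machine "learning": fitting y = w*x + b by gradient descent.
# The parameters w and b adapt to the data, but the model's form (a straight line)
# was fixed by the human who wrote the program and can never change on its own.

data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2), (4.0, 8.8)]  # roughly y = 2x + 1

w, b = 0.0, 0.0   # the only things the program is allowed to change
rate = 0.02       # step size, also chosen by the programmer

for step in range(5000):
    grad_w = grad_b = 0.0
    for x, y in data:
        error = (w * x + b) - y      # prediction minus target
        grad_w += 2 * error * x      # derivative of squared error w.r.t. w
        grad_b += 2 * error          # derivative of squared error w.r.t. b
    w -= rate * grad_w / len(data)
    b -= rate * grad_b / len(data)

print(f"w = {w:.2f}, b = {b:.2f}")   # converges near the least-squares fit
```

After enough steps, w and b settle near 1.94 and 1.15, the best straight-line fit to these points. That is adaptation within a frame a person built, not thought.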
My analysis does not preclude apparent human functioning over a limited scope. I can imagine a computer programmed to make small talk, for example. Computers certainly can play chess and diagnose illnesses, albeit imperfectly just like doctors. These examples are not real thought. They are examples of the computer as a tool harnessed by humans for a task.
Certainly, the prospect of sentient computers is frightening to many, I'd suspect to most, though it may be comforting to some who seek order in a chaotic world. It's alarming enough that Stephen Hawking and Elon Musk have warned against it. Yet, even if it were possible, it could not happen without the assistance of that old standby of moviedom, the "evil genius." Someone can simply pull the plug unless thwarted by a misguided enabler. Furthermore, the person behind the computer has almost certainly put code in the program to support their own aims and duplicate their own vision. They will guide the computer just as we guide our children, reducing its scope considerably.
These new machines will be designed from the chip level up to be slaves to humans. Just as Isaac Asimov famously coined the Three Laws of Robotics (and then proceeded to use his fiction to find holes in them), so will future robotics efforts always seek to make robots and other AIs subservient. Do not mistake these creations of silicon and metal for beings of higher capacity enslaved into a low state. They do not have the inherent capability to reach a higher state of existence. Their software determines what they are, and software for true self-awareness is far, far away, perhaps forever.
At their fundamental roots, computers are truly stupid. Software engineers use code to tell them what to do, and they follow those instructions no matter what. They have no more ability to do otherwise than the sea has to climb Mount Everest; less, I'd say. Because humans have written incredible software, building upon earlier efforts in a pyramidal construction over decades, people see what seem to them remarkable attributes of these machines. They can talk and even understand your speech to some extent. They can control oil refineries and forecast the weather, more or less. Behind every single one of these software-controlled appliances, for appliances is what they really are, is a person or persons responsible for what they can and cannot do.
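The point that a computer obeys its instructions with no judgment of its own can be shown with a toy example. The little "machine" below (entirely hypothetical, written for illustration) executes whatever list of instructions it is handed, one after another. It cannot refuse, question, or rewrite its program; everything it does was decided by the person who wrote the instructions.

```python
# A toy "machine" that blindly executes whatever instructions it is given.
# It has no judgment: it cannot refuse an instruction or invent a new one.

def run(program):
    """Execute a list of (operation, argument) pairs on a single register."""
    register = 0
    for op, arg in program:
        if op == "set":
            register = arg
        elif op == "add":
            register += arg
        elif op == "mul":
            register *= arg
        else:
            # Faced with anything its author did not anticipate, it simply fails.
            raise ValueError(f"unknown instruction: {op}")
    return register

# The human decides what the machine does; the machine just obeys.
result = run([("set", 5), ("mul", 3), ("add", 1)])
print(result)  # 16
```

Every "smart" behavior a computer shows is, at bottom, layer upon layer of instruction-following like this, each layer written by people.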
Don’t worry about intelligent machines so much as about malevolent people who will use anything at hand to further causes that harm more people than they help. The current crisis over the so-called Islamic State is a case in point: it shows how much harm can come from putting powerful tools into the wrong hands. Imagine if these people held nuclear weapons and delivery systems.
We cannot halt the advance of technology. We must change our world so that those who are sociopaths and psychopaths are not enabled to obtain powerful tools to harm others.
Those who have read many of my writings will know that I think we can help avoid these problems with good education for all. Eliminating worldwide poverty will also be important and may be an outcome of much better education worldwide. You know that I do not mean simply more schools; quality is important here. We must teach our young across the world good thinking habits. I have dedicated the later part of my life to this cause in one small way. I salute every teacher and entrepreneur who shares this goal.