Deus What?

By Harry Keller
Editor, Science Education

Yet another “robot” movie has appeared, and another Terminator movie is scheduled for release. It’s the robots who should be saying, “I’ll be back.”

Because I have already written on robots and artificial intelligence, writing about the latest opus, Ex Machina, may seem anticlimactic. This movie certainly has some excellent optics. Just four characters make up the speaking parts. One more is important to the plot, and only ten are listed in the credits. This is a small movie when measured by personalities on screen. The special effects that make Ava look mechanical are almost astounding and, along with the scenery and sets, make this a large movie.

The premise that a lone genius can create an artificial intelligence that passes muster as capable of human thought is an enormous stretch. That he also can fit it all into a human framework that can walk bipedally and perform other human-like actions is beyond imagining. You really must suspend disbelief to watch this movie and seek its philosophical underpinnings.

In the end, it’s the same old story that we’ve seen since Mary Shelley wrote Frankenstein. Man should not play god, and creating new life is extremely dangerous. The movie plays on the morality of choosing to be god and on people’s fears of the unknown, especially when created by a “mad scientist.”

We have seen Elon Musk and Stephen Hawking warn against the rise of artificial intelligence (AI). My previous article pointed out that this warning and the fear that it engenders are exaggerated. We have yet to see Ms. Shelley’s vision realized after nearly 200 years. That’s a very long time. The quest for AI that can truly emulate humans is likely to take as long, if it ever succeeds.

Importantly, Ex Machina reminds us of exactly what the famous Turing Test involves and implies what really is required for it to be meaningful. In particular, it mentions simulated behavior. When exactly does a computer program move from emulating human behavior to being human? And can we tell the difference?
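
To make the protocol concrete, here is a minimal sketch of the imitation game in Python. The judge and the two hidden respondents are hypothetical stand-ins of my own; the film, of course, dramatizes the test rather than coding it.

```python
import random

# Hypothetical stand-ins: in a real test these would be a person at a
# terminal and the candidate machine. Here both give canned replies.
def human_reply(prompt: str) -> str:
    return "Hmm, let me think about that."

def machine_reply(prompt: str) -> str:
    return "Hmm, let me think about that."

def imitation_game(questions, judge) -> bool:
    """One round of the Turing Test.

    The judge sees two anonymous transcripts, labeled A and B, and must
    say which came from the machine. The machine 'passes' when judges
    can do no better than chance over many rounds.
    """
    respondents = [human_reply, machine_reply]
    random.shuffle(respondents)                      # hide who is who
    labels = dict(zip("AB", respondents))
    transcripts = {
        label: [(q, respond(q)) for q in questions]
        for label, respond in labels.items()
    }
    guess = judge(transcripts)                       # judge returns "A" or "B"
    truth = "A" if labels["A"] is machine_reply else "B"
    return guess == truth                            # True: machine was caught

# A judge guessing at random catches the machine only half the time.
caught = imitation_game(["What is it like to dream?"],
                        judge=lambda t: random.choice("AB"))
```

Over many rounds, the judge's statistical blindness, not any single clever answer, is what passing means.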

Although writer and director Alex Garland has given an answer to this question, I find that his answer falls short. Thinking about it, I see the effort as asymptotic: at each stage of the quest, the machines will be more human-like but never quite reach being human. Computers always will be programmed. That’s their nature. You can improve the programming, remove bugs if you will, but you will never escape the fundamental nature of the circuits and the software.

To be sure, programmed machines can be very dangerous and quite scary indeed if their programs have been written carelessly or with evil intent. You could program a computer system to recognize humans and destroy them on sight, a sort of Dalek mentality. This, however, is hardly self-awareness.

To build something that is like humans, you must somehow pass beyond the nature of computers. Stored programs and memory chips are unlikely to become anything more than devices that ape particular human behaviors without being human at all. Carried far enough, these human emulators could become dangerous, but we should not build such experiments without a readily accessible off switch.

Beyond all of this, the movie brings up another question. Why should anyone, even someone as unhinged as Nathan (played quite well by Oscar Isaac) want to create a machine that can think as humans do? I fully understand “expert machines” that, for example, can help doctors with diagnoses. I understand improving them. However, were you to make them like humans, wouldn’t they have the fallibilities of humans in making those diagnoses? Wouldn’t you be defeating the purpose of creating the expert machine in the first place?

Making a self-aware machine would mean that the machine would start to doubt itself at times and would exhibit hubris at other times. In other words, it would suffer from the human characteristic of making mistakes for reasons not linked to its programming or purpose. Our intellects are at once so good and so flawed that duplicating them is essentially impossible on the one hand and nonsense on the other.

All of this discussion brings us to the point of asking what exactly intelligence is. What is it that “artificial intelligence” is attempting to duplicate? Putting aside that human intelligence is multidimensional (painters, sculptors, playwrights, mathematicians, et al. can each be brilliant in their own area and be clumsily hopeless in others), at its root, intelligence is about absorbing data and then processing it in a meaningful way. Smarter people do it better, but intelligence does not come in a series of discrete plateaus. It’s a continuum. What point on that continuum are people like the fictional Nathan aiming for when they play their computer programming games?

I see the answer as nowhere. Computers, including AI, are tools. Cars are tools too. They already can park themselves, and soon they will avoid accidents automatically. Eventually, they will require only a spoken destination and will navigate you there safely and efficiently. Is that AI? Yes, it is, if you define AI properly. AI does not mean self-aware. It means displaying some sort of specific intelligent behavior. The chess-playing computers in the discussion between Nathan and Caleb (played very well by Domhnall Gleeson) are smart enough to beat international grandmasters and even the world champion, yet they cannot understand that they are playing chess. They are AIs blindly following the programs loaded into them. An idiot could do the same, only much more slowly. Speed alone does not define human, self-aware intelligence.
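
That blindness is easy to exhibit. Below is a generic minimax search of the kind at the heart of such chess programs, my own illustrative sketch rather than any particular engine: it grades positions with a fixed scoring function and picks the move with the best grade, with no notion anywhere that a game is being played.

```python
def minimax(state, depth, maximizing, moves, apply_move, score):
    """Exhaustively grade positions a few moves ahead.

    `moves`, `apply_move`, and `score` are game-specific callbacks
    supplied by the caller. The procedure 'plays' by arithmetic alone:
    it has no notion that a game, let alone an opponent, exists.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return score(state), None

    best_move = None
    if maximizing:
        best = float("-inf")
        for m in legal:
            value, _ = minimax(apply_move(state, m), depth - 1, False,
                               moves, apply_move, score)
            if value > best:
                best, best_move = value, m
    else:
        best = float("inf")
        for m in legal:
            value, _ = minimax(apply_move(state, m), depth - 1, True,
                               moves, apply_move, score)
            if value < best:
                best, best_move = value, m
    return best, best_move
```

Real engines add pruning, opening books, and staggering speed, but the shape is the same: arithmetic over positions, not understanding of chess.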

New ideas in computer engineering may someday lead to a malleable, associative system of programming and memory in which the programs themselves are part of the memory and vice versa. Like a baby, the system might develop thought processes slowly over time as it tested its environment and learned from it. The amount of data involved is mind-boggling, and the resulting programming is so complex as to stymie the greatest of geniuses. Writing software to behave like a human is beyond human intelligence, in my opinion. Gods do not create themselves; neither do people.
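
For a hint of what such a malleable, associative system might look like in miniature, consider a toy Hopfield-style memory, a speculative illustration of my own rather than anything proposed in the film: the stored patterns are the weights, and the same weights are the procedure that recalls them, so program and memory share one structure.

```python
import numpy as np

class AssociativeMemory:
    """Toy Hopfield-style memory: data and 'program' share one structure.

    Patterns are vectors of +1/-1. Storing a pattern updates the weight
    matrix (Hebbian learning); recalling repeatedly applies those same
    weights until the state settles.
    """
    def __init__(self, size: int):
        self.weights = np.zeros((size, size))

    def store(self, pattern: np.ndarray) -> None:
        # Hebb's rule: units that fire together wire together.
        self.weights += np.outer(pattern, pattern)
        np.fill_diagonal(self.weights, 0)

    def recall(self, cue: np.ndarray, steps: int = 10) -> np.ndarray:
        state = cue.astype(float).copy()
        for _ in range(steps):
            state = np.sign(self.weights @ state)
            state[state == 0] = 1          # break ties deterministically
        return state

# A corrupted cue settles back to the nearest stored pattern.
mem = AssociativeMemory(8)
mem.store(np.array([1, -1, 1, -1, 1, -1, 1, -1]))
noisy = np.array([1, -1, 1, 1, 1, -1, 1, -1])  # one bit flipped
print(mem.recall(noisy))
```

Scaling anything like this toy to the richness of a human mind is exactly the mind-boggling part.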
