Can computers think?
The case for and against artificial intelligence
Artificial intelligence has been the subject of many bad '80s
movies and countless science fiction novels. But what happens when we
seriously consider the question of computers that think? Is it possible for
computers to have complex thoughts, and even emotions, as Homo sapiens do? This
paper will seek to answer that question and also look at what attempts are being
made to make artificial intelligence (hereafter called AI) a reality.
Before we can investigate whether or not computers can think, it is
necessary to establish what exactly thinking is. Examining the three main
theories is rather like examining three religions: none offers enough support
to rule out the possibility that the others are true. The three main
theories are: 1. Thought doesn't exist; enough said. 2. Thought does exist, but
is contained wholly in the brain; in other words, the actual material of the brain is
capable of what we identify as thought. 3. Thought is the result of some sort of
mystical phenomenon involving the soul and a whole slew of other unprovable
ideas. Since neither reader nor writer is a scientist, for all intents and purposes
we will say only that thought is what we (as Homo sapiens) experience.
So what are we to consider intelligence? The most compelling
argument is that intelligence is the ability to adapt to an environment. A desktop
computer can, say, go to a specific WWW address. But if the address were
changed, it wouldn't know how to go about finding the new one (or even that it
should). So intelligence is the ability to perform a task while taking into
consideration the circumstances that surround completing it.
So now that we have all of that out of the way, can computers think? The
issue is contested as hotly among scientists as the advantages of Superman over
Batman are among pre-pubescent boys. On one side are the scientists who say,
as philosopher John Searle does, that "Programs are all syntax and no semantics."
(Discover, 106) Put another way, a computer cannot actually achieve thought
because it "merely follows rules that tell it how to shift symbols without ever
understanding the meaning of those symbols." (Discover, 106) On the other side
of the debate are the advocates of pandemonium, explained by Robert Wright in
Time thus: "[O]ur brain subconsciously generates competing theories about the
world, and only the 'winning' theory becomes part of consciousness. Is that a
nearby fly or a distant airplane on the edge of your vision? Is that a baby crying
or a cat meowing? By the time we become aware of such images and sounds,
these debates have usually been resolved via a winner-take-all struggle. The
winning theory - the one that best matches the data - has wrested control of our
neurons and thus our perceptual field." (54) So, since our thought is built up
from previous experience, computers could eventually learn to think.
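Wright's pandemonium picture can be sketched in a few lines of code. Everything here is invented for illustration (the "hypotheses" and their scoring functions are stand-ins, not a model of the brain), but it shows the winner-take-all mechanism he describes: several theories score the same evidence, and only the best match reaches "consciousness."

```python
# Toy pandemonium: competing hypothesis "demons" each score the incoming
# evidence, and only the winner becomes the conscious percept.

def perceive(evidence, hypotheses):
    """Score every hypothesis against the evidence; the best match wins."""
    scores = {name: score(evidence) for name, score in hypotheses.items()}
    winner = max(scores, key=scores.get)   # winner-take-all struggle
    return winner, scores

# Two competing theories about a faint buzzing sound (made-up scoring).
hypotheses = {
    "nearby fly":       lambda e: e["loudness"] * e["pitch"],
    "distant airplane": lambda e: e["loudness"] * (1 - e["pitch"]),
}

winner, scores = perceive({"loudness": 0.3, "pitch": 0.9}, hypotheses)
print(winner)  # -> nearby fly: the theory that best matches the data
```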
The event which brought this debate into public scrutiny was Garry
Kasparov, reigning chess champion of the world, competing in a six-game chess
match against Deep Blue, an IBM supercomputer with 32 microprocessors.
Kasparov eventually won (4-2), but the match raised a legitimate question: if a
computer can beat the chess champion of the world at his own game (a game thought
of as the ultimate thinking man's game), is there any question of AI's legitimacy?
Indeed, even Kasparov said he "could feel - I could smell - a new kind of
intelligence across the table." (Time, 55) But eventually everyone, including
Kasparov, realized that what amounts to nothing more than brute force, while
impressive, is not thought. Deep Blue could consider 200 million moves a
second. But it lacked the intuition good human players have. Fred Guterl,
writing in Discover, explains: "Studies have shown that in a typical position, a
strong human player considers on average only two moves. In other words, the
player is choosing between two candidate moves that he intuitively recognizes,
based on prior experience, as contributing to the goals of the position."
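The gap between the two styles of play can be illustrated with a toy search. Nothing here is Deep Blue's actual algorithm (which ran on specialized hardware), and the game tree and its values are invented; "intuition" is crudely stood in for by a heuristic that keeps only the two most promising branches at each position, per the studies Guterl cites.

```python
# Toy contrast: exhaustive brute force vs. pruning to two candidate moves.
# Each node is a pair (heuristic value, children); this is not chess.

def brute_force(node, counter):
    """Exhaustively search every line, the way Deep Blue did."""
    value, children = node
    counter[0] += 1                      # count positions examined
    if not children:
        return value
    return max(brute_force(child, counter) for child in children)

def with_intuition(node, counter, candidates=2):
    """Examine only the two most promising moves at each position."""
    value, children = node
    counter[0] += 1
    if not children:
        return value
    # "intuition" is stood in for by the node's stored heuristic value
    best = sorted(children, key=lambda c: c[0], reverse=True)[:candidates]
    return max(with_intuition(child, counter, candidates) for child in best)

leaf = lambda v: (v, [])
tree = (0, [(3, [leaf(5), leaf(2), leaf(9)]),
            (7, [leaf(1), leaf(8), leaf(4)]),
            (1, [leaf(6), leaf(3), leaf(0)])])

n_brute, n_human = [0], [0]
v_brute = brute_force(tree, n_brute)
v_human = with_intuition(tree, n_human)
print(v_brute, n_brute[0])   # best line found, positions examined
print(v_human, n_human[0])   # same best line, far fewer positions
```

On this invented tree both searches find the same best line, but the candidate-move version looks at roughly half the positions; on a real chess tree the savings compound at every level of depth.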
Seeking to go beyond the brute force of Deep Blue, in separate
projects, are M.I.T. professor Rodney Brooks and computer scientist Douglas
Lenat. The desire to conquer AI is where the similarities between the two end.
Brooks is working on an AI being nicknamed Cog. Cog has
cameras for eyes, eight 32-bit microprocessors for a brain and soon will have a
skin-like membrane. Brooks is allowing Cog to learn about the world like a baby
would. "It sits there waving its arm, reaching for things." (Time, 57) Brooks's
hope is that by programming and reprogramming itself, Cog will make the leap to
thinking. This expectation is based on what Julian Dibbell, writing in Time,
describes as the "bottom-up school. Inspired more by biological structures than
by logical ones, the bottom-uppers don't bother trying to write down the rules of
thought. Instead, they try to conjure thought up by building lots of small, simple
programs and encouraging them to interact." (57)
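The bottom-up idea Dibbell describes can be caricatured in a few lines. The sensor names and behaviors below are invented for illustration (they are not Cog's), but they show the point: no rules of thought are written down, just several tiny programs running side by side, and the arbitration among them produces sensible-looking behavior.

```python
# Bottom-up sketch: small, simple behaviors compete; the highest-priority
# behavior that fires takes control, subsumption-style.

def avoid(sensors):
    # highest-priority reflex: back away from anything too close
    return "back away" if sensors["obstacle"] < 0.2 else None

def reach(sensors):
    # mid-level behavior: reach toward an object in view
    return "reach for object" if sensors["sees_object"] else None

def wander(sensors):
    # default behavior when nothing else fires
    return "wave arm around"

def act(sensors, behaviors=(avoid, reach, wander)):
    """The first behavior that fires wins control of the arm."""
    for behavior in behaviors:
        action = behavior(sensors)
        if action:
            return action

print(act({"obstacle": 0.9, "sees_object": True}))   # reach for object
print(act({"obstacle": 0.1, "sees_object": True}))   # back away
print(act({"obstacle": 0.9, "sees_object": False}))  # wave arm around
```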
Lenat is critical of this type of AI approach. He accuses Brooks of
wandering aimlessly in an attempt to recreate evolution. Lenat has created CYC,
an AI program which uses the top-down theory, which states that "if you can write
down the logical structures through which we comprehend the world, you're halfway
to re-creating intelligence." (Time, 57) Lenat is feeding CYC common sense
statements (e.g., "Bread is food") in the hope that it will make the leap to
drawing its own logical deductions. Indeed, CYC can already pick out a picture of a
father watching his daughter learn to walk when prompted for pictures of happy
people. Brooks has his own criticisms of Lenat: "Without sensory input, the
program's knowledge can never really amount to more than an abstract network."
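Lenat's top-down bet, that written-down logical structures get you halfway to intelligence, can be caricatured with a tiny deduction engine. The facts below are invented illustrations, not entries from CYC's actual knowledge base; the point is that a single hand-written rule lets the program assert statements it was never told directly.

```python
# Top-down sketch: hand-entered common-sense facts plus one rule
# (transitivity of "is a kind of") yield new deductions.

facts = {
    ("bread", "is a kind of", "food"),
    ("food", "is a kind of", "thing people eat"),
    ("rye bread", "is a kind of", "bread"),
}

def deduce(facts):
    """Close the fact base under transitivity of 'is a kind of'."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, _, b) in list(facts):
            for (c, _, d) in list(facts):
                if b == c and (a, "is a kind of", d) not in facts:
                    facts.add((a, "is a kind of", d))
                    changed = True
    return facts

known = deduce(facts)
# Never stated directly, but deduced from the two bread facts:
print(("rye bread", "is a kind of", "thing people eat") in known)  # True
```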
So, what's the answer? The evidence points to the position that AI is
possible. What is our brain but a complicated network of neurons? And what is
thought but response to stimuli? How to go about achieving AI is another
question entirely. All avenues should be explored. Someone is bound to hit on it.