Thursday, October 18, 2007

Simple Common Sense

Article:

Artificial Intelligence Is Lost in the Woods

A conscious mind will never be built out of software, argues a Yale University professor.

Excerpt:

The ideas of the philosopher Jerry Fodor make him neither strictly cognitivist nor anticognitivist. In The Mind Doesn't Work That Way (2000), he discusses what he calls the "New Synthesis"--a broadly accepted view of the mind that places AI and cognitivism against a biological and Darwinian backdrop. "The key idea of New Synthesis psychology," writes Fodor, "is that cognitive processes are computational. ... A computation, according to this understanding, is a formal operation on syntactically structured representations." That is, thought processes depend on the form, not the meaning, of the items they work on.

In other words, the mind is like a factory machine in a 1940s cartoon, which might grab a metal plate and drill two holes in it, flip it over and drill three more, flip it sideways and glue on a label, spin it around five times, and shoot it onto a stack. The machine doesn't "know" what it's doing. Neither does the mind.

Likewise computers. A computer can add numbers but has no idea what "add" means, what a "number" is, or what "arithmetic" is for. Its actions are based on shapes, not meanings. According to the New Synthesis, writes Fodor, "the mind is a computer."
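[An aside from me, not part of the article: here is roughly what "shapes, not meanings" looks like in code. The sketch below is my own invention; it does binary addition by nothing but table lookup on the characters '0' and '1'.]

    # Toy illustration: addition done purely by shape-matching.
    # Each rule maps symbol patterns to symbol patterns; nothing here
    # "knows" that the strings stand for numbers.
    RULES = {  # (digit_a, digit_b, carry_in) -> (digit_out, carry_out)
        ('0', '0', '0'): ('0', '0'), ('0', '0', '1'): ('1', '0'),
        ('0', '1', '0'): ('1', '0'), ('0', '1', '1'): ('0', '1'),
        ('1', '0', '0'): ('1', '0'), ('1', '0', '1'): ('0', '1'),
        ('1', '1', '0'): ('0', '1'), ('1', '1', '1'): ('1', '1'),
    }

    def add_bits(a: str, b: str) -> str:
        """Combine two strings of '0'/'1' characters, rightmost first,
        by pure table lookup on their shapes."""
        width = max(len(a), len(b))
        a, b = a.rjust(width, '0'), b.rjust(width, '0')
        carry, out = '0', []
        for da, db in zip(reversed(a), reversed(b)):
            digit, carry = RULES[(da, db, carry)]
            out.append(digit)
        if carry == '1':
            out.append('1')
        return ''.join(reversed(out))

    print(add_bits('0110', '0111'))  # prints '1101' -- the "right answer",
                                     # though no step consulted any meaning

The procedure gets the arithmetic right, and at no point does anything in it depend on what the symbols mean.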

But if so, then a computer can be a mind, can be a conscious mind--if we supply the right software. Here's where the trouble starts. Consciousness is necessarily subjective: you alone are aware of the sights, sounds, feels, smells, and tastes that flash past "inside your head." This subjectivity of mind has an important consequence: there is no objective way to tell whether some entity is conscious. We can only guess, not test.

Granted, we know our fellow humans are conscious; but how? Not by testing them! You know the person next to you is conscious because he is human. You're human, and you're conscious--which moreover seems fundamental to your humanness. Since your neighbor is also human, he must be conscious too.

So how will we know whether a computer running fancy AI software is conscious? Only by trying to imagine what it's like to be that computer; we must try to see inside its head.

Which is clearly impossible. For one thing, it doesn't have a head. But a thought experiment may give us a useful way to address the problem. The "Chinese Room" argument, proposed in 1980 by John Searle, a philosophy professor at the University of California, Berkeley, is intended to show that no computer running software could possibly manifest understanding or be conscious. It has been controversial since it first appeared. I believe that Searle's argument is absolutely right--though more elaborate and oblique than necessary.

Searle asks us to imagine a program that can pass a Chinese Turing test--and is accordingly fluent in Chinese. Now, someone who knows English but no Chinese, such as Searle himself, is shut up in a room. He takes the Chinese-understanding software with him; he can execute it by hand, if he likes.

Imagine "conversing" with this room by sliding questions under the door; the room returns written answers. It seems equally fluent in English and Chinese. But actually, there is no understanding of Chinese inside the room. Searle handles English questions by relying on his knowledge of English, but to deal with Chinese, he executes an elaborate set of simple instructions mechanically. We conclude that to behave as if you understand Chinese doesn't mean you do.
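[Another aside from me: stripped to its bare bones, the room is a lookup from question-shapes to answer-shapes. The toy rule book below is made up, and a real Turing-test-passing program would be enormously more elaborate, but it would still be form-matching of the same kind.]

    # Minimal sketch of the room's mechanics (my own illustration,
    # with an invented rule book): incoming symbol strings are matched
    # against stored shapes and paired answer-shapes are returned.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",
        "今天星期几？": "今天星期四。",
    }

    def chinese_room(question: str) -> str:
        # The operator (or the program) only compares shapes against the
        # rule book; no step depends on what the symbols mean.
        return RULE_BOOK.get(question, "请再说一遍。")

    print(chinese_room("你好吗？"))  # a fluent-looking reply, zero understanding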

But we don't need complex thought experiments to conclude that a conscious computer is ridiculously unlikely. We just need to tackle this question: What is it like to be a computer running a complex AI program?

Well, what does a computer do? It executes "machine instructions"--low-level operations like arithmetic (add two numbers), comparisons (which number is larger?), "branches" (if an addition yields zero, continue at instruction 200), data movement (transfer a number from one place to another in memory), and so on. Everything computers accomplish is built out of these primitive instructions.
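[My aside: a toy fetch-and-execute loop, sketched in Python rather than real machine code, with one instruction of each kind named above. The opcode names and the little program are invented for illustration.]

    def run(program, memory):
        pc = 0                                   # program counter
        while pc < len(program):                 # grab an instruction from memory...
            op, *args = program[pc]
            pc += 1
            if op == "ADD":                      # arithmetic: add two numbers
                a, b, dest = args
                memory[dest] = memory[a] + memory[b]
            elif op == "LESS":                   # comparison: which number is larger?
                a, b, dest = args
                memory[dest] = 1 if memory[a] < memory[b] else 0
            elif op == "BRZ":                    # branch: if a cell is zero, jump
                cell, target = args
                if memory[cell] == 0:
                    pc = target
            elif op == "MOVE":                   # data movement: copy cell to cell
                src, dest = args
                memory[dest] = memory[src]
            elif op == "HALT":                   # ...until something makes it stop
                break

    memory = [7, 5, 0, 0]
    run([("ADD", 0, 1, 2), ("LESS", 1, 0, 3), ("HALT",)], memory)
    print(memory)                                # [7, 5, 12, 1] -- and the machine
                                                 # neither knows nor cares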

So what is it like to be a computer running a complex AI program? Exactly like being a computer running any other kind of program.

Computers don't know or care what instructions they are executing. They deal with outward forms, not meanings. Switching applications changes the output, but those changes have meaning only to humans. Consciousness, however, doesn't depend on how anyone else interprets your actions; it depends on what you yourself are aware of. And the computer is merely a machine doing what it's supposed to do--like a clock ticking, an electric motor spinning, an oven baking. The oven doesn't care what it's baking, or the computer what it's computing.

The computer's routine never varies: grab an instruction from memory and execute it; repeat until something makes you stop.

Of course, we can't know literally what it's like to be a computer executing a long sequence of instructions. But we know what it's like to be a human doing the same. Imagine holding a deck of cards. You sort the deck; then you shuffle it and sort it again. Repeat the procedure, ad infinitum. You are doing comparisons (which card comes first?), data movement (slip one card in front of another), and so on. To know what it's like to be a computer running a sophisticated AI application, sit down and sort cards all afternoon. That's what it's like.
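[One more aside from me: the card routine, written out as the same primitive steps. Plain insertion sort stands in for whatever sorting procedure you like; the card labels are just placeholders.]

    import random

    def sort_cards(deck):                        # plain insertion sort
        for i in range(1, len(deck)):
            card, j = deck[i], i - 1
            while j >= 0 and deck[j] > card:     # comparison: which card comes first?
                deck[j + 1] = deck[j]            # data movement: slide a card over
                j -= 1
            deck[j + 1] = card                   # slip the card in front of another
        return deck

    deck = [rank + suit for suit in "SHDC" for rank in "23456789"]
    random.shuffle(deck)                         # shuffle the deck...
    sort_cards(deck)                             # ...then sort it; repeat ad infinitum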

If you sort cards long enough and fast enough, will a brand-new conscious mind (somehow) be created? This is, in effect, what cognitivists believe. They say that when a computer executes the right combination of primitive instructions in the right way, a new conscious mind will emerge. So when a person executes the right combination of primitive instructions in the right way, a new conscious mind should (also) emerge; there's no operation a computer can do that a person can't.

Of course, humans are radically slower than computers. Cognitivists argue that sure, you know what executing low-level instructions slowly is like; but only when you do them very fast is it possible to create a new conscious mind. Sometimes, a radical change in execution speed does change the qualitative outcome. (When you look at a movie frame by frame, no illusion of motion results. View the frames in rapid succession, and the outcome is different.) Yet it seems arbitrary to the point of absurdity to insist that doing many primitive operations very fast could produce consciousness. Why should it? Why would it? How could it? What makes such a prediction even remotely plausible?


As I've written before:

Consciousness is the one thing a computer does not need in order to operate correctly. So how does proclaiming that my brain is a neural computer explain my consciousness?
