
Back in college, I had a Complexity Theory teacher who stated that artificial intelligence was a contradiction in terms. If it could be calculated mechanically, he argued, it wasn't intelligence; it was math.

This seems to be a variant of the Chinese Room argument, a thought experiment in which a person is locked in a room full of books. The person doesn't understand a word of Chinese but is slipped messages written in Chinese under the door. To answer them, the person must use the books, which contain transformation rules for manipulating Chinese symbols. He or she can apply the rules mechanically but does not understand what is being communicated.
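As a toy illustration of what "applying transformation rules without understanding" could look like, here is a sketch of my own (not part of Searle's setup; the pinyin strings and the two-entry rule book are invented for the example). The room behaves like a pure pattern-lookup program:

#include <stdio.h>
#include <string.h>

/* A hypothetical "rule book": each rule maps an input pattern to a reply.
   The program matches symbols to symbols; nothing in it models meaning. */
struct rule { const char *pattern; const char *reply; };

static const struct rule book[] = {
    { "ni hao",     "ni hao"     },   /* "hello" -> "hello" */
    { "ni hao ma?", "wo hen hao" },   /* "how are you?" -> "I'm fine" */
};

static const char *respond(const char *msg) {
    for (size_t i = 0; i < sizeof book / sizeof book[0]; i++)
        if (strcmp(msg, book[i].pattern) == 0)
            return book[i].reply;
    return "?";   /* no rule covers this message */
}

int main(void) {
    puts(respond("ni hao ma?"));   /* prints "wo hen hao" */
    return 0;
}

The program answers correctly without any representation of what the exchange means, which is exactly the situation of the person in the room.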

Does the Chinese room argument hold? Can we argue that artificial intelligence is merely clever algorithmics?


5 Answers


It depends on the definition of (artificial) intelligence.

The position that Searle originally tried to refute with the Chinese room experiment was the so-called strong AI position: that an appropriately programmed computer would have a mind in exactly the same sense that humans have minds.

Alan Turing tried to give a definition of artificial intelligence with the Turing Test, stating that a machine is intelligent if it can pass the test. (Turing introduced the test in his 1950 paper "Computing Machinery and Intelligence".) I won't explain it in detail here because it is not really relevant to the answer. If you define (artificial) intelligence as Turing did, then the Chinese room experiment is not valid.

So the point of the Chinese room experiment is to show that an appropriately programmed computer is not the same as a human mind, and therefore that Turing's Test is not a good one.

wythagoras

There are two broad types of responses to philosophical queries like this.

The first is to make analogies and refer to intuition; one could, for example, actually calculate the necessary size for such a Chinese room (see the sketch at the end of this answer), and suggest that it exists outside the realm of intuition and thus any analogies using it are suspect.

The second is to try to define the terms more precisely. If by "intelligence" we mean not "the magic thing that humans do" but "information processing," then we can say "yes, obviously the Chinese Room involves successful information processing."

I tend to prefer the second because it forces conversations towards observable outcomes, and puts the burden of defining a term like "intelligence" on the person who wants to make claims about it. If "understanding" is allowed an amorphous definition, then any system can be said to have or lack understanding. But if "understanding" is itself defined in terms of observable behavior, then it becomes increasingly difficult to construct an example of a system that "is not intelligent" and yet shares all the observable consequences of intelligence.
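Returning to the first type of response: to make the size point concrete, here is a back-of-envelope sketch of my own; the character-set size and message length are assumed numbers chosen only for illustration, not figures from Searle or the literature:

#include <stdio.h>
#include <math.h>

int main(void) {
    double chars = 3000.0;   /* assumed working set of Chinese characters */
    double len   = 20.0;     /* assumed maximum message length */
    /* An exhaustive rule book needs one entry per possible message:
       3000^20 = 10^(20 * log10(3000)), about 10^69.5 entries, versus
       roughly 10^50 atoms in the entire Earth. */
    printf("rule book entries: 10^%.1f\n", len * log10(chars));
    return 0;
}

Even under these modest assumptions, the room's rule book dwarfs anything physically realizable, which is the sense in which it "exists outside the realm of intuition".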

Matthew Gray

First of all, for a detailed view of the argument, check out the SEP entry on the Chinese Room.

I consider the CRA an indicator of your definition of intelligence. If the argument holds, then the person in the room does not understand Chinese. However, let's sum up the three replies discussed in the SEP entry:

  1. The man himself doesn't understand Chinese (he wouldn't be able to understand it outside the room), but the system man+room understands it. Accepting this reply suggests that there can exist an intelligent system whose parts aren't themselves intelligent (which can be argued of the human body itself).

  2. The system doesn't understand Chinese, as it cannot interact with the world in the way a robot or a human could (i.e., it cannot learn and is limited in the set of questions it can answer).

  3. The system doesn't understand Chinese (depending on your definition of understanding), and you couldn't say a human performing the same feats as the Chinese room understands Chinese either.

So whether the argument, or a variant of it, holds depends on your definitions of intelligence and understanding, on how you define the system, etc. The point is that the thought experiment is a nice way to differentiate between those definitions (and many, many debates have been held about them), in order to avoid endlessly talking past each other.

jrmyp

Depends on who you ask! John Searle, who proposed this argument, would say "yes", but others would say it is irrelevant. The Turing Test does not stipulate that a machine must actually "understand" what it is doing, as long as it seems that way to a human. You could argue that our "thinking" is only a more sophisticated form of clever algorithmics.

David Vogel

What excellent responses! When Searle published his paper, I was still in college, so my own understanding was limited, and I attempted to resolve "my version" of what he was trying to say. It was a hugely rewarding effort. Since then a lot has happened; at some point I simply dropped my pursuit of working through the CR problem and went on to other problems.

Still, as recently as March 2023, I received an email from my brother-in-law with a link to Jeffrey Kaplan's "most famous thought experiment..." video. Reluctantly, I watched. If you haven't seen this <30-minute lesson, you really owe it to yourself: https://www.youtube.com/watch?v=tBE06SdgzwM

Since then, I've realized that I wasn't the only one who didn't understand what Searle was talking about. Searle himself, though a profound and astute philosopher, did not understand programming well enough to weigh in on AI.

Try this: write some code. (I'll show you my answer.) I wrote a loop in C:

#include <stdio.h>

int main(void) {
    /* print an approximation of pi ten times */
    for (int i = 0; i < 10; i++)
        printf("%f\n", 3.1415);
    return 0;
}
Now answer, "why did you <placeholder>?"
Why did you write a loop?
Why only 10 iterations?
Why not more? Less?
Why print 3.1415?  You could have done anything in that space.
It doesn't matter what the questions are (assuming they relate, in spirit).
It doesn't matter what the answers are.

All substance, no matter how trivial, is not syntax; it is semantics. Almost every line of every program ever written is already loaded with semantics. To limit computer programs to being syntax-only is a world-class oversight. I accepted the premise; lots of people did. The Chinese Room is built on this casually delivered error. So if you found yourself disagreeing with Searle's premise but not really being clear as to why...

Searle paints an arid ghost town, devoid of semantics, and challenges us to spot the spirit of AI on his canvas. No wonder there was an AI winter! Turn away from that erroneous artwork, and see the colors and hear the sounds of the real world. The spirit of AI is in that rich world.

Also: when's the last time you wrote a program that was just a big case statement?