
Could anyone explain this problem I have with the Turing test as performed? Turing (1950), in describing the test, says the computer takes the part of the man and then plays the game as it is played between a man and a woman. In the game, the man and the woman communicate with the hidden judge by text alone (Turing recommends using teleprinters). If the computer takes the part of the man, then it would need an eye and a finger in order to use the teleprinter as the man would have done. But in the Turing test as performed, the machine is not robotic: it has no eyes and no fingers, but rather is wired directly into the judge's terminal. The only thing the machine gets from the judge is what flows down the wire, and what flows down the wire is not text. The human contestant gets the text: the judge's questions print out on the teleprinter paper roll, the man sees the shapes of the text, and he understands the meanings of the shapes. But the computer is never exposed to the shapes of the questions, so how could it possibly know what they mean?

I've never seen anyone raise this problem, so I'm very confused. How could the machine possibly know the judge's questions if it is never exposed to the shapes of the text?

Roddus

2 Answers


(As @nbro writes, your question is not very specific; I'm answering it here as I understand it from the current version.)

In an ideal world, a computer would see written text (via a camera), scan it, understand it, and type a response. I assume Turing didn't go for voice transmission, as voice includes other clues to a person's gender.
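To make that "ideal world" pipeline concrete, here is a minimal sketch (my own illustration, not anything Turing or actual tests specify): it assumes a camera image of the printed question saved as judges_question.png, uses the pytesseract OCR library as one example, and stubs out the "understanding" step with a placeholder respond() function.

```python
# Sketch of the "ideal" pipeline: camera image of the printed question -> OCR -> reply.
# pytesseract is just one example OCR library; respond() is a stand-in for whatever
# language-understanding component would produce the answer.
from PIL import Image
import pytesseract

def respond(question: str) -> str:
    # Placeholder for the part that actually "understands" the question.
    return "I am not sure how to answer that."

image = Image.open("judges_question.png")            # what the camera sees on the paper roll
question_text = pytesseract.image_to_string(image)   # shapes of letters -> character codes
answer = respond(question_text)
print(answer)  # in the full robotic version, this would drive fingers on a teleprinter
```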

However, AI is such a complex field that implementing this would have been impractical until fairly recently. And OCR and robotic movements (typing on a keyboard) are arguably not that relevant to human cognition, so shortcuts are taken in most Turing-like tests that are actually run.
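For contrast, a minimal sketch of the shortcut that is actually used: on the machine's side the judge's question never exists as visible shapes at all, it arrives as character codes down the wire. (The ASCII encoding and the names below are just illustrative.)

```python
# In the test as usually performed, the machine never sees printed shapes.
# The judge's question arrives as a sequence of character codes (here, ASCII bytes).
question = "Are you a machine?"

wire_bytes = question.encode("ascii")   # what actually "flows down the wire"
print(list(wire_bytes))                 # e.g. [65, 114, 101, ...] -- just numbers

received = wire_bytes.decode("ascii")   # the machine reconstructs the character string
# From here on, any processing works on these codes, never on the visual shapes of letters.
print(f"Machine received: {received!r}")
```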

Update: Also, note that the original Turing test (1950) was based on a party game about distinguishing between a man and a woman (who were not visible). This imitation game was later generalised to a guessing game between a human and a machine.

Oliver Mason

There is no form of OCR that assigns "meaning" when it processes the visual input of letters and words into the computer's representation of those same words (e.g. ASCII). A robot with a camera and keyboard therefore does not solve the problem you have raised. You need to look elsewhere for answers, and the state of AI today is that no one has strong evidence for how meaning arises within an intelligent system. There is plenty of writing on the subject, though.

I think you are trying to understand how and where meaning may arise in any system (biological or machine). There is a lot of thought around this subject in AI philosophy and research. A good place to start might be John Searle's Chinese Room argument, which broadly agrees with you that a basic chatbot/discussion program does not prove intelligence, but for different reasons than the "shapes of the text", which in my opinion is not really an issue at all. Searle's argument is by no means the end of the matter, and plenty has been written both in rebuttal and in support of it.

The real issue is the symbol grounding problem, which applies just as much to visual images of text, or to any system that refers to one entity by using another, entirely different one.
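A toy illustration of the grounding problem (my own example, not something from the literature): if every symbol is defined only in terms of other symbols, following the definitions never bottoms out in anything outside the symbol system.

```python
# Toy "dictionary-only" system: every word is explained using other words.
# No entry ever points at an actual cat, sound, or sensation -- the symbols
# are never grounded in anything outside the symbol system.
definitions = {
    "cat": "a small feline animal",
    "feline": "relating to cats",
    "animal": "a living creature",
    "creature": "an animal",
}

def explain(word: str, depth: int = 3) -> None:
    """Follow definitions a few steps; we only ever reach more words."""
    for _ in range(depth):
        meaning = definitions.get(word)
        if meaning is None:
            print(f"{word}: (no definition -- still just a symbol)")
            return
        print(f"{word}: {meaning}")
        word = meaning.split()[-1].rstrip("s")  # crude hop to yet another symbol

explain("cat")
```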

Various philosophies and engineering ideas have been proposed that could potentially address the grounding problem. These include:

  • Behaviourism. It is not important what goes on inside an intelligent system, only that its externally measurable behaviours are those of an intelligent system. This matches the idea of the Turing Test quite closely, but many people find it unsatisfying due to their personal experience of self-awareness, subjective experience and consciousness. It is in some ways the "shut up and calculate" of AI.

  • Embodiment and multi-modal experience. If an agent can experience the world directly and associate symbols with the relevant experiences (the word "cat" with seeing and hearing cats), then it would be intelligent in the same way as we are (see the sketch after this list).

  • Missing components. Humans (and sometimes animals) possess some additional system that cannot be replicated by current computing and robotic devices, even if they were made thousands of times more powerful. The missing component might be something quantum in our cells (Penrose, The Emperor's New Mind) or "the soul". This is also a common depiction of robots and AI in science fiction, and there is a lot of popular support for it as a philosophy, despite weak evidence.

  • Complexity and power. We can currently replicate the mental power of small insects on computing devices. When we scale up with more powerful computers, larger neural networks, and perhaps a bit of special extra structure (that we don't know yet), then we will hit a level of complexity where true intelligence will emerge. You could view recent very large language models such as GPT-3 as exploring this idea.
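As a very rough sketch of the embodiment/multi-modal idea from the second bullet above (the feature vectors and the similarity matching are purely a toy of my own, not an actual grounding system): the symbol "cat" is associated with sensory feature vectors rather than with other symbols alone.

```python
# Toy sketch of multi-modal grounding: the symbol "cat" is tied to feature vectors
# extracted from images and sounds of real cats, i.e. to sensory experience,
# rather than defined only in terms of other symbols.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Pretend these came from a vision model and an audio model respectively.
grounding = {
    "cat": {"vision": [0.9, 0.1, 0.0], "audio": [0.8, 0.2, 0.1]},
    "dog": {"vision": [0.1, 0.9, 0.2], "audio": [0.2, 0.7, 0.3]},
}

def label_from_experience(vision_features, audio_features) -> str:
    """Pick the symbol whose stored sensory associations best match a new experience."""
    def score(word):
        g = grounding[word]
        return cosine(vision_features, g["vision"]) + cosine(audio_features, g["audio"])
    return max(grounding, key=score)

# A new sensory experience (seeing and hearing an animal) gets mapped to a symbol.
print(label_from_experience([0.85, 0.15, 0.05], [0.75, 0.25, 0.1]))  # -> "cat"
```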

Neil Slater