
I've read Penrose's The Emperor's New Mind (ENM, 1989), and I am now halfway through Shadows of the Mind (SM, 1994). I read ENM in 2021 and found his argument compelling in its generality and clarity of reasoning. For those unfamiliar, he argues that some parts of human consciousness and awareness must be non-computational, and therefore that no algorithm could ever achieve awareness. His argument is based on Gödel's incompleteness theorems or, equivalently, Turing's halting problem. This is directly opposed to the "strong AI" stance that human consciousness is just a highly complex computation which could, in principle, be implemented on a Turing machine. He further speculates that the non-computational nature of human consciousness might be related to quantum-mechanical action in the brain. See here for a more detailed explanation of this argument.

ChatGPT and other LLMs were released between my reading of ENM and SM, and now I am feeling more skeptical. The inference process of these neural networks is perfectly computable. This is not to say that LLMs have achieved, or are even capable of achieving, perfectly sound reasoning sufficient to correctly ascertain truth in every instance. They do, however, seem to have some higher-order abilities. For example, you could ask one whether a particular algorithm halts, and without that specific algorithm ever appearing in its training data, it could determine that it does or does not, and tell you why. Given that these tools are only in their infancy, it is at least conceivable that they could achieve truth-evaluating abilities on par with humans. This, in my view, might weaken Penrose's argument, as it would provide an algorithmic procedure that can ascertain truth at least as often as humans can.
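As a concrete, hypothetical illustration of the kind of prompt I have in mind (not one I am claiming to have tested), consider asking whether the following function halts for all non-negative integer inputs:

```python
def gcd(a: int, b: int) -> int:
    # Euclid's algorithm: on each iteration b is replaced by a % b,
    # which is strictly smaller than b and never negative,
    # so the loop must terminate for any non-negative integers a, b.
    while b != 0:
        a, b = b, a % b
    return a
```

A human can answer "yes" by noticing that the second argument strictly decreases while staying non-negative, and LLMs will often reproduce exactly this kind of termination argument even for functions they have presumably never seen verbatim.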

Penrose's argument rests heavily on identifying specific cases in which human reasoning can ascertain the truth of a statement that a Turing machine is provably incapable of establishing. Therefore, he argues, the action of human consciousness must lie outside the space of all possible Turing machines. In light of the surprising abilities of LLMs, is Penrose giving the human mind too much credit?
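For context, the non-computability result he leans on can be sketched roughly as follows. This is the standard diagonal argument for the halting problem, not Penrose's exact formulation in SM, and `halts` is a hypothetical decider that the argument shows cannot exist:

```python
def halts(program_source: str, program_input: str) -> bool:
    """Hypothetical decider: True iff the program halts on the given input.
    The construction below shows no such total, correct function can exist."""
    raise NotImplementedError

def diagonal(program_source: str) -> None:
    # Do the opposite of whatever the supposed decider predicts
    # about a program run on its own source code.
    if halts(program_source, program_source):
        while True:  # loop forever if the decider says "halts"
            pass
    # otherwise return immediately, contradicting a "loops forever" verdict
```

Running `diagonal` on its own source code contradicts the decider either way, so no single algorithm can decide halting in general; Penrose's claim is that human mathematical insight is nevertheless not captured by any one such fixed algorithm.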

I have scoured the web to see if anyone has asked Penrose about this, and the only thing I can find is his interview with Jordan Peterson, where Peterson asks about image-recognition algorithms, which Penrose dismisses as unrelated to his argument. I'd love to hear more thoughts on this.
