
Here, intelligence is defined as any analytic or decision-making process, regardless of strength (utility), and, potentially, any computational process that produces output, regardless of the medium.

The idea that AI is part of an evolutionary process, with humans as merely a vehicle for the next dominant species, has been a staple of science fiction for many decades. It informs our most persistent mythologies related to AI. (Recent examples include Terminator/Skynet, Westworld, Ex Machina, and Alien: Covenant.)

What I'm driving at here is that, although the concept of neural networks has been around since the 1940s, they have only recently demonstrated strong utility, so it's not unreasonable to identify Moore's Law as the limiting factor (i.e. it is only recently that we have had sufficient processing power and memory to achieve this utility).

But the idea of AI has been ingrained in information technology since automation first became possible. Babbage's Difference Engine led to the idea of the Analytical Engine, and the game of Tic-Tac-Toe was proposed as a means of demonstrating intelligence.
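To make the Tic-Tac-Toe point concrete: the game admits exhaustive analysis, so a short program can make provably optimal decisions. Below is a minimal minimax sketch (my own illustration, not any historical proposal); the function names and board encoding are choices of this example.

```python
# Minimax for Tic-Tac-Toe: exhaustive game-tree analysis yields optimal
# decisions. The board is a 9-character string, ' ' for empty cells.

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w == player:
        return 1, None
    if w is not None:
        return -1, None          # the opponent has already won
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None           # board full: a draw
    other = 'O' if player == 'X' else 'X'
    best = (-2, None)
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        # The opponent's best outcome is our worst (negamax formulation).
        score = -minimax(child, other)[0]
        if score > best[0]:
            best = (score, m)
    return best

# From an empty board, perfect play by both sides is a draw:
score, move = minimax(' ' * 9, 'X')  # score == 0
```

Notice that the "decision" here is nothing but a deterministic search over all futures, which is exactly the tension the answer below explores.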

What I'm driving at here is that analysis and decision-making are so fundamental to information technology that it is difficult to find functions that don't involve them. And, if the strength of analysis and decision-making is largely a function of computing power:

Can intelligence be understood as a naturally occurring function of information technology?

nbro
DukeZhou

1 Answer


Maybe, but it depends to a very large degree on the choice of definition.

One of the biggest challenges for AI researchers, neuroscientists, philosophers, and psychologists has been that the layperson's understanding of intelligence does not appear to correspond to a well-defined concept. This point was most famously exploited by John R. Searle in his paper "Minds, Brains, and Programs".

Consider your definition again carefully, and notice that "decision" is left unbounded. Does a rock rolling down a hill make a decision when it 'decides' to roll to the left? If not, Searle and his ilk would argue that a computer isn't making a decision when it decides to execute the next instruction. It is simply obeying deterministic laws of physics.
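To make that argument concrete, here is a sketch of the kind of "decision" at issue (the function name and threshold are hypothetical, invented for illustration): the branch taken is fully fixed by the input and the code, so in what sense is anything being chosen?

```python
# A "decision" that is entirely determined by its input. Searle's camp
# would say this is no more a decision than the rock rolling left: given
# the same input, the same branch is taken every time, by physical law.

def decide(sensor_reading):
    # Nothing here is free to do otherwise; the outcome is a pure
    # function of the argument and these constants.
    if sensor_reading > 0.5:
        return "turn left"
    return "turn right"
```

Whether you call this a decision is precisely the definitional choice the question turns on.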

To escape this problem and still claim that machines are intelligent, you can either choose to believe that rocks make decisions (a view called panpsychism), or that we should behave as though rocks make decisions because doing so gives us a lot of explanatory power (the intentional stance), or claim that humans are no different from the rock or the computer (eliminative materialism). The last option, though, leaves you with the problem of explaining subjective experience.

An excellent book that covers the mainstream arguments in this vein is Haugeland's Mind Design II. All the arguments I've outlined here are covered in great detail within. Among modern AI researchers, I'd say there is a split between those who embrace something like the intentional stance and those who embrace eliminative materialism.

John Doucette