
Some argue that humans are somewhere in the middle of the intelligence spectrum, while others say that we are only at the very beginning of the spectrum, with much more potential ahead.

Is there a limit to the increase of intelligence? Could a general intelligence progress indefinitely, provided enough resources and armed with the best recursive self-improvement algorithms?

nbro
Pre-alpha

5 Answers


To speak of a "maximum achievable intelligence", you first of course have to define "intelligence" well enough to be able to rank things by it. There is no widely supported theory that is able to do so.

You might like to look into AIXI, as described by Marcus Hutter in a video lecture. It is an attempt to formalise, mathematically, intelligent agents that make optimal decisions. There are also written introductions available. Of course, this is only one of many possible frameworks for describing intelligent agents.

One interesting implication of AIXI is that intelligence, in terms of the ability to learn from and exploit an environment, is upper-bounded. In principle there is a ceiling due to uncertainty about what can be inferred from the data a rational agent possesses.

However, this ceiling concerns only the specific ability to extract actionable information from data the agent already has access to in order to solve decision problems. How much data any entity can collect, store and process remains an open question, and this capacity to acquire and retrieve relevant knowledge is something many would count as part of the "intelligence score" when comparing agents.

There are also theoretical limits on computation from physics: for example, computation fundamentally requires energy, energy has a mass equivalence, and enough concentrated mass forms a black hole. This sets a very high upper bound, and real-world structural and engineering issues will likely set in well before it is reached. Still, combined with the above limits on decidability and on practical access to data, it does seem there should be a ceiling.
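To get a feel for how high these physical ceilings sit, here is a rough back-of-envelope sketch in Python (my illustration, not part of the original answer) of two standard bounds: Landauer's principle for the minimum energy needed to erase one bit, and Bremermann's limit on the bit-processing rate of a kilogram of matter. The 300 K, 1 kg scenario is purely illustrative.

```python
import math

# Standard physical constants (SI units); the scenario itself is illustrative.
k_B = 1.380649e-23   # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
T = 300.0            # room temperature, K

# Landauer's principle: minimum energy to erase one bit of information.
landauer_joules_per_bit = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {landauer_joules_per_bit:.3e} J per bit erased")

# Bremermann's limit: maximum bit-processing rate of matter, c^2 / h bits/s/kg,
# derived from mass-energy equivalence and quantum uncertainty.
mass_kg = 1.0
bremermann_bits_per_sec = mass_kg * c**2 / h
print(f"Bremermann limit for 1 kg: {bremermann_bits_per_sec:.3e} bits/s")
```

That is roughly 10^50 bits per second per kilogram: an enormous bound, but a bound nonetheless.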

Neil Slater

Absolutely, regardless of how you define "intelligence".

  • If intelligence is merely information, as in "a piece of intelligence" (i.e. data or an algorithm), then its structure is finite. (Structure, here, refers to the information itself, which in either case may be reduced to a single string.)

See: Turing Machine.

  • If intelligence is the rational capability of an automaton, it is likewise bounded by the tractability of the decision problem, the structure of the algorithm, and the time available to make the decision.

See: Bounded Rationality

Both answers are really the same, because "intelligence" in the first sense is likewise limited by physical constraints on information density, the sophistication of the algorithm, and time (see the sketch after the links below).

See also: Computational complexity of mathematical operations, Computational complexity, Time Complexity
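As a toy illustration of the first point (my sketch, not from the original answer): any agent whose description fits in at most n bits is one of only finitely many possible agents, because there are only 2^(n+1) − 2 non-empty binary strings of length at most n.

```python
from itertools import product

# Count every non-empty binary string of length <= max_bits by brute
# enumeration; each such string is one possible "structure" (data or
# algorithm), so the space of bounded-size intelligences is finite.
def count_descriptions(max_bits: int) -> int:
    return sum(
        sum(1 for _ in product("01", repeat=length))
        for length in range(1, max_bits + 1)
    )

for n in (4, 8, 16):
    count = count_descriptions(n)
    assert count == 2 ** (n + 1) - 2   # closed form for the same count
    print(f"descriptions of <= {n} bits: {count:,}")
```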

DukeZhou

The answers given previously are correct for AI, which can indeed process more information with more computational power. However, actual reasoning ability of the kind humans have is not defined by Church-Turing, and AIXI has nothing to do with human reasoning. A pretty good clue to this fact is that AIXI has been around since 2005, and to date there are no machines based on it that have human-level reasoning.

For example, an interesting topic in AI is natural language processing (NLP). I can speak into my Android phone and it will transcribe my speech into text, which seems like an amazing advance. However, this is what a human would do if they heard a foreign language, made a phonetic transcription of what they heard, and then looked up a phonetic chart to match the sounds with words. It would all take place without any actual understanding of what they were hearing. That is how it works on my phone, much like Searle's Chinese Room.

Humans are quite different because they actually understand words. The equivalent in AI would be natural language understanding (NLU). No AI today has NLU, and no theory within AI explains how to construct it; there isn't any research on AI NLU because there is no starting point. A fact that most AI enthusiasts don't like to admit is that even the smartest AI systems are routinely outclassed by rats, and even by six-month-old babies, in terms of comprehension. AI systems have no comprehension or understanding, and without this they have no actual reasoning ability. Human-level comprehension falls under a completely different theory from the computational derivatives of Church-Turing.

Can you make a human-level machine agent smarter by giving it more computational power? No, because you run into all sorts of problems that would take a few book chapters to explain. There are enhancements you can make, but these have limits. If you go by a standard-deviation-15 IQ scale like the Wechsler or the 5th edition of the Stanford-Binet, the chance of having an IQ of 195 is about 1 in 8 billion, so this roughly sets the upper bound of human ability. We could probably see machine agents with an IQ of 240, but not 500 or 1,000.

I do understand the confusion concerning computation, since exhaustive routines in AI are time-limited. For example, our dim-witted chess programs play by laborious trial and error. They don't actually get smarter with more computational power; they are just able to eliminate bad moves faster.

Let me give a human example. Say I can do 5 math problems of a given complexity per hour using pencil and paper. I add a slide rule and my rate rises to 10 problems per hour. I switch to a calculator and it rises to 20; I start using a spreadsheet and hit 30 per hour. I am not actually 6x smarter than I was with pencil and paper.
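To make the chess point concrete, here is a toy sketch (mine, not from the original answer) comparing plain exhaustive negamax with alpha-beta pruning on a simple Nim-style game. Both return exactly the same game value, but pruning visits far fewer positions: more efficient elimination of bad moves, not a different quality of answer.

```python
# Game: players alternate taking 1-3 stones; whoever takes the last stone wins.

def negamax(stones, nodes):
    """Exhaustive search. Returns +1 if the player to move can force a win."""
    nodes[0] += 1
    if stones == 0:
        return -1  # the previous player took the last stone; the mover has lost
    return max(-negamax(stones - t, nodes) for t in (1, 2, 3) if t <= stones)

def negamax_ab(stones, nodes, alpha=-1, beta=1):
    """Same search with alpha-beta pruning; skips provably irrelevant moves."""
    nodes[0] += 1
    if stones == 0:
        return -1
    best = -1
    for t in (1, 2, 3):
        if t > stones:
            break
        best = max(best, -negamax_ab(stones - t, nodes, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:  # a forced win is found: remaining moves can't do better
            break
    return best

plain, pruned = [0], [0]
print("value (plain): ", negamax(18, plain), " nodes:", plain[0])
print("value (pruned):", negamax_ab(18, pruned), " nodes:", pruned[0])
```

The pruned search chooses with the same "intelligence" as the exhaustive one; it just wastes less time on lines that cannot matter.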

So, to answer the question: it is not possible to continuously increase intelligence, even with unlimited computational power. However, it should be possible for machine intelligence to exceed human intelligence. One final thing I should mention is that this type of theory is quite good at organizing knowledge in a way that current big-data methods do not, so it is probable that the same theory that would allow a machine IQ of 240 would also provide enough assistance to a human to function at the same level.

scientious

It depends on the specific definition of intelligence, but with a definition based on proficiency at a set of arbitrary tasks the answer is yes. For instance, if we take intelligence to be the multiplicative function

$$ I = f(\text{Accuracy}) \cdot g(\text{Resource consumption}) \cdot h(\text{Generalisation abilities}) $$

there is an inherent limit given by the maximum values obtained by the individual functions.

  • Accuracy on the tasks is limited by the maximum obtainable score on those tasks.
  • Minimal resource consumption is limited by physics and information theory (entropy of the tasks, limits of computation, quantity of matter-energy in the universe).
  • Generalisation abilities are capped at intelligences able to solve arbitrary tasks.

Moreover, this limit is not simply the product of the maxima of the individual functions, since there are inherent tradeoffs (to increase accuracy, more resources are needed, and so on).
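As a quick numerical sketch of that tradeoff (my illustration; the component shapes are invented, only the multiplicative form comes from this answer): take an accuracy term that saturates with resources, an efficiency term that penalises consumption, and a capped generalisation term. The product then attains a finite maximum at a finite resource level.

```python
import numpy as np

resources = np.linspace(0.01, 100, 10_000)

f_accuracy = 1 - np.exp(-0.1 * resources)   # saturates: bounded above by 1
g_efficiency = 1 / (1 + 0.05 * resources)   # penalises resource consumption
h_generalisation = 0.9                      # capped ability, here held fixed

I = f_accuracy * g_efficiency * h_generalisation
best = resources[np.argmax(I)]
print(f"max I = {I.max():.3f} at resources = {best:.2f}")
```

Pushing resources higher eventually lowers I in this model, so the maximum is strictly below the product of the individual maxima.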

The definition of intelligence above is inspired by the Pragmatic General Intelligence Index.

Nevertheless, if our definition of intelligence disregards resource consumption, our universe is infinite, and we allow tasks with arbitrarily high scores, then there is no upper bound on intelligence.

Rexcirus

Actually, these aspects are part of some books I am working on right now. As Jeevan says, intelligence is bounded by the laws of physics. I see it that way too, but in the case of humans, as I look at it, we are at the very first step of an intelligence that can self-reflect and question its own capacity for rational thinking and how far it can go and develop.

And I also use the black hole (maximum density, maximum energy concentration) as a first upper limit. An intelligence that could operate at the Planck level could store all the information in the world in a volume the size of a few grains of sand. So, given enough density and optimisation of information storage and exchange, the upper limit is far, far beyond what humans can process.
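As a rough sanity check on the "few grains of sand" claim (my calculation, not part of the original answer), the Bekenstein bound caps the information content of a sphere of radius R containing energy E at I ≤ 2πRE/(ħc ln 2) bits:

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

R = 1e-3                 # ~1 mm grain of sand (illustrative)
m = 1e-6                 # ~1 mg mass (illustrative)
E = m * c**2             # total mass-energy of the grain

bits = 2 * math.pi * R * E / (hbar * c * math.log(2))
print(f"Bekenstein bound for a 1 mm, 1 mg grain: {bits:.2e} bits")
# ~1e34 bits, vastly more than the roughly 1e24 bits humanity has ever stored.
```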

But I take the analysis further and go beyond black hole theory. Of course this is speculation, but we still don't know everything: what might be possible in more than four dimensions (the human brain works with three dimensions, plus time to think)?

As I see it, in the real world the upper limit is so far beyond our understanding that we cannot yet grasp it.