
Most people seem to assume that we need a human-level AI as a starting point for the Singularity.

Let's say someone invents a general intelligence that is not quite on the scale of a human brain, but comparable to a rat's. This AI can think on its own, learn, and solve a wide range of problems, and it basically demonstrates rat-level cognitive behavior. It's just not as smart as a human.

Is this enough to kickstart the exponential intelligence explosion that is the Singularity?

In other words, do we really need a human-level AI to start the Singularity, or will an AI with animal-level intelligence suffice?

Anonymous
  • This is an interesting question, but it's difficult to answer without just giving an opinion. So, what kind of answer are you expecting? – nbro Jun 22 '21 at 23:37

2 Answers


do we really need a human-level AI to start the Singularity, or will an AI with animal-level intelligence suffice?

The requirement from the "theory" of the singularity is that:

  • The AI is able to design and implement a better AI than itself.

  • The trait of being able to design a better AI than itself continues to apply in each iteration.

If both of these hold, then each generation of AIs will continue to improve. Singularity pundits often assume this to be an exponential growth curve, e.g. each iteration makes a +50% compound improvement on whatever measure of intelligence is being used. (Aside: personally, I find this assumption of sustained growth a major weakness in the argument that the singularity is meaningfully possible.)
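
To see how strong that assumption is, here is a minimal sketch in Python. The +50% per-iteration gain and the starting score of 1.0 are purely illustrative assumptions, not real measurements of intelligence:

```python
# Minimal sketch of the compound-improvement assumption described above.
# The +50% per-iteration gain and the starting score of 1.0 are purely
# illustrative; no real measure of "intelligence" is implied.

def projected_intelligence(generations: int,
                           start: float = 1.0,
                           gain_per_iteration: float = 0.5) -> float:
    """Score after some number of self-improvement cycles under compound growth."""
    return start * (1.0 + gain_per_iteration) ** generations

if __name__ == "__main__":
    for n in range(0, 11, 2):
        print(f"generation {n:2d}: {projected_intelligence(n):7.2f}")
```

Under that assumption the score grows to roughly 58 times its starting value after ten iterations, which is exactly the kind of runaway curve the aside above is sceptical of.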

The first of these two items is important to your question. For the singularity to work, there is a baseline capability required: the AI needs to be able to design and build other AIs. A general intelligence at the level of an animal - at least any animal intelligence that we are aware of - does not seem capable of this task. It is not even clear that humans are capable of this task when the AI being built has to possess at least some general intelligence.

The term "animal-level intelligence" is tricky. The narrow AIs that we currently build can outperform animals and humans on specific tasks, but in terms of general intelligence they do not score highly (or at all). If we could build one that can outperform humans on a "building an AI" task, it might still have the general intelligence of an animal whilst having the capability to bootstrap an iterative process of self-improvement. This does seem like a very dangerous experiment to try though, with idiot-savant-deity AI and paperclip maximiser scenarios as possible outcomes because the AI's general intelligence lags behind its raw capabilities.

Neil Slater

Consider scientists creating a worm-level AI (which supposedly has already happened - as far as I am aware, a worm's brain has been fully simulated). Now what? Is that simulation of 302 neurons going to rapidly explode and take over the world? Of course not! You need more than just baseline intelligence: you need an AI that not only has the intelligence to learn, but also the infrastructure/capacity to advance to a point where it can then create more infrastructure/capacity for itself. That is when it will be able to explode in a singularity-like event. This is all speculation, of course, so take it as you will.

nbro