
Many experts seem to think that artificial general intelligence (AGI), i.e. intelligence at the level of humans, is possible and likely to emerge in the near-ish future. Some go further and say that superintelligence (well above the level of AGI) will appear soon after, through mechanisms like recursive self-improvement from AGI (from a survey).

However, other sources argue that such superintelligence is unlikely or impossible (example, example).

What assumptions do those who believe in superintelligence make? Its emergence is generally regarded as low-probability but possible (e.g. here). However, I can't find an in-depth analysis of the assumptions made when positing the emergence of superintelligence. Which specific assumptions are unlikely, and what have those who believe its emergence is guaranteed gotten wrong?

If the emergence of superintelligence is to be seen as a low-probability future event (on par with asteroid strikes, etc.), which seems to be the dominant and most plausible view, which assumptions exactly make it low-probability?

user35673

1 Answer


I am not an expert on this topic, but I will provide some information that could be useful.

I think the first, and maybe trivial, assumption people make when they say that AGI or SI can emerge is that general intelligence (whatever its definition) is computable, i.e. there is a Turing machine or, more generally, a mathematical model of computation that can execute an algorithm representing general intelligence. I don't know how likely it is that the human mind (or any other general intelligence) is computable, or to what extent, but the so-called computational theory of mind (CTM) goes in this direction.
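To make the notion of computability concrete, here is a minimal Turing machine simulator in Python (my own illustrative sketch, not tied to any particular formalization of intelligence): a function is computable in this sense exactly when some transition table like the one below can evaluate it.

```python
def run_tm(transitions, tape, state="start", blank="_", max_steps=1000):
    """Simulate a Turing machine.

    transitions: dict mapping (state, symbol) -> (new_state, write_symbol, move),
    where move is -1 (left) or +1 (right). The machine halts when no
    transition is defined for the current (state, symbol) pair.
    """
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break  # no rule applies: halt
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    if not tape:
        return ""
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1))

# Example machine: scan right, flipping 0 <-> 1, halt at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}

print(run_tm(flip, "0110"))  # -> 1001
```

The computability assumption is that some (vastly larger) table of this kind could, in principle, produce generally intelligent behavior.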

An assumption related to the emergence of SI is that recursive self-improvement is physically possible. I also don't know exactly how likely or unlikely recursive self-improvement is, but my limited knowledge of thermodynamics (and physics in general) suggests that it will not be possible, at least not at the pace or in the way people claim. I am not saying that we will not see more progress in the coming years (we will), but this does not mean we will be able to create an AGI. People often overestimate themselves and their intelligence.
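To make the "pace" objection concrete, here is a deliberately simple toy model (entirely my own construction, with made-up numbers) in which each round of self-improvement gains only a fraction of the remaining headroom below a fixed physical ceiling. Under such diminishing returns, capability saturates instead of exploding.

```python
def self_improve(capability, ceiling=100.0, efficiency=0.5, steps=50):
    """Toy model of recursive self-improvement under a physical ceiling.

    Each step's gain grows with current capability (smarter systems improve
    faster) but shrinks as the remaining headroom below the ceiling closes.
    All parameter values are illustrative, not empirical.
    """
    history = [capability]
    for _ in range(steps):
        gain = efficiency * capability * (1 - capability / ceiling)
        capability += gain
        history.append(capability)
    return history

h = self_improve(1.0)
print(round(h[-1], 2))  # approaches the ceiling; it does not diverge
```

Whether real physics imposes such a ceiling, and where it lies, is exactly the contested assumption; the sketch only shows that "self-improving" does not by itself imply "unbounded".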

A third assumption relates to current achievements in the field of artificial intelligence. Many people claim that there has been a lot of progress in recent years and that there is no reason to believe it will not continue. However, history tells us that AI scientists' predictions about the future of the field have often been wrong. There have been at least two AI winters because of such wrong predictions and unmet expectations.

Many people also refuse to exclude the possibility of the emergence of AGI or SI only because it has not yet been proven impossible.

Another assumption is that there won't be any catastrophe (as COVID-19 could potentially be) that significantly slows down general scientific progress.

nbro