
AI experts like Ben Goertzel and Ray Kurzweil say that AGI will be developed in the coming decade. Are they credible?

DukeZhou
Siddhartha

3 Answers


As a riff on my answer to this question, which is about the broader concern of the development of the singularity, rather than the narrower concern of the development of AGI:

I can say that among the AI researchers I interact with, it is far more common to view the development of AGI in the next decade as speculation (or even wild speculation) than as settled fact.

This is borne out by surveys of AI researchers, with 80% thinking "The earliest that machines will be able to simulate learning and every other aspect of human intelligence" is in "more than 50 years" or "never", and just a few percent thinking that such forms of AI are "near". It's possible to quibble over what exactly is meant by AGI, but it seems likely that for us to reach AGI, we'd need to simulate human-level intelligence in at least most of its aspects. The fact that AI researchers think this is very far off suggests that they also think AGI is not right around the corner.

I suspect that the reasons AI researchers are less optimistic about AGI than Kurzweil or others in tech (but not in AI) are rooted in the fact that we still don't have a good understanding of what human intelligence is. It's difficult to simulate something that we can't pin down. Another factor is that most AI researchers have been working in AI for a long time. There are countless past proposals for AGI frameworks, and all of them have turned out to be not just wrong, but in the end more or less hopelessly wrong. I think this creates an innate skepticism of AGI, which may be unfair. Nonetheless, expert opinion on this one is pretty well settled: no AGI this decade, and maybe not ever!

John Doucette

I wouldn't take anything Ray Kurzweil says especially seriously. Actual AI experts spend large quantities of time reading the existing scientific literature, and working to expand it. Because Kurzweil doesn't spend much of his time actually learning about AI, he has plenty of time in which to talk about it. Loudly. This is harmful to research, because 1) a lot of the uninformed predictions he and others make resemble doomsday scenarios, and 2) the predictions of good things have insanely optimistic time frames attached, and when they don't come true, research funding may be lost because AI hasn't lived up to what people thought it promised.

AI research has been progressing very rapidly in the last decade, but if we're being honest, a lot of the credit for that has to go to the people who develop research-grade graphics cards. The ability to perform massive amounts of linear algebra in parallel has allowed us to use techniques that we've known about for a couple decades, but that were too computationally expensive to be practical at the time. And because those techniques are now practical, a lot of current research is applying those techniques to new problems, and modifying and improving them based on what we've learned. (I don't want to understate the contributions here; there have been a lot of really clever ideas developed in the last ten years. But it's mostly consistent iterative improvement of techniques that already existed, rather than completely revolutionary ideas.)
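
To make that concrete, here is a minimal sketch of my own in NumPy (the shapes and numbers are arbitrary, picked only for illustration): a single dense layer boils down to one big matrix multiply, and massively parallel linear algebra on GPUs is exactly what made stacking many such layers affordable.

    import numpy as np

    # One dense layer with a ReLU activation. The cost is dominated
    # by the matrix multiply x @ W, which GPUs accelerate by running
    # the many independent dot products in parallel.
    rng = np.random.default_rng(0)
    batch, d_in, d_out = 64, 1024, 1024

    x = rng.standard_normal((batch, d_in))   # a batch of inputs
    W = rng.standard_normal((d_in, d_out))   # layer weights
    b = np.zeros(d_out)                      # biases

    h = np.maximum(x @ W + b, 0.0)           # dense layer + ReLU
    print(h.shape)                           # (64, 1024)

    # x @ W alone is batch * d_in * d_out, about 67 million
    # multiply-adds; a deep network repeats this for every layer on
    # every training step, which is why these techniques stayed
    # impractical until hardware caught up.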

To make human-equivalent AIs, we'll probably need to make a few of those giant conceptual leaps. And each of those leaps will then need to be followed up by a decade or two of iterative improvement, because that's how the process works. Case in point: the revolutionary idea that eventually led to all the Deep Learning models out there today was backpropagation (Rumelhart, Hinton & Williams, 1986). First there was the revolutionary idea. It was followed up by a bunch of work that built on it and expanded it in new directions. The work eventually stagnated because of hardware constraints. Then hardware scientists and engineers made advances that let the work continue, and only then did we finally start getting the major applications that we're seeing today.

We know human-level intelligence is possible, since humans manage it. I have little doubt that we'll figure out how to do it with AI eventually (maybe in my lifetime, maybe not). But if you want Kurzweil's predictions to be even remotely plausible, you might want to add a zero to the end of most of his time frames.

Ray

My simple answer is NO.

Let me elaborate. If you observe nature closely, you see that nothing changes drastically all of a sudden, and even when something does, the change doesn't last long.

The field of AI has only just started, and it needs much more evolution to achieve AGI. Although AI is solving many directed problems such as face recognition and speech recognition (the applications are innumerable), all of these can be considered narrow AI: each solves one particular task. For AI to reach the state where it is better than humans in all aspects, we need not only breakthroughs in algorithms but also many more breakthroughs in electronics and physics.

Please read the article linked below. In summary, experts (around 350 of them) estimate that there is a 50% chance that AGI will arrive by 2060. So there is only a slim chance that AGI will become a reality in the next decade.

https://blog.aimultiple.com/artificial-general-intelligence-singularity-timing/

Kartik Podugu