
Post-singularity AI will surpass human intelligence. The evolution of AI could take any direction, some of which may not be preferable for humans. Is it possible to manage the evolution of a super-intelligent AI? If yes, how? One way I can think of is the following: instead of building a mobile AI such as a humanoid, we could keep it immobile, in a box, like current supercomputers. It could be used to solve problems in mathematics, theoretical science, and so on.

akm

4 Answers


Assuming super-intelligence is possible, the answer is probably yes and no.

Yes in Kurzweil-like scenarios, where super-intelligence is an extension of human beings by technology (which we already are, to some extent). Then control follows, because the super-intelligence depends on us. It would extend our capabilities, such as the speed and extent of our processing. Even then, control is debatable: a remote-controlled killing machine would be part of a super-intelligent organism, partially human-"controlled", partially autonomous.

No in "Future of Life Institute"-like scenarios, where super-intelligence is independent from humans. The thinking is simple: What can we hope to do facing someone way more intelligent? The usual parallel is to compare this scenario with the arrival of the "developed" conquistadors in early America. Gunpowder vs. mere raw strength and arrows.

Eric Platon

Competition always gives better results. If machines try to improve themselves, we as human beings will certainly try to improve ourselves.

Rishi Raj

Without going into more detail at the moment (b/c I'm time strapped), I strongly urge you to research the Control Problem.

My own personal view is that humans are more problematic than machines. Machines are at least rational.

To be more specific, I believe human "management" (read as "mis-management") of powerful AI is potentially more of a problem than super-intelligent AI left to its own devices.

Humans are known to abuse power, and history is filled with such examples. Machines, at least, have a clean slate in this regard.

DukeZhou

Yes, it is possible.

When humans were working on the first nuclear bomb, some experts in the field thought that once the reaction went super-critical it would not stop, and would devour the earth. That was a plausible possibility given our understanding of nuclear energy at the time, and we didn't know for sure until we did it.

Some scientists synthesize black-hole-like environments in laboratories. Some experts think that if a certain point is accidentally crossed through ignorance or negligence, we may devour our planet with a self-made black hole.

The situation is the same with AI. Until we actually create a super-intelligent AI, we cannot say with certainty whether it will be controlled or controllable. Until that time comes, the answer to your question is: yes, it is possible, but that does not mean it will, or will not, happen that way.

Nomadyn