
Is there empirical evidence that some approaches to achieving AGI will definitely not work? For the purposes of this question, the system should at least be able to learn and solve novel problems.

Some possible approaches:

  1. A Prolog program
  2. A program in a traditional procedural language such as C++ that doesn't directly modify its own code
  3. A program that evolves genetically in response to selection pressures in some constructed artificial environment
  4. An artificial neural net
  5. A program that stores its internal knowledge only in the form of a natural human language such as English, French, etc. (which might give it desirable properties for introspection)
  6. A program that stores its internal knowledge only in the form of a symbolic language which can be processed unambiguously by logical rules
persiflage

1 Answer


Very interesting question. Assuming that the programming languages used are powerful enough (say, Turing complete), all of the above actually should be able to lead to an AGI. The difference is in how efficiently they can do it, both in terms of the number of computations required and the length of the program.

So the question could be rephrased as: which approaches cannot lead to an AGI using fewer than X computational resources and with a program shorter than Y characters? The second part is essentially asking for the Kolmogorov complexity of an AGI in that language, which is uncomputable. Since we cannot find the shortest program, I don't think we can draw conclusions about the maximum achievable program efficiency either. In summary, I don't see a way to rule out any of those approaches (but I would be very happy to be proven wrong).
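As an aside on why the length question is out of reach: while the Kolmogorov complexity K(x) itself is uncomputable, any lossless compressor gives a computable *upper bound* on it, since the compressed string plus a fixed decompressor is a program that outputs x. A minimal Python sketch (my illustration, not part of the original answer; the function name and test strings are arbitrary):

```python
import zlib

def complexity_upper_bound(data: bytes) -> int:
    """Upper-bound the Kolmogorov complexity of `data` by the length of
    its zlib-compressed form: K(data) <= len(compress(data)) + O(1),
    where the O(1) accounts for a fixed decompressor program."""
    return len(zlib.compress(data, 9))

# A highly regular string compresses far better than a less regular one,
# so its upper bound is much smaller -- but no compressor can tell us
# the true minimum, which is the uncomputable part.
regular = b"a" * 1000
mixed = bytes(range(256)) * 4
print(complexity_upper_bound(regular), complexity_upper_bound(mixed))
```

The gap between such upper bounds and the true minimum program length is exactly what makes "is there an AGI program shorter than Y characters in this formalism?" unanswerable in general.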

Rexcirus