These two ideas operate at very different levels. Gödel's First Incompleteness Theorem tells us that any sufficiently rich formal deductive system capable of basic arithmetic, if it is consistent, contains true statements that cannot be proven within it. Modern AI systems, including geometric deep learning (GDL) models, are not built by explicitly constructing a complete set of axioms and proving theorems symbolically; they learn from data and perform inference after a finite number of computational steps. Thus current AI applications, which are mostly trained ML models, sidestep the proof-theoretic incompleteness issue: their learning objective is soft, pursued through approximate optimization and regularization rather than exact deduction.
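To make the "soft objective after finite steps" point concrete, here is a minimal sketch (a hypothetical example, not any specific system): gradient descent on a regularized squared-error objective runs for a fixed number of steps and returns an approximation, never a proof.

```python
# Sketch: a learner proves nothing; it runs a fixed, finite number of
# optimization steps on a soft (regularized, approximate) objective.
def train_ridge_1d(xs, ys, lam=0.1, lr=0.01, steps=200):
    """Fit y ~ w*x by gradient descent on mean squared error + L2 penalty."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):  # finite computational steps, then stop
        # gradient of (1/n) * sum((w*x - y)^2) + lam * w^2
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        grad += 2 * lam * w  # regularization pulls w toward 0
        w -= lr * grad
    return w

# Data generated by y = 2x; the learned w approximates (never "proves") 2,
# and the regularizer deliberately biases it slightly below the exact value.
w = train_ridge_1d([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

The stopping criterion is a step budget, not a certificate of correctness, which is exactly why proof-theoretic incompleteness never comes into play.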
GDL leverages data structures that live in non-Euclidean spaces, such as graphs or hyperbolic spaces, to capture complex relational and structural information. Knowledge is represented in a distributed, continuous, and often more efficient way rather than through discrete symbolic rules, potentially offering a more flexible epistemic framework than traditional logic-based systems. However, even if GDL models capture knowledge implicitly and robustly, they still run on digital computers and therefore, in principle, remain subject to the broad computability limits that Gödel's theorem, and Turing's computationally equivalent halting problem, point at.
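The core mechanism behind most graph-based GDL models can be sketched in a few lines. This is a hedged illustration of one round of message passing with mean aggregation (a toy version, not any particular library's API): each node's representation is updated from its neighbors, so relational structure, not symbolic rules, carries the information.

```python
# One round of message passing on a graph, the basic operation underlying
# many GDL models. Node features are continuous vectors; the graph topology
# determines how information flows.
def message_passing_step(features, adjacency):
    """features: {node: [floats]}, adjacency: {node: [neighbor nodes]}."""
    new_features = {}
    for node, feat in features.items():
        neighbors = adjacency.get(node, [])
        if not neighbors:
            new_features[node] = list(feat)
            continue
        # aggregate: mean of the neighbors' feature vectors
        agg = [sum(features[nb][i] for nb in neighbors) / len(neighbors)
               for i in range(len(feat))]
        # update: blend the node's own features with the aggregated message
        new_features[node] = [(f + a) / 2 for f, a in zip(feat, agg)]
    return new_features

feats = {"a": [1.0], "b": [0.0], "c": [0.0]}
adj = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
out = message_passing_step(feats, adj)  # a's feature diffuses to b and c
```

In a trained model the aggregation and update steps would involve learned weights and nonlinearities; the point here is only that knowledge propagates as continuous values over relational structure.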
In short, GDL cannot escape Gödel's incompleteness in any formal sense, but it can sidestep it in practical applications, since finite training steps need only accomplish a soft objective. That said, in theoretical research toward advanced AGI, two ideas do engage with Gödel's incompleteness directly: AI-complete problems and the Gödel machine.
In the field of artificial intelligence (AI), tasks that are hypothesized to require artificial general intelligence to solve are informally known as AI-complete or AI-hard. Calling a problem AI-complete reflects the belief that it cannot be solved by a simple, specific algorithm... DeepMind published work in May 2022 in which they trained a single model to do several things at the same time. The model, named Gato, can "play Atari, caption images, chat, stack blocks with a real robot arm and much more, deciding based on its context whether to output text, joint torques, button presses, or other tokens." Similarly, some tasks once considered AI-complete, like machine translation, are now among the capabilities of large language models.
A Gödel machine is a hypothetical self-improving computer program that solves problems in an optimal way. It uses a recursive self-improvement protocol in which it rewrites its own code when it can prove the new code provides a better strategy. The machine was invented by Jürgen Schmidhuber (first proposed in 2003), but is named after Kurt Gödel, who inspired its mathematical foundations... The Gödel machine is often compared with Marcus Hutter's AIXI, another formal specification for an artificial general intelligence. Schmidhuber points out that the Gödel machine could start out by implementing AIXItl as its initial sub-program, and self-modify after it finds proof that another algorithm for its search code will be better... The Gödel machine has limitations of its own, however. According to Gödel's First Incompleteness Theorem, any formal system that encompasses arithmetic is either inconsistent or allows for statements that cannot be proved within it. Hence even a Gödel machine with unlimited computational resources must ignore those self-improvements whose effectiveness it cannot prove.
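The proof-gated self-rewrite protocol, and its incompleteness-based limitation, can be illustrated with a deliberately simplified toy (my own sketch, not Schmidhuber's formal construction): a loop accepts a rewrite of its current strategy only when a machine-checkable certificate establishes the improvement, and silently skips improvements that come with no such certificate, even if they are in fact better.

```python
# Toy illustration of the Gödel machine's accept-only-if-proven rule.
# The "certificate" and its checker stand in for a real proof searcher
# over a formal system; both are simplifications for illustration.
def checks_out(certificate, old_cost, new_cost):
    """Stand-in proof checker: the certificate must re-derive both costs
    and establish a strict improvement."""
    return (certificate.get("old") == old_cost and
            certificate.get("new") == new_cost and
            certificate["new"] < certificate["old"])

def godel_machine_step(current, candidates):
    """current: (strategy_name, cost). candidates: (name, cost, certificate_or_None)."""
    name, cost = current
    for cand_name, cand_cost, cert in candidates:
        if cert is not None and checks_out(cert, cost, cand_cost):
            name, cost = cand_name, cand_cost  # provably better: self-rewrite
        # no certificate -> the improvement may be real, but must be ignored
    return name, cost

state = godel_machine_step(
    ("v1", 10),
    [("v2", 7, {"old": 10, "new": 7}),  # proven improvement: accepted
     ("v3", 3, None)],                  # better in fact, but unprovable here: skipped
)
```

The candidate `v3` is strictly better yet is never adopted, which mirrors the theorem-imposed limitation in the paragraph above: improvements whose effectiveness cannot be proved must be ignored.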