
In a recent arXiv paper (accepted by AAAI-25):

https://arxiv.org/abs/2412.11855 (Title: A Theory of Formalisms for Representing Knowledge)

the authors present an intriguing result, claiming that all universal (or natural and equally expressive) knowledge representation formalisms are recursively isomorphic. To establish this result, they develop a framework that captures the class of possible knowledge representation formalisms and defines what constitutes a universal knowledge representation formalism.
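As I understand it (this is my own recursion-theoretic paraphrase, not the paper's exact definitions), two formalisms $F_1$ and $F_2$, with semantic maps $\mathrm{sem}_1$ and $\mathrm{sem}_2$ from representations to the knowledge they denote, are recursively isomorphic if there is a computable bijection $\iota$ from representations of $F_1$ to representations of $F_2$ such that

$$\mathrm{sem}_2(\iota(r)) = \mathrm{sem}_1(r) \quad \text{for every representation } r \text{ of } F_1,$$

in the spirit of Rogers' isomorphism theorem for acceptable numberings of the partial computable functions.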

I am wondering if this work has any limitations. More specifically:

  1. Is the framework proposed for knowledge representation formalisms defined in a reasonable and comprehensive manner?

  2. Is the claim of recursive isomorphisms among universal (or natural and equally expressive) knowledge representation formalisms correct? If so, what are its implications for AI?

  3. In the conclusion of this paper, the authors claim:

For the debate between symbolic AI and connectionist AI, the existence of recursive isomorphisms between knowledge representation formalisms (KRFs) implies that for any knowledge operator (e.g., gradient descent) in one KRF, we can effectively find an operator in another (isomorphic) KRF to perform the same transformation. From a theoretical perspective, all these representation methodologies either pave the way to AGI or none, with core challenges being universal and advancements in one methodology benefiting others.

However, it appears that knowledge representation formalisms based on neural networks have achieved a dominant position in contemporary AI. Therefore, does the above claim potentially overstate its significance?
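
To make the claim in point 3 concrete (again my own reading, reusing the bijection $\iota$ sketched above): transferring a knowledge operator $T_1$ from one KRF to the other is just conjugation,

$$T_2 = \iota \circ T_1 \circ \iota^{-1},$$

so a gradient-descent step on a neural representation would correspond to some effectively computable operator on the isomorphic symbolic representation. The theorem guarantees that $T_2$ exists and is computable, but, as far as I can tell, it says nothing about whether $T_2$ is efficient or natural, which is part of what prompts my doubt.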

Jorge

2 Answers


I have carefully examined all the technical details of the paper and found no issues. The main conclusion can be seen as a generalization of Rogers' Equivalence Theorem, and the primary difficulty in the proof lies in establishing the existence of universal knowledge representation formalisms, which is clearly nontrivial. Furthermore, it is worth mentioning that this result has been published in the proceedings of AAAI-25.
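
For readers less familiar with the recursion-theoretic background, the classical result being generalized is, roughly, the following (I am quoting the standard form for acceptable numberings, not the paper's generalized statement): if $\varphi$ and $\psi$ are any two acceptable numberings of the partial computable functions, then there is a computable bijection $f : \mathbb{N} \to \mathbb{N}$ such that

$$\psi_{f(e)} = \varphi_e \quad \text{for all } e \in \mathbb{N}.$$

The paper's contribution, as I read it, is to lift this kind of statement from numberings of programs to knowledge representation formalisms.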

Regarding the universality of the definition of "knowledge representation formalism" proposed in the paper, while the definition appears general, I cannot offer a definitive assessment due to my limited knowledge of traditional knowledge representation and reasoning. The authors may need to provide additional justification to demonstrate how established formalisms in the field — such as default logic, autoepistemic logic, and others — are indeed captured by this definition.

In addition, I am curious how the results in this paper relate to the alignment phenomenon in representation learning, studied, e.g., in the following paper:

Loek van Rossem, Andrew M. Saxe: When Representations Align: Universality in Representation Learning Dynamics. CoRR abs/2402.09142 (2024).

Overall, the work seems significant, and the philosophical implications of its main conclusion are worth further exploration.

nova

Indeed, the recursive isomorphism of knowledge representation formalisms (KRFs) may overstate their equivalence in practice, overlooking differences in efficiency, scalability, and applicability. So while the claim is intriguing, it does not adequately address the strengths of the dominant connectionist methods in AI, especially fast pattern recognition in noisy perceptual applications. Each formalism has its own domain of applicability.
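
A toy illustration of why a recursive isomorphism need not preserve efficiency (my own example, not from the paper): unary and binary numerals represent exactly the same objects and are related by a computable bijection, yet translating from binary to unary blows up the representation size exponentially in the bit length. A minimal Python sketch:

    # Two "formalisms" for natural numbers related by a computable bijection,
    # illustrating that recursive isomorphism says nothing about representation cost.

    def to_unary(n: int) -> str:
        """Binary/int formalism -> unary formalism (computable, but exponential blow-up)."""
        return "1" * n

    def to_binary(u: str) -> int:
        """Unary formalism -> binary/int formalism (the inverse translation)."""
        return len(u)

    n = 40
    unary = to_unary(n)
    assert to_binary(unary) == n  # the two translations are mutually inverse
    print(len(bin(n)) - 2, "binary digits vs", len(unary), "unary digits")
    # prints: 6 binary digits vs 40 unary digits

The same point applies to isomorphisms between KRFs: computable does not mean tractable.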

With the rise of deep learning, symbolic AI has come to be viewed as complementary to it, "...with parallels having been drawn many times by AI researchers between Kahneman's research on human reasoning and decision making – reflected in his book Thinking, Fast and Slow – and the so-called "AI systems 1 and 2", which would in principle be modelled by deep learning and symbolic reasoning, respectively." In this view, symbolic reasoning is better suited to deliberative reasoning, planning, and explanation, while deep learning is better suited to fast pattern recognition in perceptual applications with noisy data.

In fact, there has been a significant effort within symbolic AI to build a large knowledge base (KB) about the world, focusing on implicit common-sense knowledge: the Cyc project. It illustrates the limited practical applicability and scalability of a purely symbolic formalism.

Cyc is a long-term artificial intelligence project that aims to assemble a comprehensive ontology and knowledge base that spans the basic concepts and rules about how the world works. Hoping to capture common sense knowledge, Cyc focuses on implicit knowledge... Cyc's ontology grew to about 100,000 terms in 1994, and as of 2017, it contained about 1,500,000 terms. The Cyc knowledge base involving ontological terms was largely created by hand axiom-writing; it was at about 1 million in 1994, and as of 2017, it is at about 24.5 million. In 2008, Cyc resources were mapped to many Wikipedia articles... The Cyc inference engine performs general logical deduction. It also performs inductive reasoning, statistical machine learning and symbolic machine learning, and abductive reasoning... Machine-learning scientist Pedro Domingos refers to the project as a "catastrophic failure" for the unending amount of data required to produce any viable results and the inability for Cyc to evolve on its own.

cinch