
It is well known that human reasoning, after evolving for at least several thousand years, has gradually moved from natural-language reasoning toward formal reasoning. In modern science, a significant indicator of a discipline's maturity is whether its main theories can be formalized (or axiomatized, mathematized). At the inception of AI, research on reasoning was primarily based on formal reasoning, which achieved remarkable success surprisingly early. For instance, programs developed by Newell, Simon and Shaw in 1956, and by Wang in 1960, proved almost all the logic theorems in "Principia Mathematica" by Whitehead and Russell. Nowadays, state-of-the-art symbolic SAT solvers have become so sophisticated that machine-learning approaches to SAT solving still cannot compete with them (Selsam et al., "Learning a SAT Solver from Single-Bit Supervision", ICLR 2019):

As we stressed early on, as an end-to-end SAT solver the trained NeuroSAT system discussed in this paper is still vastly less reliable than the state-of-the-art. We concede that we see no obvious path to beating existing SAT solvers.

Given this, I'm wondering why we overlook the achievements of traditional symbolic reasoning and instead invest so much effort in implementing natural-language reasoning within large language models. What are the advantages of natural language reasoning over formal reasoning?

jario

1 Answer


The traditional symbolic AI field focuses mainly on deductive formal reasoning, forward/backward chaining, and production-rule systems, built on various mathematical logic theories, type theories, knowledge bases, semantic ontologies/networks, and computability/complexity theory. Your SAT solver for the Boolean satisfiability problem, which is NP-complete in general by the Cook–Levin theorem, is an example. Obviously these specialized and powerful symbolic solvers and theorem provers remain essential and important in many specialized fields such as pure mathematics, proof assistants, and formal program/software verification.
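To make concrete what such a symbolic solver actually does, here is a minimal, illustrative sketch of DPLL, the backtracking procedure that modern CDCL SAT solvers build on. The names and structure are mine for illustration, not taken from any particular solver, and this toy version omits the learned clauses and heuristics that make real solvers fast:

```python
from typing import Optional

# A CNF formula is a list of clauses; each clause is a list of
# nonzero ints (DIMACS-style literals: 3 means x3, -3 means NOT x3).
Clause = list[int]
CNF = list[Clause]

def dpll(clauses: CNF, assignment: dict[int, bool]) -> Optional[dict[int, bool]]:
    """DPLL backtracking search: unit propagation plus branching.

    Returns a satisfying assignment, or None if the formula is UNSAT.
    """
    # Unit propagation: repeatedly simplify, forcing single-literal clauses.
    changed = True
    while changed:
        changed = False
        simplified = []
        for clause in clauses:
            remaining = []
            satisfied = False
            for lit in clause:
                var, want = abs(lit), lit > 0
                if var in assignment:
                    if assignment[var] == want:
                        satisfied = True  # clause already true, drop it
                        break
                else:
                    remaining.append(lit)
            if satisfied:
                continue
            if not remaining:
                return None  # empty clause: conflict, backtrack
            if len(remaining) == 1:
                # Unit clause: its literal is forced.
                assignment[abs(remaining[0])] = remaining[0] > 0
                changed = True
            else:
                simplified.append(remaining)
        clauses = simplified
    if not clauses:
        return assignment  # every clause satisfied
    # Branch on the first unassigned variable, trying both values.
    var = abs(clauses[0][0])
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

# (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
model = dpll([[1, 2], [-1, 3], [-2, -3]], {})
```

The exponential worst case predicted by NP-completeness lives in the branching step; industrial solvers tame it in practice with conflict-driven clause learning and branching heuristics, which is exactly the engineering depth NeuroSAT struggles to match.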

However, many real-world consumer-facing AI applications don't require the sophisticated formal inference engines above; they instead require layman-understandable, flexible semantic composition and continuation when prompted, and much "reasoning" in natural language is implicit, informal, inductive, heuristically explainable (with no need to be optimal), generalizable, context-driven, and so on. These requirements turn out to be better served by bottom-up, learning-based, contextually embedded LLMs, which are inherently incompatible with top-down formal reasoning models. So it's not true that the AI industry overlooks the achievements of traditional symbolic reasoning; foundational LLMs simply have a much wider applicable horizon than the few highly specialized fields where symbolic reasoning is a must.

Of course, people constantly try to combine the advantages of both formalisms in hybrid models like NeuroSAT, and certainly every kind of AI model, whether learning-based or reasoning-based, has its own use cases.

cinch