The universal approximation theorem says that an MLP with a single hidden layer and a sufficient number of neurons can approximate any bounded continuous function. You can verify this from the following statement:

A Multilayer Perceptron (MLP) can theoretically approximate any bounded, continuous function. There is no such guarantee for a discontinuous function.
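As a quick sanity check of that statement, here is a minimal sketch that fits a single-hidden-layer MLP to the bounded continuous function $\sin(x)$. It assumes scikit-learn is available; the layer width and iteration count are arbitrary choices for illustration.

```python
# Single-hidden-layer MLP approximating sin(x) on [-pi, pi].
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()

# One hidden layer; widening it (adding neurons) tightens the fit,
# which is the practical face of the universal approximation theorem.
mlp = MLPRegressor(hidden_layer_sizes=(50,), activation="tanh",
                   max_iter=5000, random_state=0)
mlp.fit(X, y)

print("max |error| on the training grid:",
      np.abs(mlp.predict(X) - y).max())
```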

We can express any MLP in terms of algebraic expressions, and those expressions can be considered symbolic AI.

So, can I infer that symbolic AI algorithms can theoretically approximate any bounded continuous function?

If not, then why can't there be a one-to-one mapping between an MLP and a symbolic AI algorithm?

nbro
hanugm

2 Answers

It depends on how one defines connectionist AI. If we define connectionist AI as neural networks that allow arbitrary real-valued parameters (as required by the universal approximation theorem), then it theoretically possesses greater computational capabilities than symbolic AI. Unfortunately, real-valued neural networks are only an idealized computational model. To my knowledge, there is no evidence that real-valued neural networks can be realized in the physical world, because:

  1. Most real numbers do not have finite representations (in this sense, such neural networks cannot be regarded as algebraic expressions); according to modern physics (quantum mechanics), such real numbers cannot be stored (a quick illustration follows this list);

  2. Operations on real numbers (even very simple cases, like summing two real numbers) are themselves uncomputable. Therefore, if you accept the Church-Turing thesis, even the inference of real-valued neural networks cannot be physically realized.
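As a quick illustration of point 1: the numbers a physical machine actually stores are finitely representable. A Python float, for example, is a 64-bit binary rational, so even $0.1$ is only approximated:

```python
# Floats are dyadic rationals with finite representations; most reals
# (including 0.1) cannot be stored exactly.
from decimal import Decimal

print(Decimal(0.1))      # the exact stored value: 0.1000000000000000055511...
print(0.1 + 0.2 == 0.3)  # False: both sides are nearby dyadic rationals
```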

If the parameters of neural networks are restricted to rational numbers, then the networks always have finite representations, and inference with such networks is normally computable in deterministic polynomial time (a sketch of such exact rational inference is given below). In this case, based on a recent work:

Zhang et al. 2024: A Theory of Formalisms for Representing Knowledge (available at https://arxiv.org/abs/2412.11855)

there exists a lightweight symbolic formalism for knowledge representation (e.g., a fragment of first-order logic) that is recursively isomorphic (and hence admits a one-to-one mapping) to the class of rational neural networks, and thus the capability of connectionist AI in this case would be strictly weaker than that of symbolic AI.
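To make the rational-parameter case concrete, here is a minimal sketch of exact inference in such a network, using Python's built-in fractions. The tiny weights are made up for illustration; the point is that every intermediate value keeps a finite representation and the forward pass is ordinary polynomial-time arithmetic.

```python
# Forward pass of a 2-2-1 ReLU network with exact rational parameters.
from fractions import Fraction as F

def relu(v):
    return [max(x, F(0)) for x in v]

def linear(W, b, x):
    return [sum(wi * xi for wi, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

W1 = [[F(1, 2), F(-1, 3)], [F(2, 5), F(1, 4)]]
b1 = [F(1, 10), F(0)]
W2 = [[F(3, 7), F(-2, 9)]]
b2 = [F(1, 3)]

x = [F(1), F(2)]
h = relu(linear(W1, b1, x))
print(linear(W2, b2, h))  # an exact rational output, no rounding anywhere
```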

Jorge

Are the capabilities of connectionist AI and symbolic AI the same?

No, not usually. Why not usually? Neural networks (connectionist AI) are usually used for inductive reasoning (i.e. the process of generalizing from a finite set of observations), while symbolic AI is usually used for deduction (i.e. logically deriving conclusions from premises).

What is inductive reasoning? Let's say that all the birds you have observed so far in your life fly, so your inductive conclusion is that all birds must fly. However, you haven't seen all birds, so there could be exceptions (like penguins).

What is deductive reasoning? Let's say that you know that all humans are mortal (there are no exceptions), and you know that Socrates is a human. So, you logically deduce that Socrates is mortal. If that were not the case, then either a premise was wrong or Socrates is not a human.
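To make the contrast concrete, here is a toy sketch of deduction as forward chaining over explicit symbolic facts (the fact and rule encodings are made up for illustration):

```python
# Deduction: the conclusion follows necessarily from facts and a rule;
# nothing is generalized from data.
facts = {("human", "socrates")}
rules = [(("human",), "mortal")]  # for every X: human(X) -> mortal(X)

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        for pred, arg in list(facts):
            if pred in premises and (conclusion, arg) not in facts:
                facts.add((conclusion, arg))
                changed = True

print(("mortal", "socrates") in facts)  # True: deduced, not induced
```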

We can express any MLP in terms of algebraic expressions, and those expressions can be considered symbolic AI.

So, can I infer that symbolic AI algorithms can theoretically approximate any bounded continuous function?

Now, if MLPs were a subset of symbolic AI, then we could conclude that symbolic AI can approximate bounded continuous functions. However, the definition of symbolic AI is usually restricted to knowledge-based and logic-based systems, i.e. systems that state premises or facts (e.g. in propositional logic) in knowledge bases and deduce conclusions (other facts) from them. So, although $f(x) = \sigma(ax + b)$ (which can represent e.g. a perceptron) is an expression with the symbols $a$, $x$, $b$ and $\sigma(\cdot)$, these symbols are not used in the same way as the symbols in symbolic AI: they are numeric variables that do not represent premises or conclusions.
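For instance, here is a sketch (assuming SymPy) of writing a single unit $\sigma(ax + b)$ as an algebraic expression. We can manipulate it symbolically, e.g. differentiate it, but the symbols remain numeric variables rather than logical premises:

```python
# A perceptron-like unit as an algebraic expression.
import sympy as sp

a, x, b = sp.symbols("a x b")
f = 1 / (1 + sp.exp(-(a * x + b)))  # sigma(a*x + b), with sigma the logistic function

# Symbolic manipulation is possible, but there is no notion of
# "premise" or "entailment" here, unlike in logic-based AI.
print(sp.simplify(sp.diff(f, x)))
```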

If not, then why can't there be a one-to-one mapping between an MLP and a symbolic AI algorithm?

I don't know if there can be a one-to-one mapping between some symbolic AI and neural networks.

However, it is possible to combine the two approaches. For example, in the context of knowledge graphs, which can be viewed as a way to represent facts and the relations between them as a graph, we can learn embeddings (using machine learning) that can later be used to perform inductive reasoning on the knowledge graph.
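As a rough sketch of that idea, the snippet below scores a candidate triple with a TransE-style distance (one common choice, not necessarily what any particular system uses); the vectors are untrained and made up, purely for illustration:

```python
# Knowledge-graph facts as triples, scored via (untrained) embeddings.
import numpy as np

rng = np.random.default_rng(0)
entities = {e: rng.normal(size=8) for e in ["socrates", "plato", "human"]}
relations = {"is_a": rng.normal(size=8)}

def score(head, rel, tail):
    # TransE intuition: head + relation should land near tail;
    # higher (less negative) scores suggest more plausible facts.
    return -np.linalg.norm(entities[head] + relations[rel] - entities[tail])

print(score("socrates", "is_a", "human"))
```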

There are other examples of attempts to combine symbolic AI with machine learning and neural networks. A famous example is the Markov Logic Network (MLN) (by Richardson and Domingos), which combines first-order logic with probabilistic graphical models. A related approach is used, for example, in OpenCog, a software platform for AI and AGI. In fact, I think it is widely believed that both inductive and deductive reasoning are necessary for an AGI: it needs inductive reasoning for situations that involve uncertainty (most cases) and deductive reasoning for the remaining cases (e.g. doing math; we expect an AGI to be able to prove theorems, as we do). Another example is the combination of knowledge graphs with MLNs.

nbro