3

I am familiar with the currently popular neural networks in deep learning, which have weights and are trained by gradient descent.

However, I found many papers that were popular in the 1980s and 1990s, with titles like "Neural networks to solve optimization problems". For example, Hopfield first used this name, applying a "neural network" to solve linear programming problems [1]. Later, Kennedy et al. used a "neural network" to solve nonlinear programming problems [2].

I summarize the differences between the currently popular neural networks and these "neural networks":

  1. They do not have weight and bias parameters to train or to learn from data.
  2. They use a circuit diagram to present the model.
  3. The model can be written as an ODE system that has a Lyapunov function as its objective.
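To make point 3 concrete, here is a minimal sketch of my own (not code from either paper): the network state follows an ODE $\dot{x} = -\nabla E(x)$, and the energy $E$ acts as a Lyapunov function, decreasing along every trajectory. I assume a simple quadratic energy for illustration.

```python
import numpy as np

def energy(x, W, b):
    # Quadratic energy E(x) = -0.5 x^T W x - b^T x, with W symmetric.
    return -0.5 * x @ W @ x - b @ x

def step(x, W, b, dt=0.01):
    # One Euler step of dx/dt = -grad E(x) = W x + b.
    return x + dt * (W @ x + b)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
W = -(A @ A.T)                 # negative definite, so E is bounded below
b = rng.standard_normal(3)
x = rng.standard_normal(3)

energies = [energy(x, W, b)]
for _ in range(200):
    x = step(x, W, b)
    energies.append(energy(x, W, b))

# The Lyapunov property: energy is (numerically) non-increasing.
assert all(e2 <= e1 + 1e-9 for e1, e2 in zip(energies, energies[1:]))
```

There is nothing to "train" here: $W$ and $b$ encode the optimization problem itself, and running the dynamics *is* solving it.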

Please take a look at these two papers from the 1980s:

  1. Neurons with graded response have computational properties like those of two-state neurons (J. J. Hopfield)

  2. Neural networks for nonlinear programming (M. P. Kennedy & L. O. Chua)

References:

[1]: J. J. Hopfield, D. W. Tank, "Neural" computation of decisions in optimization problems, Biological Cybernetics 52 (3) (1985) 141–152.

[2]: M. P. Kennedy, L. O. Chua, Neural networks for nonlinear programming, IEEE Transactions on Circuits and Systems 35 (5) (1988) 554–562.

Mithical
dawen

2 Answers

3

In the early days of neural networks, theorists and practitioners were educated in mathematics, psychology, neurophysiology, electrical engineering, and neurobiology. Computer science was still in its infancy. The first neural networks were modeled as electrical circuits.

There is evidence of this in the 1943 paper by Warren McCulloch and Walter Pitts [1], and a 1956 paper by Rochester et al. [2].

The latter paper uses terms such as 'circuits' and 'switching'. One idea in the paper is explained in terms of an "Eccles–Jordan flip-flop circuit", although there are no drawings. Nathaniel Rochester had designed the IBM 701 [3] and "led the first effort to simulate a neural network" [4].

Brain structure was discussed in terms of 'neural circuits' as early as 1937 [5].

I am not sure when the first electrical circuit diagram appeared in publication, but it makes sense that early neural network designers would have thought of their implementations in those terms.

References:

Brian O'Donnell
2

Well, the goal of any paper is to allow the reader to understand what the author is trying to describe.

A lot of people have a lot of experience looking at circuit diagrams and figuring out what those circuits will do. For these people, a circuit diagram may be the clearest and easiest way to understand how a particular thing works. So it makes sense that an author would include a circuit diagram in order to make the concepts easy for those readers to grasp.

There are two particular reasons why a circuit diagram is especially likely to show up in a paper about neural networks:

The first reason is that analog circuits are closely related to ordinary differential equations, and digital circuits are closely related to sequential logic. So if your neural network is defined by ordinary differential equations or sequential logic, a circuit diagram may be a natural way to express how it works.
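As a toy illustration of that correspondence (my own sketch, not from any of the cited papers): a single RC "neuron" obeys the circuit equation $C\,\dot{v} = -v/R + I$, which is a first-order ODE, so simulating the circuit is just integrating that equation.

```python
def simulate_rc(v0, R, C, I, dt, steps):
    # Euler integration of the RC circuit equation C dv/dt = -v/R + I.
    # Reading the circuit diagram and reading the ODE give the same model.
    v = v0
    for _ in range(steps):
        dv_dt = (-v / R + I) / C
        v += dt * dv_dt
    return v

# With constant input current, the voltage settles toward v = I * R.
v_final = simulate_rc(v0=0.0, R=1.0, C=1.0, I=2.0, dt=0.01, steps=2000)
assert abs(v_final - 2.0) < 1e-3
```

The circuit diagram and the ODE carry the same information; which one is clearer depends on the reader's background, which is exactly the point above.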

The second reason is that a lot of researchers who are familiar with computers are also familiar with electronic circuits. This was especially true in the early days of computers, when people had to be familiar with electronics, math, or both in order to understand how to program a computer.

Sophie Swett