
In recent years, we have seen many impressive demonstrations of deep neural networks (DNNs), most famously AlphaGo and its related programs.

But if I understand correctly, a deep neural network is just an ordinary neural network with many layers. The principles of neural networks have been known since the 1970s (?), and a deep neural network is just the generalization of a one-layer neural network to many layers.
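
For concreteness, here is a toy sketch of what I mean (my own illustration in Python/NumPy, not any particular framework's API): the forward pass is the same whether there is one layer or ten; only the number of stacked weight matrices changes.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights):
    # One forward pass; "deep" just means more weight matrices in this list.
    a = x
    for W in weights:
        a = sigmoid(W @ a)
    return a

rng = np.random.default_rng(0)
x = rng.standard_normal(8)

# A "shallow" network: a single 8 -> 4 layer.
shallow = [rng.standard_normal((4, 8))]

# A "deep" network: nine 8 -> 8 layers followed by an 8 -> 4 layer.
deep = [rng.standard_normal((8, 8)) for _ in range(9)] + [rng.standard_normal((4, 8))]

print(forward(x, shallow).shape)  # (4,)
print(forward(x, deep).shape)     # (4,)
```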

From this, it doesn't seem like the recent explosion of DNNs has anything to do with a theoretical breakthrough, such as some revolutionary new learning algorithm or a particular topology that has been theoretically proven effective. It seems like DNN successes can be entirely (or mostly) attributed to better hardware and more data, not to any new theoretical insights or better algorithms.

I would even go as far as to say that there are no new theoretical insights/algorithms that contribute significantly to DNNs' recent successes; that the most important (if not all) theoretical underpinnings of DNNs were developed in the 1970s or earlier.

Am I right on this? How much weight (if any) do theoretical advancements have in contributing to the recent successes of DNNs?

1 Answer

The first neural network machine was the Stochastic Neural Analog Reinforcement Calculator (SNARC), built in the 1950s. As you can see, the idea is quite old. After that, there were several advances regarding backpropagation and the vanishing gradient problem. However, the ideas themselves are not novel. Simply put, we have the data and processing power today that we did not have back then.
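
To illustrate the vanishing gradient problem mentioned above, here is a toy sketch (my own illustration; the depth and random values are arbitrary): backpropagating through a chain of sigmoid layers multiplies the gradient by the sigmoid's derivative (at most 0.25) at each layer, so it shrinks roughly exponentially with depth.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
grad = 1.0
for _ in range(30):
    z = rng.standard_normal()      # a pre-activation value at some layer
    s = sigmoid(z)
    grad *= s * (1.0 - s)          # sigmoid'(z), always <= 0.25
print(f"gradient factor after 30 sigmoid layers: {grad:.2e}")
```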

You could look at the Wikipedia timeline.
