
Are there neural networks that can decide to add/delete neurons (or change the neuron models/activation functions, or change the meaning assigned to neurons), links, or even complete layers at execution time?

I guess that such neural networks would overcome the usual separation between the learning and inference phases: they would continuously live a life in which learning and self-improvement happen alongside the inference and actual decision making for which they were built. Effectively, such a network could act as a Gödel machine.

I have found the term dynamic neural network, but it seems to refer only to adding some delay functions and nothing more.

Of course, such self-improving networks would completely redefine the learning strategy; single-shot gradient methods may no longer be applicable to them.

My question is connected to neural-symbolic integration, e.g. Neural-Symbolic Cognitive Reasoning by Artur S. D'Avila Garcez (2009). Usually, this approach assigns individual neurons to variables (or groups of neurons to formulas/rules) in the set of formulas in some knowledge base. Of course, if the knowledge base expands (e.g. from sensor readings or from internal non-monotonic inference), then new variables must be added, and hence the neural network should be expanded (or contracted) as well.
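To make the idea concrete, here is a minimal, hypothetical PyTorch sketch of the kind of mechanism described above: a map from knowledge-base variables to input neurons, and an operation that grows the input layer when a new variable appears. The class name, layer layout, and zero-initialisation of the new column are illustrative assumptions, not an established method.

```python
import torch
import torch.nn as nn

class GrowableNet(nn.Module):
    """Toy network that assigns one input neuron per knowledge-base variable."""
    def __init__(self, variables, hidden=16):
        super().__init__()
        self.var_index = {v: i for i, v in enumerate(variables)}  # variable -> input neuron
        self.inp = nn.Linear(len(variables), hidden)
        self.out = nn.Linear(hidden, 1)

    def add_variable(self, name):
        """Assign a fresh input neuron to a newly introduced variable."""
        self.var_index[name] = self.inp.in_features
        old = self.inp
        new = nn.Linear(old.in_features + 1, old.out_features)
        with torch.no_grad():
            new.weight[:, :old.in_features] = old.weight   # keep what was already learned
            new.weight[:, old.in_features:] = 0.0          # new neuron starts inert
            new.bias.copy_(old.bias)
        self.inp = new                                     # the network has grown at runtime

    def forward(self, x):
        return torch.sigmoid(self.out(torch.relu(self.inp(x))))

net = GrowableNet(["a", "b", "c"])
net.add_variable("d")          # knowledge base grew, so the network grows too
print(net.inp.in_features)     # 4
```

After `add_variable`, inputs must carry one extra feature, ordered according to `var_index`; training can then continue on the enlarged network rather than restarting from scratch.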

TomR

2 Answers


This article on Dynamically Expandable Neural Networks (DEN) (by Harshvardhan Gupta) is based on the paper Lifelong Learning with Dynamically Expandable Networks (by Jeongtae Lee, Jaehong Yoon, Eunho Yang, and Sung Ju Hwang).

The paper presents three solutions for increasing the capacity of the network when needed, while retaining whatever useful information is in the old model and training the new one (a minimal sketch of the expansion step follows the list):

  • Selective retraining
  • Dynamic Network Expansion 
  • Network Split/Duplication
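As a rough illustration of the second mechanism only (this is not the authors' code; the actual paper combines expansion with sparsity regularisation, selective retraining, and the split/duplication step), here is a hypothetical PyTorch sketch of widening a hidden layer while preserving the already-learned weights:

```python
import torch
import torch.nn as nn

def widen_hidden(inp: nn.Linear, out: nn.Linear, extra: int):
    """Add `extra` hidden units between two linear layers, keeping old weights."""
    new_inp = nn.Linear(inp.in_features, inp.out_features + extra)
    new_out = nn.Linear(out.in_features + extra, out.out_features)
    with torch.no_grad():
        new_inp.weight[:inp.out_features] = inp.weight      # old hidden units unchanged
        new_inp.bias[:inp.out_features] = inp.bias
        new_out.weight[:, :out.in_features] = out.weight
        new_out.bias.copy_(out.bias)
        new_out.weight[:, out.in_features:] = 0.0           # new units start with zero effect
    return new_inp, new_out

# If the loss on a new task stays above some threshold, widen and keep training:
inp, out = nn.Linear(8, 16), nn.Linear(16, 1)
inp, out = widen_hidden(inp, out, extra=4)                  # hidden layer: 16 -> 20 units
```

Because the new rows and columns start at zero, the widened network initially computes exactly the same function as before; only further training gives the new units a role.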

To me, such a neural network is dynamic and self-improving. As such, it at least partially answers your question. If it doesn't, sorry about that.

luvzfootball

I have mostly studied HMMs, and such models are called infinite HMMs in that specific domain.

I believe that what you are looking for is called infinite neural networks. Not having access to scientific publications, I cannot really refer to any particular work here. However, I found this GitHub repository, https://github.com/kutoga/going_deeper, which provides an implementation and a document with multiple references.

Eskapp