7

Evolutionary algorithms are mentioned in some sources as a method for training a neural network (finding weights, not hyperparameters). However, I have not yet come across a practical application of this idea.

My question is: why is that? What issues or limitations with such a solution have prevented it from practical use?

I am asking because I am planning to develop such an algorithm and want to know what to expect and where to focus most of my attention.

nbro
GKozinski

1 Answer

6

The main evolutionary algorithm used to train neural networks is Neuro-Evolution of Augmenting Topologies, or NEAT. NEAT has seen fairly widespread use: there are thousands of academic papers building on or using the algorithm.

NEAT is not widely used in commercial applications because, if you have a clean objective function, a topology optimized for gradient descent via backpropagation, and an implementation highly optimized for a GPU, you are almost certainly going to see better, faster results from a conventional training process. Where NEAT is really useful is when you want to do something unusual, like training to maximize novelty, or when you want to train networks whose gradients don't decompose cleanly. Basically, you need one of the usual reasons to prefer an evolutionary algorithm over hill-climbing approaches:

  1. You don't have a clean mapping from loss function to individual model components.
  2. Your loss function has many local optima.
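To make the "finding weights, not hyperparameters" idea concrete, here is a minimal sketch of a fixed-topology (mu + lambda) evolution strategy learning the weights of a tiny network for XOR, with no gradients at all. This is not NEAT (it does not evolve the topology); the network shape, population sizes, and mutation scale are arbitrary illustrative choices, not values from any source.

```python
import math
import random

random.seed(0)

# Tiny fixed-topology network: 2 inputs -> 2 hidden -> 1 output
# (9 parameters including biases), evaluated with plain Python.
def forward(w, x1, x2):
    h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
    h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

XOR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def loss(w):
    return sum((forward(w, x1, x2) - y) ** 2 for x1, x2, y in XOR)

# Random initial population of weight vectors.
pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(30)]
init_best = min(loss(w) for w in pop)  # baseline before any evolution

# (mu + lambda) evolution strategy: keep the fittest, mutate to refill.
# Elitist, so the best loss can never get worse between generations.
for generation in range(300):
    pop.sort(key=loss)
    parents = pop[:10]
    children = [[g + random.gauss(0, 0.2) for g in random.choice(parents)]
                for _ in range(20)]
    pop = parents + children

best = min(pop, key=loss)
print(round(loss(best), 3))
```

Note that each generation costs a full fitness evaluation of every individual, which is exactly why backpropagation wins whenever a clean gradient is available: one gradient step extracts far more information per forward pass than one mutation-and-select step.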
John Doucette