NEAT, essentially a specialized form of genetic algorithm (GA), excels in scenarios where smooth gradient-based learning struggles, such as the non-differentiable objective of optimizing a network's topology. Like traditional GAs, NEAT relies on population-based training, which remains inherently computationally expensive compared to gradient-based optimizers like Adam, barring a major breakthrough in non-differentiable optimization theory. As a result, it is a less competitive area than mainstream deep learning tasks such as image classification or text generation.
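To make the cost argument concrete, here is a minimal, hypothetical sketch (not NEAT itself, which also evolves topology and uses speciation) of a population-based evolutionary loop. The point it illustrates is that every genome in every generation must be evaluated on the task, whereas a gradient-based optimizer updates a single model. All names and parameters below are illustrative assumptions.

```python
import random

# Toy evolutionary loop: one fitness evaluation per genome, per generation.
POP_SIZE = 50
GENERATIONS = 100

def random_genome():
    # Stand-in for a NEAT genome: here, just a small weight vector.
    return [random.uniform(-1.0, 1.0) for _ in range(8)]

def mutate(genome, rate=0.1):
    return [w + random.gauss(0.0, rate) for w in genome]

def fitness(genome):
    # Placeholder objective; in NEAT this would run the evolved network on a task.
    return -sum(w * w for w in genome)

population = [random_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    # The expensive part: evaluate the whole population each generation.
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[: POP_SIZE // 4]            # truncation selection
    population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]

print("best fitness:", fitness(max(population, key=fitness)))
```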
Having said that, there have been some significant developments for NEAT over its roughly 20-year history. NEAT struggled to scale to high-dimensional problems such as image processing or multi-agent RL systems. Stanley et al. (2009) proposed HyperNEAT, which maps spatial patterns across a substrate of neurons instead of evolving each weight directly, enabling it to handle much larger networks efficiently (see the sketch below). Traditional NEAT also struggled to evolve deep architectures effectively; Miikkulainen et al. (2017) proposed CoDeepNEAT to bridge that gap and enable the design of more modular and scalable deep architectures built from layer types such as convolutional and LSTM layers.
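The sketch below illustrates the core HyperNEAT idea under simplifying assumptions: a small pattern-producing function (standing in for the CPPN, which in HyperNEAT is itself a NEAT-evolved network) is queried with the coordinates of each pair of substrate neurons to produce their connection weight, so the number of evolved parameters stays fixed regardless of substrate size. The toy `cppn` function and its three parameters are assumptions for illustration only.

```python
import math
import itertools

def cppn(x1, y1, x2, y2, params):
    # Toy stand-in for an evolved CPPN: weight as a function of coordinates.
    a, b, c = params  # the only "evolved" parameters in this toy version
    return math.sin(a * x1 * x2) + b * math.cos(c * (y1 - y2))

def build_substrate_weights(grid_size, params):
    # Lay neurons on a 2-D grid; each connection's weight is a function
    # of its two endpoints' coordinates, queried from the CPPN.
    coords = [(i / (grid_size - 1), j / (grid_size - 1))
              for i in range(grid_size) for j in range(grid_size)]
    weights = {}
    for (x1, y1), (x2, y2) in itertools.product(coords, repeat=2):
        w = cppn(x1, y1, x2, y2, params)
        if abs(w) > 0.5:               # sparsify: keep only strong connections
            weights[((x1, y1), (x2, y2))] = w
    return weights

# The same handful of evolved parameters generates weights for any substrate size.
small = build_substrate_weights(4, params=(1.0, 0.5, 2.0))
large = build_substrate_weights(16, params=(1.0, 0.5, 2.0))
print(len(small), "vs", len(large), "connections from the same genome")
```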
As deep learning has scaled up to increasingly challenging tasks, architectures have become difficult to design by hand. This paper proposes an automated method, CoDeepNEAT, for optimizing deep learning architectures through evolution. By extending existing neuroevolution methods to topology, components, and hyperparameters, the method achieves results comparable to the best human designs on standard benchmarks in object recognition and language modeling. It also supports a real-world application: automated image captioning on a magazine website. Given the anticipated increases in available computing power, evolution of deep networks is a promising approach to constructing deep learning applications in the future.
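As a rough illustration of the CoDeepNEAT assembly step described above (not the paper's actual encoding), the sketch below assumes two evolved populations, blueprints and modules: a blueprint is a sequence of references to module species, and a candidate deep network is assembled by substituting a concrete module for each reference. The layer specifications, species names, and helper functions here are hypothetical.

```python
import random

# Toy module populations, grouped by species; each module is a short list of layers.
modules = {
    "conv_species": [
        [{"layer": "conv", "filters": 32, "kernel": 3},
         {"layer": "conv", "filters": 64, "kernel": 3}],
        [{"layer": "conv", "filters": 16, "kernel": 5}],
    ],
    "recurrent_species": [
        [{"layer": "lstm", "units": 128}],
        [{"layer": "lstm", "units": 64}, {"layer": "lstm", "units": 64}],
    ],
}

# A blueprint is a sequence of references to module species.
blueprint = ["conv_species", "conv_species", "recurrent_species"]

def assemble(blueprint, modules):
    # Pick one concrete module per blueprint node and concatenate its layers.
    layers = []
    for species in blueprint:
        layers.extend(random.choice(modules[species]))
    return layers

candidate = assemble(blueprint, modules)
for spec in candidate:
    print(spec)
# In CoDeepNEAT the assembled network would be trained briefly and its
# performance fed back as fitness to both the blueprint and the modules it used.
```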