
I am considering pursuing a career in AI (I have an undergraduate background in Philosophy/Computer Science) and have been taking some time to research particular topics. One class of methods that piqued my interest is the genetic algorithm. For those who are unfamiliar, here is a brief rundown. These are local search methods that maintain a population of "organisms" (potential models or solutions to the problem), encoded in some manner (typically as simple binary strings). A "fitness" function is defined over the organisms to determine to what extent each one solves the problem. At each iteration of the algorithm, organisms undergo random mutation and reproduction. The former involves simple bit flips at randomly selected indices, while the latter involves pairing off organisms by fitness and creating offspring in the hope that they will "inherit" their parents' fitness.
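To make that rundown concrete, here is a rough Python sketch of the loop I have in mind. The toy fitness function (fraction of bits matching a hidden target), population size, mutation rate, and the choice of one-point crossover are all my own illustrative assumptions, not part of any particular algorithm from the literature:

```python
import random

# Minimal genetic-algorithm sketch over fixed-length bitstrings.
# Hypothetical toy fitness: fraction of bits matching a hidden target string.

random.seed(0)
LENGTH, POP, GENS, MUT_RATE = 32, 40, 60, 0.02
TARGET = [random.randint(0, 1) for _ in range(LENGTH)]

def fitness(org):
    return sum(a == b for a, b in zip(org, TARGET)) / LENGTH

def mutate(org):
    # Random mutation: flip each bit independently with small probability.
    return [1 - b if random.random() < MUT_RATE else b for b in org]

def crossover(p1, p2):
    # One-point crossover: each offspring inherits a prefix from one
    # parent and the suffix from the other.
    cut = random.randrange(1, LENGTH)
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
init_best = fitness(max(pop, key=fitness))

for _ in range(GENS):
    # Fitness-proportional pairing: fitter organisms reproduce more often.
    weights = [fitness(o) + 1e-9 for o in pop]
    nxt = [max(pop, key=fitness)]  # elitism: carry the current best forward
    while len(nxt) < POP:
        p1, p2 = random.choices(pop, weights=weights, k=2)
        for child in crossover(p1, p2):
            nxt.append(mutate(child))
    pop = nxt[:POP]

best = max(pop, key=fitness)
print(f"best fitness: {fitness(best):.2f}")
```

With elitism, the best fitness in the population can only go up over generations, which makes the sketch easy to sanity-check.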

I am curious, for anyone who works in these fields or has specialized knowledge: to what extent are genetic algorithms still researched and considered relevant? Obviously I can do some research online and browse academic sources, but I'm interested in hearing from people "on the ground," so to speak. My immediate thought for a potential avenue of exploration is to determine how exactly the reproduction process can actually accentuate the fitness of the parents. It seems clear enough that in many cases reproduction may do no better than mutation, insofar as the average fitness of the offspring doesn't diverge significantly from that of the parents. To illustrate this, imagine a case study where the fitness function is simply the proportion of 1s in an organism, and the reproduction operation simply swaps selected indices between two parents. The expected fitness of the offspring will always equal the average fitness of the parents, since no 1s are added or removed in the process. Multi-modal fitness functions may be something else to consider.
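As a quick sanity check of that case study, here is a small Python sketch. I'm interpreting "swaps selected indices" as uniform crossover (exchange bits at each index with probability 0.5); since swapping neither creates nor destroys 1s, the offspring pair's average fitness equals the parents' exactly, not just in expectation:

```python
import random

# Case study: fitness = proportion of 1s ("one-max"); crossover = swap bits
# at a random subset of indices between two parents. The total number of 1s
# across the pair is conserved, so the pair's average fitness is invariant.

random.seed(1)
N = 64

def fitness(org):
    return sum(org) / len(org)

def swap_crossover(p1, p2):
    # Uniform crossover: exchange bits at each index with probability 0.5.
    c1, c2 = p1[:], p2[:]
    for i in range(len(p1)):
        if random.random() < 0.5:
            c1[i], c2[i] = c2[i], c1[i]
    return c1, c2

p1 = [random.randint(0, 1) for _ in range(N)]
p2 = [random.randint(0, 1) for _ in range(N)]
c1, c2 = swap_crossover(p1, p2)

parent_avg = (fitness(p1) + fitness(p2)) / 2
child_avg = (fitness(c1) + fitness(c2)) / 2
assert child_avg == parent_avg  # conserved exactly, every single time
```

Any variation in individual offspring fitness here is purely a redistribution between the two children, which is what makes me suspect crossover alone can't accentuate fitness for this kind of separable function.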

Thanks!

bishop-fish

4 Answers


It’s a bit of a niche area, but it’s certainly still active. And it can be hard to predict importance anyway; reinforcement learning was, I think, similarly niche until 10 or so years ago, when it became a huge part of a lot of deep learning methods.

I’ve been out of academia for a while now, but the sort of thing you’re referring to is really around theoretical analysis of evolutionary algorithms. What conditions make a particular choice of operators work better or worse? Can you define a taxonomy of problem types such that you can draw conclusions about when crossover will be effective? That sort of thing. It’s still, to my knowledge, mostly unknown. There was some early work that you could google up under the term "fitness landscape analysis" or "exploratory landscape analysis". I wrote a few small papers in grad school looking at extending some of these concepts to the case of multiobjective problems. There has been a lot of cool work using machine learning to try to learn relations between problem structure and algorithm behavior.

I just recently saw this paper cross my radar (https://www.sciencedirect.com/science/article/pii/S2210650225000525?dgcid=rss_sd_all), which I haven’t read yet, but from a skim it seems to tell the story that none of these approaches have really made a ton of progress in answering the burning question: I have a problem to solve; what algorithm will be good at it?

deong

A few years ago, I was part of a research group that used evolutionary algorithms (not just genetic algorithms) to test software and machine learning models. I myself tried to use genetic programming in the context of reinforcement learning, but not very successfully, though I probably did not dive deeply enough into the problem.

I think that conceptually these algorithms are interesting, but they can also be computationally intensive (though the same is true for ML). I also have the impression that, like other approaches, they might be more effective when the solution space is (loosely speaking) smooth, i.e. when you can refine/improve the current solution with small steps, rather than when almost all solutions are bad, only one is good, and you essentially have to get lucky to find it. But I never really got into the theoretical analysis of evolutionary algorithms.

nbro

Yes, genetic algorithms are still studied. However, I don't think they are generally considered to be at the forefront of research or among the most exciting, prestigious, highest-impact research directions right now. Of course, opinions are going to vary; this is a broad community with many perspectives, so not everyone will agree.

I have the impression that a common attitude is that genetic algorithms are a reasonable baseline heuristic if you have no idea how else to solve a problem, but that there are often/usually better methods for any particular problem, and as such the practical impact of new research on genetic algorithms is somewhat modest.

One area where genetic algorithms are used in practice is in grey-box fuzzing (e.g., the AFL fuzzer). There are probably many others.

D.W.

Deep Learning pioneer Francois Chollet has stated that they are an under-explored area for addressing problems (such as his ARC benchmark) that are hard for DL/LLMs (https://x.com/fchollet/status/1854246566753890733) and that it's possible that they will rise to greater prominence.

They are still an active research area, with several high-profile conferences attended by thousands (e.g. https://gecco-2025.sigevo.org/HomePage).

NietzscheanAI