
I'm working on my own implementation of the NEAT algorithm, based on the original 2002 paper "Efficient Reinforcement Learning through Evolving Neural Network Topologies" by Kenneth O. Stanley and Risto Miikkulainen. The way the algorithm is designed, it may generate loops among the connections of the hidden nodes, which obviously makes calculating the output difficult.

I have searched and come across two types of approaches. One group, like this example, claims that the value should be calculated like a time series, as is usually done in RNNs, and that nodes on a cycle should use their "old" values as their "current" output. But this seems wrong to me, since the training data is not always ordered and the previous value has nothing to do with the current one.

A second group, like this example, claims that the structure should be pruned in some way to avoid loops and cycles. Apart from being really expensive, this approach also goes against the core idea of the algorithm, and deleting connections like this may interfere with later structural changes.

I myself have so far tried setting the unknown forward values to 0, which hides the connection (whatever weight it has will have no effect on the result), but this has failed too, for two reasons: my networks get big quickly, destroying the "smallest network required" idea, and the results are not good either.

What is the correct approach?

Emad

3 Answers


NEAT does not assume a feed-forward network, and it does not take any special action to avoid loops.

The network is evaluated non-recursively. The only loop of non-deterministic length in the evaluation is the outer one that repeats until all the outputs are active. The pseudocode looks something like this:

until all the outputs are active
    for each non-sensor node
        sum the incoming values (only from sources that have been active)
        mark the node active if any of its inputs is active
    for each non-sensor, active node
        compute the output from the summed input

NOTE 1: You can use a defensive mechanism (such as a pass counter) to avoid an infinite loop.

NOTE 2: When summing the inputs, only the outputs of nodes that have been evaluated/calculated at least once are considered; the outputs of all other nodes are assumed to be zero.
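
Here is a minimal Python sketch of this evaluation scheme. The Node class, its field names, and the activate function are my own illustration of the pseudocode above, not code from the NEAT source:

    import math

    class Node:
        def __init__(self, kind):
            self.kind = kind          # "sensor", "hidden" or "output"
            self.incoming = []        # list of (source_node, weight) pairs
            self.active = False       # has this node produced an output yet?
            self.ready = False        # does it have at least one live input?
            self.input_sum = 0.0
            self.output = 0.0

    def activate(nodes, max_passes=20):
        """Sweep the network until every output node has been activated."""
        outputs = [n for n in nodes if n.kind == "output"]
        non_sensors = [n for n in nodes if n.kind != "sensor"]
        for _ in range(max_passes):   # NOTE 1: counter guards against an infinite loop
            if all(n.active for n in outputs):
                break
            for node in non_sensors:
                # NOTE 2: only sources evaluated at least once contribute;
                # a never-activated source is effectively zero
                node.input_sum = sum(src.output * w
                                     for src, w in node.incoming if src.active)
                node.ready = node.ready or any(src.active for src, _ in node.incoming)
            for node in non_sensors:
                if node.ready:
                    node.output = 1.0 / (1.0 + math.exp(-node.input_sum))
                    node.active = True

Sensor nodes are loaded externally before the call: set their output to the input value and their active flag to True. On a network with a loop, a node on the cycle simply sees the value its predecessor produced in the previous pass.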

This is the note from the author of NEAT about identifying loops:

Note that checking networks for loops in general is not necessary and therefore I stopped writing this function

bool Network::integrity()

Morpheus

You can use a feed-forward-style network, in which every node (except the output nodes) sends its output only to nodes higher up in the ordering. This eliminates connection loops; see the sketch below.
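
For example, if each node carries an integer order (sensors lowest, outputs highest), the add-connection mutation can simply reject any candidate edge that does not point upward. A sketch in Python; the order field and the helper name are illustrative, not part of NEAT:

    def can_connect(src, dst):
        # Every edge must point 'upward', from a lower order to a strictly
        # higher one, so no sequence of edges can ever return to its start.
        return src.order < dst.order

The mutation then discards any candidate pair for which can_connect returns False, and the network stays acyclic by construction.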

Terry T.

This is what I do:

During evaluation, I add visited counters to nodes and keep the paths.

If we go [1i-5h-7h-4o-5h], 5h is visited twice, so

I detect the loop [5h-7h-4o-5h] and mark the [4o-5h] connection as a loop.

Then I let it continue evaluating.

[1i-5h-7h-4o-5h-7h-4o] and 4o can't go to 5h again because it was marked as a loop.

This way you don't avoid or ignore loops; you add them to your evaluation as a kind of memory.
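
Here is a rough Python sketch of the idea; the names are illustrative, not my production code. It pulls values backward from each output, tracking the current path and marking the first edge that would re-enter it. incoming[n] holds the (source, weight) pairs feeding node n, and previous carries the outputs of the last evaluation, so a marked loop edge reads "old" values as memory:

    import math

    def evaluate(node, incoming, values, previous, path, recurrent):
        if node in values:            # each node is evaluated only once per pass
            return values[node]
        path.append(node)             # keep the current path, e.g. [1i, 5h, 7h, 4o]
        total = 0.0
        for src, w in incoming.get(node, []):
            if src in path or (src, node) in recurrent:
                # Visiting src again would close a loop such as [5h-7h-4o-5h]:
                # mark the edge and feed in the last pass's value as memory.
                recurrent.add((src, node))
                total += w * previous.get(src, 0.0)
            else:
                total += w * evaluate(src, incoming, values, previous, path, recurrent)
        path.pop()
        values[node] = 1.0 / (1.0 + math.exp(-total))
        return values[node]

Each tick, seed values with the sensor readings, call evaluate for every output node, and then keep values around as previous for the next tick. Self-recurrent connections fall out of the same check, since a node is on its own path while its inputs are being summed.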

This is consistent with the Stanley 2002 paper.

Introduction, paragraph 1:

In addition, memory is easily represented through recurrent connections in neural networks, making NE a natural choice for learning non-Markovian tasks (Gomez and Miikkulainen, 1999, 2002).

He references recurrent connections being used as "memory".

Page 122, Figure 8 caption:

A NEAT solution to the DPNV problem. This clever solution works by taking
the derivative of the difference in pole angles. Using the recurrent connection to itself,
the single hidden node determines whether the poles are falling away or towards each
other.

So this recurrent connection should not be avoided; in fact, it is required for the given solution.

This is why I evaluate them once.

nurettin