
I already know deep RL, but to learn it deeply I want to know why we need 2 networks in deep RL. What does the target network do? I know there is a lot of mathematics behind this, but I want to understand deep Q-learning deeply, because I am about to make some changes to the deep Q-learning algorithm (i.e. invent a new one). Can you help me understand intuitively what happens during the execution of a deep Q-learning algorithm?


1 Answer


In the DQN presented in the original paper, the Q-network is trained by minimising the loss $\left(r_t + \gamma\max_aQ(s_{t+1},a;\theta^-) - Q(s_t,a_t; \theta)\right)^2$, where $\theta^-$ is an older copy of the parameters that is refreshed every $C$ updates; the Q-network evaluated with these frozen parameters is the target network.
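To make this concrete, here is a minimal PyTorch sketch of one such update. The architecture, the hyperparameters, and the names (`q_net`, `target_net`, `dqn_update`, `C`) are all hypothetical, chosen only to show where $\theta$ and $\theta^-$ appear:

```python
import copy
import torch
import torch.nn as nn

# Hypothetical Q-network; any architecture mapping states to per-action values works.
q_net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target_net = copy.deepcopy(q_net)  # theta^- starts as a copy of theta
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

C = 1000  # copy theta into theta^- every C updates

def dqn_update(step, s, a, r, s_next, done, gamma=0.99):
    """One DQN update on a batch; `done` is a 0/1 float tensor, `a` a long tensor."""
    # The target uses the frozen parameters theta^-; no gradient flows through it.
    with torch.no_grad():
        target = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
    # The prediction uses the online parameters theta.
    pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(pred, target)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if step % C == 0:  # periodic hard update of the target network
        target_net.load_state_dict(q_net.state_dict())
```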

If you didn't use this target network, i.e. if the loss were $\left(r_t + \gamma\max_aQ(s_{t+1},a;\theta) - Q(s_t,a_t; \theta)\right)^2$, then learning would become unstable, because the target, $r_t + \gamma\max_aQ(s_{t+1},a;\theta)$, and the prediction, $Q(s_t,a_t; \theta)$, are not independent: both depend on $\theta$, so every gradient step that moves the prediction also moves the target it is chasing.
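Reusing the hypothetical names from the sketch above, the unstable variant differs only in which parameters compute the bootstrap target:

```python
# Without a target network, the bootstrap target is computed from the same
# parameters theta that the gradient step is about to change, so the target
# shifts on every update:
with torch.no_grad():
    unstable_target = r + gamma * (1 - done) * q_net(s_next).max(dim=1).values
```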

A nice analogy I once saw is that it is akin to a dog chasing its own tail: it will never catch it, because the target is non-stationary. This non-stationarity is exactly what the dependence between the target and the prediction causes.
