Suppose I have a continuous stream of events - say the price of a stock.
If I have a trained recurrent network, then at any point in time I have both the predicted value and the actual value.
It seems intuitive that I should be able to use the incoming actual values to keep training the model on an ongoing basis.
However, all the theory I have read strictly separates the training phase from the predictive phase: during prediction the weights are held constant. Is there a reason for that?
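To make the question concrete, here is a minimal sketch of the kind of ongoing update I have in mind: a toy scalar "recurrent" predictor whose weights take one truncated gradient step after every new observation, instead of being frozen after training. (This is an illustrative example I wrote, with synthetic prices and made-up weight names, not code from any particular library.)

```python
import numpy as np

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))  # synthetic price stream
x = np.diff(prices)  # predict increments so values stay small and stable

w_x, w_h, w_o = 0.1, 0.1, 0.1  # input, recurrent, output weights
h = 0.0                        # recurrent state
lr = 0.01                      # step size for the online updates

sq_errs = []
for t in range(len(x) - 1):
    h = np.tanh(w_x * x[t] + w_h * h)  # update recurrent state
    pred = w_o * h                     # predict the next increment
    err = pred - x[t + 1]              # actual value arrives, compare
    sq_errs.append(err ** 2)
    # One truncated online gradient step (no backprop through time):
    # the weights keep moving during what the textbook setup would
    # call the "predictive phase".
    w_o -= lr * err * h
    dh = err * w_o * (1 - h ** 2)
    w_x -= lr * dh * x[t]
    w_h -= lr * dh * h  # current h used as a stand-in for the previous state
```

In other words: the loop never leaves "training mode", and each actual value is used for a weight update the moment it arrives. Why is this kind of setup generally avoided in the material I've read?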