
Suppose I have a continuous stream of events, say the price of a stock.

If I have a trained recurrent network, then at any point in time I have both the predicted value and the actual value.

It seems intuitive that I should be able to use the actual value to keep training the model on an ongoing basis.

However, all the theory I have read strictly separates the training phase from the prediction phase: during prediction, the weights are held constant. Is there any reason why that is?
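For concreteness, here is a rough sketch of the loop I have in mind, assuming a small PyTorch LSTM (all names here are illustrative, not from any particular codebase): predict as usual, and once the actual value arrives, take one gradient step on the resulting error.

```python
# Illustrative sketch only: one online gradient step per new observation.
# Assumes a (pre-trained) PyTorch LSTM that maps a window of recent
# prices to the next price; `model`, `head`, and `window` are made up.
import torch

model = torch.nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
head = torch.nn.Linear(16, 1)
params = list(model.parameters()) + list(head.parameters())
optimizer = torch.optim.SGD(params, lr=1e-4)  # small step to limit drift
loss_fn = torch.nn.MSELoss()

def predict(window):
    # window: tensor of shape (1, seq_len, 1) holding recent prices
    output, _ = model(window)
    return head(output[:, -1, :])  # predicted next price, shape (1, 1)

def online_step(window, actual):
    # Predict first, then update the weights once the truth is known.
    prediction = predict(window)
    loss = loss_fn(prediction, actual.view(1, 1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return prediction.detach(), loss.item()
```

That is exactly the kind of ongoing update I am asking about, so I am wondering why the standard treatment freezes the weights instead.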


1 Answer


Yes, there's some work on this topic.

You can search for terms like

  • continual learning
  • incremental learning
  • online learning

for time-series data, or for whatever domain you need.

For an introduction to continual learning with neural networks, I recommend the paper Continual Lifelong Learning with Neural Networks: A Review (2019). If you're looking for something more specific, you can search for a more specific paper or tool, such as Continual Learning Long Short Term Memory (which I have not read yet), or the river and avalanche Python libraries, which might also be useful.
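To illustrate the online-learning setting concretely, here is a minimal sketch using river. The feature name and prices are made up, and river's linear model is a stand-in for your recurrent network, but the loop is exactly the one you describe: predict each point before seeing its label, then update the model on the true value.

```python
# Minimal online-learning sketch with river: predict each point before
# seeing its label, then update on the true value (prequential evaluation).
from river import linear_model, metrics, preprocessing

model = preprocessing.StandardScaler() | linear_model.LinearRegression()
mae = metrics.MAE()

# Made-up stream of (features, next price) pairs.
stream = [
    ({"lag_1": 100.0}, 100.5),
    ({"lag_1": 100.5}, 101.2),
    ({"lag_1": 101.2}, 100.9),
]

for x, y in stream:
    y_pred = model.predict_one(x)  # predict with the current weights
    mae.update(y, y_pred)          # score before learning from y
    model.learn_one(x, y)          # then update the model online

print(mae)  # running mean absolute error over the stream
```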
