
I'm working on a problem that involves an RL agent with very large states. These states consist of several pieces of information about the agent. The states are not images, so techniques like convolutional neural networks will not work here.

Are there any general techniques to reduce/compress the size of the states for reinforcement learning algorithms?

nbro
Saeid Ghafouri

1 Answer


Compression will generally be lossy: some detailed features of the state will be lost and excluded from the computation.

A common technique is a max-pooling function or layer, applied before feeding the state to the policy network (assuming the RL here is deep RL).

Max-pooling is very lossy. You could instead use a classic lossless compression algorithm such as Zip or RAR, but doing so inside a model pipeline is awkward and extremely slow.
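To see why lossless compression is awkward here, note that it produces an opaque byte string, not a tensor, so you would have to decompress before every forward pass anyway. A minimal sketch using Python's standard `zlib` (a Zip-style algorithm; the state shape and dtype are illustrative assumptions):

```python
import zlib
import numpy as np

# An illustrative low-entropy state vector (1000 small integers).
state = np.random.default_rng(0).integers(0, 4, size=1000).astype(np.int8)

raw = state.tobytes()
packed = zlib.compress(raw, level=9)  # lossless, Zip-style compression
print(len(raw), "->", len(packed), "bytes")

# The round-trip is exact, but the compressed form is just bytes:
# a neural network cannot consume it without decompressing first.
restored = np.frombuffer(zlib.decompress(packed), dtype=np.int8)
assert np.array_equal(state, restored)
```

This preserves every bit of the state, but it only saves memory/bandwidth; it does not reduce the dimensionality the network has to process.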

If lossy compression is acceptable, the common choices are max-pooling (which yields a high-contrast summary) and average-pooling (which yields a blurred summary).
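Since the states here are not images, pooling would be applied along the feature vector rather than over a 2-D grid. A minimal 1-D sketch (the window size and end-padding rule are assumptions, not part of the question):

```python
import numpy as np

def pool_1d(state, window, mode="max"):
    """Downsample a 1-D state vector by pooling non-overlapping windows.

    Pads the tail with the last value so the length divides evenly.
    mode="max" keeps the strongest feature per window (high contrast);
    mode="mean" averages each window (blurred but smoother).
    """
    pad = (-len(state)) % window
    padded = np.concatenate([state, np.full(pad, state[-1])])
    blocks = padded.reshape(-1, window)
    return blocks.max(axis=1) if mode == "max" else blocks.mean(axis=1)

state = np.array([0.1, 0.9, 0.2, 0.4, 0.7, 0.3, 0.5, 0.8])
print(pool_1d(state, 2, "max"))   # -> [0.9 0.4 0.7 0.8]
print(pool_1d(state, 2, "mean"))  # roughly [0.5 0.3 0.5 0.65]
```

Note that pooling only makes sense if adjacent entries of the state vector are related; for heterogeneous features, a learned bottleneck (e.g. an autoencoder) is usually a better fit.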

To keep the data nearly intact, TensorFlow Compression can compress tensors while "only sacrificing a tiny fraction of model performance. It can compress any floating point tensor to a much smaller sequence of bits."
See: https://github.com/tensorflow/compression

Dan D