I am currently trying to train a neural network to predict certain input parameters for a linear program based on an input dataset of system measurements related to these parameters. The workflow is something like this:

System measurements --> Neural Network --> Input parameters --> Linear program --> output.

Loss = (output - measured output)^2

Where the loss is the MSE between the LP output and the measured system output (so for each system measurement, there is a corresponding measured system output value). The aim of this workflow is to learn the (unknown) relationship between the system measurements and the input parameters, based on the input parameters' ability to predict the system output.

As you can see, there is a problem in my workflow: the linear program is not differentiable, so gradients from the loss cannot be propagated back to the network, which makes it impossible to learn the optimal relationship between the system measurements and the input parameters in a straightforward way.
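For concreteness, here is a minimal PyTorch sketch of the workflow. The dimensions, the constraint set (`A`, `b`), and the scalar summary of the LP solution are all placeholders, not my actual problem data; the point is only to show where the gradient path breaks (the detach/NumPy round trip around SciPy's `linprog`):

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import linprog

# Hypothetical dimensions and LP data -- placeholders only.
n_meas, n_params = 4, 3
A = np.array([[1.0, 1.0, 1.0]])   # constraints A z <= b
b = np.array([1.0])

# Network mapping a system measurement to the LP's cost vector c.
net = nn.Sequential(nn.Linear(n_meas, 16), nn.ReLU(), nn.Linear(16, n_params))

def lp_output(c):
    """Solve min_z c^T z  s.t.  A z <= b, z >= 0, and return a scalar output."""
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * n_params)
    return res.x.sum()            # some scalar summary of the LP solution

x = torch.randn(n_meas)           # one system measurement
y_measured = torch.tensor(0.7)    # its measured system output

c = net(x)                                            # predicted input parameters
output = torch.tensor(lp_output(c.detach().numpy()))  # LP solve leaves the autograd graph
loss = (output - y_measured) ** 2
# loss.backward() fails here: the detach/NumPy round trip through the LP
# severs the gradient path from the loss back into `net`.
```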

One option I have would be to replace the linear program with a neural-network surrogate, but I think this is not the best way to proceed, as I would rather stick to an interpretable linear model.

Is there any ideal way to approach such a theoretical challenge?

Many thanks in advance.
