
In some research papers, I have seen that, for training autoencoders, instead of feeding only the non-anomalous (normal) input images, the authors add some anomalies to the normal input images and train the autoencoder on these corrupted images.

Then, during testing, they pass an anomalous image through the network, get the reconstructed output, take the pixel-wise difference between the input and the reconstruction, and, based on a threshold, decide whether the image is anomalous or not.
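If I understand it correctly, the setup is roughly the following (a minimal PyTorch sketch of a denoising-style autoencoder; the architecture, the Gaussian-noise corruption, the dummy data, and the threshold value are all illustrative assumptions on my part, not taken from any particular paper):

```python
import torch
import torch.nn as nn

# Small convolutional autoencoder (sizes are illustrative assumptions).
class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def add_synthetic_anomalies(x, noise_std=0.2):
    # Corrupt the clean image (here with Gaussian noise; papers may instead
    # paste patches, cut-outs, etc.). The training target stays the clean image.
    return torch.clamp(x + noise_std * torch.randn_like(x), 0.0, 1.0)

model = ConvAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

normal_images = torch.rand(64, 1, 32, 32)   # stand-in for a batch of normal images

# Training: input = corrupted normal image, target = the original clean image.
for epoch in range(5):
    corrupted = add_synthetic_anomalies(normal_images)
    recon = model(corrupted)
    loss = criterion(recon, normal_images)   # learn to undo the corruption
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Testing: reconstruct, take the pixel-wise difference, threshold the error.
threshold = 0.01                             # would be chosen on a validation set
test_image = torch.rand(1, 1, 32, 32)        # stand-in for a possibly anomalous image
with torch.no_grad():
    recon = model(test_image)
    pixel_error = (test_image - recon) ** 2  # per-pixel difference map
    is_anomaly = pixel_error.mean().item() > threshold
```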

If we add noise or anomalies to the training inputs, are we improving the model's ability to generalize and reconstruct the original normal input?

How does this help in detecting anomalies?

My understanding is that we should train using only normal data without adding any noise, then feed an anomalous image at test time and compare its reconstruction loss against a threshold.
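This is roughly what I mean, as a minimal self-contained sketch (again, the model, the dummy data, and the mean + 3·std threshold rule are illustrative assumptions of mine):

```python
import torch
import torch.nn as nn

# Tiny fully connected autoencoder for 28x28 images (illustrative only).
autoencoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64), nn.ReLU(),
    nn.Linear(64, 28 * 28), nn.Sigmoid(),
    nn.Unflatten(1, (1, 28, 28)),
)
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

normal_images = torch.rand(256, 1, 28, 28)   # stand-in for clean normal data

# Training: input and target are both the clean normal image, no added noise.
for epoch in range(5):
    recon = autoencoder(normal_images)
    loss = ((recon - normal_images) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Threshold derived from reconstruction errors on held-out normal images,
# e.g. mean + 3 standard deviations.
with torch.no_grad():
    errors = ((autoencoder(normal_images) - normal_images) ** 2).mean(dim=(1, 2, 3))
threshold = errors.mean() + 3 * errors.std()

def is_anomaly(image):                       # image: (1, 1, 28, 28) tensor
    with torch.no_grad():
        err = ((autoencoder(image) - image) ** 2).mean()
    return bool(err > threshold)
```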
