
Why couldn't you take the image an AI is given, apply several different random noise filters to it, and use the most common ("democratic") response as the AI's output? As it stands, adversarial attacks require the noise they add to the image to be very specific. If you applied light noise that disrupted the attack, while using multiple noisy versions so you don't disrupt your own model, couldn't that counter the attack? And since the noise is random at runtime, the attacker would have no way of working around it.
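
Something like this sketch, where `model`, `sigma`, and `n_copies` are placeholder names I've made up for illustration:

```python
import torch

def noisy_majority_predict(model, x, n_copies=10, sigma=0.1):
    """Classify a single image x (C, H, W) by adding independent Gaussian
    noise to several copies and taking the majority-vote prediction."""
    model.eval()
    with torch.no_grad():
        copies = x.unsqueeze(0).repeat(n_copies, 1, 1, 1)   # (n_copies, C, H, W)
        noisy = copies + sigma * torch.randn_like(copies)    # fresh noise per copy
        preds = model(noisy).argmax(dim=1)                   # one class per copy
        return preds.mode().values.item()                    # most common class wins
```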

Ethan

1 Answer


Adding noise to the signal after training:

It might disrupt attacks that are unaware of the noise, but there are no guarantees. Hopefully the adversary doesn't wise up to the noise and build their attack with the noise in mind; if they do, you enter a cat-and-mouse game. Noise can, of course, also reduce the accuracy of some models.
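
For instance, an attacker who knows the defense adds random noise can simply average gradients over many draws of that noise while crafting the perturbation, so the attack works "on average" despite the randomness. A rough sketch under assumed placeholder names (`model`, step sizes, and sample counts are illustrative, not any specific library's API):

```python
import torch
import torch.nn.functional as F

def noise_aware_attack(model, x, label, sigma=0.1, steps=40,
                       step_size=0.01, eps=0.03, n_samples=8):
    """Craft a perturbation that survives the defender's random noise by
    averaging the loss over several noise draws at each attack step."""
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = 0.0
        for _ in range(n_samples):
            # Evaluate the model on a noisy copy, as the defense would.
            noisy = x_adv + sigma * torch.randn_like(x_adv)
            loss = loss + F.cross_entropy(model(noisy.unsqueeze(0)),
                                          label.unsqueeze(0))
        loss = loss / n_samples
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()        # ascend the averaged loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)       # stay within the budget
            x_adv = x_adv.clamp(0.0, 1.0)                  # keep a valid image
    return x_adv.detach()
```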

Adding noise to the signal during training:

Accuracy in the face of noise is a harder problem than accuracy without noise, and harder problems generally require bigger models. Depending on the platform, there may not be budget for a bigger model (a car ECU, a phone, etc.).
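
A minimal sketch of what training with the noise in mind looks like, assuming a standard PyTorch `model`, `loader`, and `optimizer` (all placeholder names), with `sigma` matching the noise used at inference:

```python
import torch
import torch.nn.functional as F

def train_with_noise(model, loader, optimizer, sigma=0.1, epochs=5):
    """Gaussian-noise augmentation: the model sees noisy inputs during
    training so accuracy holds up when the same noise is applied later."""
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            noisy = images + sigma * torch.randn_like(images)
            noisy = noisy.clamp(0.0, 1.0)          # keep a valid pixel range
            loss = F.cross_entropy(model(noisy), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```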

And of course it's hard to tell the difference between generalizing to all noise and generalizing to specific noise. If the model doesn't generalize to all noise, there may still be some noise pattern that causes undesirable behavior. Welcome back to the cat-and-mouse game discussed above.

foreverska