For questions related to adversarial machine learning, the branch of machine learning that studies adversarial examples: malicious inputs designed to fool machine learning models.
Questions tagged [adversarial-ml]
36 questions
88 votes · 9 answers
How is it possible that deep neural networks are so easily fooled?
The following page/study demonstrates that deep neural networks are easily fooled, giving high-confidence predictions for unrecognisable images, e.g.
How is this possible? Can you please explain, ideally in plain English?
kenorb · 10,525
31 votes · 7 answers
Is artificial intelligence vulnerable to hacking?
The paper The Limitations of Deep Learning in Adversarial Settings explores how neural networks might be corrupted by an attacker who can manipulate the data set that the neural network trains with. The authors experiment with a neural network meant…
Surya Sg · 495
29 votes · 8 answers
Is there any research on the development of attacks against artificial intelligence systems?
For example, is there a way to generate a letter "A" which every human being in this world can recognize, but, if it is shown to the state-of-the-art…
Der Fänger im Roggen · 433
11 votes · 2 answers
What tools are used to deal with the adversarial examples problem?
The problem of adversarial examples is known to be critical for neural networks. For example, an image classifier can be manipulated by additively superimposing onto each of many training examples a different low-amplitude image that looks like noise…
Ilya Palachev · 299
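The additive, noise-like perturbation described in the question above is essentially what the fast gradient sign method (FGSM) produces. Below is a minimal, hedged sketch in PyTorch; the pretrained ResNet-18, the epsilon value, and the random input are placeholders rather than any paper's setup, and a real attack would use a properly normalised image.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

# Placeholder victim classifier (any differentiable image model would do);
# the weights API requires torchvision 0.13 or newer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm_perturb(image, label, epsilon=0.03):
    """Add a low-amplitude, noise-like perturbation (FGSM) to `image`."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step epsilon in the per-pixel direction that increases the loss.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Toy usage with a random tensor standing in for a real photo.
x = torch.rand(1, 3, 224, 224)
y = model(x).argmax(dim=1)
x_adv = fgsm_perturb(x, y)
print(model(x).argmax(dim=1).item(), "->", model(x_adv).argmax(dim=1).item())
```

The perturbation is bounded by epsilon per pixel, so to a human the image looks essentially unchanged, yet the model's prediction can flip.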
6 votes · 1 answer
What is the relationship between robustness and adversarial machine learning?
I have been reading a lot of articles on adversarial machine learning and there are mentions of "best practices for robust machine learning".
A specific example of this would be when there are references to "loss of efficient robust estimation in…
boomselector · 135
5 votes · 3 answers
What is an adversarial attack?
I'm reading this really interesting article CycleGAN, a Master of Steganography. I understand everything up until this paragraph:
we may view the CycleGAN training procedure as continually mounting an adversarial attack on $G$, by optimizing a…
Cyclist · 51
5 votes · 2 answers
Can artificial intelligence applications be hacked?
Can artificial intelligence (or machine learning) applications or agents be hacked, given that they are software applications, or are all AI applications secure?
ME. · 115
5 votes · 1 answer
Can CNNs be made robust to tricks where small changes cause misclassification?
A while ago I read that you can make subtle changes to an image that will ensure a good CNN will horribly misclassify the image. I believe the changes must exploit details of the CNN that will be used for classification. So we can trick a good CNN…
Ted Ersek · 153
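One standard (though not bulletproof) way to harden a CNN against such small perturbations is adversarial training: generate perturbed inputs on the fly and fit the model on them. A minimal, hedged sketch in PyTorch follows; the toy CNN, the random data, FGSM as the perturbation, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy CNN and optimiser; placeholders, not a published architecture.
cnn = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Flatten(), nn.Linear(8 * 28 * 28, 10))
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)

def fgsm(x, y, epsilon=0.1):
    """Craft the 'subtle changes' as FGSM perturbations against the current CNN."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(cnn(x), y).backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(x, y):
    """Train on adversarially perturbed inputs so the CNN learns to resist them."""
    x_adv = fgsm(x, y)
    opt.zero_grad()  # clears the stale gradients accumulated while crafting x_adv
    loss = F.cross_entropy(cnn(x_adv), y)
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random batches standing in for a real dataset (e.g. MNIST).
for _ in range(3):
    images = torch.rand(32, 1, 28, 28)
    labels = torch.randint(0, 10, (32,))
    print(adversarial_training_step(images, labels))
```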
5 votes · 1 answer
Isn't deep fake detection bound to fail?
Deep fakes are a growing concern: the ability to credibly alter a video may have great (negative) impacts on our society. It is so much of a concern that the biggest tech companies launched a specific challenge:…
Lucas Morin · 262
4 votes · 1 answer
What are causative and exploratory attacks in Adversarial Machine Learning?
I've been researching Adversarial Machine Learning and I know that causative attacks are when an attacker manipulates training data. An exploratory attack is when the attacker wants to find out about the machine learning model. However, there is not…
boomselector · 135
3 votes · 0 answers
Why do adversarial attacks work on CNNs if they classify images as humans do?
A common illustration of how CNNs work is as follows: https://www.researchgate.net/figure/Learned-features-from-a-Convolutional-Neural-Network_fig1_319253577. It seems to suggest that a CNN in particular classifies images in a similar manner as human…
Sam · 205
3 votes · 1 answer
How do I poison an SVM with manifold regularization?
I'm working on Adversarial Machine Learning and have read multiple papers on this topic, some of which are listed below:
Poisoning Attacks on SVMs: https://arxiv.org/pdf/1206.6389.pdf
Adversarial Label Flips on Support Vector…
boomselector · 135
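The papers referenced above derive poisoning points and label flips by optimisation, which takes more machinery than fits here. As a hedged illustration of the basic effect, the scikit-learn toy below flips a random fraction of training labels and compares test accuracy; the dataset, kernel, and flip rate are arbitrary choices, not the papers' setup.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_moons(n_samples=600, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# SVM trained on clean labels.
clean = SVC(kernel="rbf", gamma=2.0).fit(X_tr, y_tr)

# Adversarial label flips: the attacker flips the labels of a fraction of the
# training points (chosen at random here; the cited papers choose them by
# optimisation to do maximum damage).
flip_rate = 0.2
flip_idx = rng.choice(len(y_tr), size=int(flip_rate * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

poisoned = SVC(kernel="rbf", gamma=2.0).fit(X_tr, y_poisoned)

print("clean test accuracy:   ", clean.score(X_te, y_te))
print("poisoned test accuracy:", poisoned.score(X_te, y_te))
```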
2 votes · 1 answer
Why do adversarial attacks transfer well?
I have read (*) that a common technique to attack a black-box AI system based on a neural network is to use it to train a surrogate model to make the same classifications as the black-box one.
Once this is done, one can look for adversarial examples…
Weier · 131
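The surrogate-based transfer attack described above can be sketched roughly as follows. This is a hedged toy in PyTorch: the `black_box` model, the surrogate architecture, the random query data, and FGSM as the attack are all illustrative assumptions, not the setup of the paper marked (*).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder black box: in a real attack only its predicted labels are visible.
black_box = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# Surrogate model chosen by the attacker (the architecture is a guess).
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(),
                          nn.Linear(64, 10))

def train_surrogate(images, epochs=20):
    """Fit the surrogate to imitate the labels the black box assigns."""
    with torch.no_grad():
        labels = black_box(images).argmax(dim=1)  # only labels are queried
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        F.cross_entropy(surrogate(images), labels).backward()
        opt.step()

def transfer_attack(image, epsilon=0.1):
    """Craft an FGSM example on the surrogate, then send it to the black box."""
    image = image.clone().requires_grad_(True)
    label = surrogate(image).argmax(dim=1)
    F.cross_entropy(surrogate(image), label).backward()
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# Toy usage: random tensors stand in for real query images.
queries = torch.rand(256, 1, 28, 28)
train_surrogate(queries)
x = torch.rand(1, 1, 28, 28)
x_adv = transfer_attack(x)
print(black_box(x).argmax(dim=1).item(), "->", black_box(x_adv).argmax(dim=1).item())
```

If the surrogate approximates the black box's decision boundary well, a sizeable fraction of the surrogate's adversarial examples also fool the black box, which is what "transfer" refers to.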
2 votes · 2 answers
Why aren't adversarial images mainstream for CAPTCHAs?
In order to check whether the visitor of a page is a human and not an AI, many web pages and applications have a checking procedure known as a CAPTCHA. These tasks are intended to be simple for people but unsolvable for machines.
However, often…
spiridon_the_sun_rotator · 2,852
2 votes · 1 answer
Is there a taxonomy of adversarial attacks?
I am a medical doctor working on methodological aspects of health-oriented ML. Reproducibility, replicability, and generalisability are critical in this area. Among many questions, some are raised by adversarial attacks (AA).
My question is to be…
user2478159 · 23