My sincere apologies if I am more asking for advice, rather than a solution to a specific problem. However, I am trying to survive in this new world as a software tester. I have been doing test automation for about ten years now and think it's vital for me to upgrade my skillset in order to stay relevant.
I have read some books on AI and LLMs recently and have been messing around with Llama on my local machine. A bulk of "under the hood" stuff is still going over my head, however, one area in this field intrigued me. Adversarial testing for AI/LLM models.
My question for the experts in the field: do you see this skill as "doable" by software testers? Or is it too highly specialized like cybersecurity/pen testing? I suppose a good comparison would be white box testing. Would I need to know the innerworkings of the models that I am looking at in order to implement any attacks? Or is it more like unit tests that only developers write- in this case, only the AI/ML engineers that can design attacks?
If it's something that one can learn, what would you recommend as a learning path? I have done some research and I do not see many university courses or other educational authorities teaching this specific subject as of yet. Thank you in advance.