There is a YouTube video about generating images with features designed to "poison" generative AI models trained on those images.
This technique could potentially be used by anyone worried about their content being harvested by AIs. On any website distributing such content, one could place "honeypot" items (a scraper trap) aimed at distorting the scraped creative content. These could be images on an art website, as shown in the video, discordant sound on a music distribution site, badly written text on an author's website, or fake news on a news website. It is at least conceivable that this would make scraping the website for training data counter-productive, and so protect the content. A rough sketch of how such a trap might be served follows below.
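To make the idea concrete, here is a minimal sketch (Python standard library only) of the serving side of such a trap: ordinary visitors get the real file, while clients whose User-Agent looks like a known AI training crawler get a deliberately distorted decoy. The user-agent markers and file names are assumptions for illustration, and producing the poisoned decoy itself (e.g. with a tool like the one in the video) happens offline; this code only decides which file to return.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative/assumed markers for AI training crawlers; a real deployment
# would maintain its own list.
AI_CRAWLER_MARKERS = ("GPTBot", "CCBot", "Google-Extended")

class TrapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        # Serve the decoy when the request looks like an AI training crawler,
        # otherwise serve the genuine artwork. File names are hypothetical.
        path = "artwork_decoy.png" if any(m in ua for m in AI_CRAWLER_MARKERS) else "artwork.png"
        try:
            with open(path, "rb") as f:
                body = f.read()
        except FileNotFoundError:
            self.send_error(404)
            return
        self.send_response(200)
        self.send_header("Content-Type", "image/png")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), TrapHandler).serve_forever()
```

The same pattern would apply to audio, text, or any other media type; only the decoy files and content types change.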
Would there be any legal issues with doing this? One would arguably be intentionally causing "damage" to a computer system, which suggests considering the Computer Misuse Act 1990 in the UK and the Computer Fraud and Abuse Act in the US. For a website accessible globally, one might have to consider all jurisdictions.