In some sense, Go returned to this category in 2023 with the discovery of an inherent weakness in superhuman Go-playing AIs. There is a particular pattern that the AI consistently misinterprets, so forcing that kind of position leads it into blunders that are obvious even to a Go novice. Figure J.2 in the paper shows an example. The idea is to create a dead group and let the AI enclose it without capturing it (which, on its own, is perfectly correct behaviour). The AI then mistakenly treats the enclosing group as unconditionally alive, even though that would require actually capturing the dead group, and so it lets the enclosing group be captured. The loss comes down to a misread capture race, and the particular blunder move is glaring even to a beginner.
Importantly, although the strategy was discovered by training an adversarial AI, it can be played by an unaided human. Also importantly, this is not a trivially fixable bug. The main author of KataGo confirms this here, also pointing out that KataGo, a commonly used superhuman Go AI, sometimes misreads specific positions arising in regular gameplay when they fall outside the space explored during self-play. This means that superhuman Go AIs require various ad hoc additions to their training. The original paper discovered two exploits, one that relied on a particular ruleset and the more legitimate one described above, and it is quite possible that further exploits would turn up if these were patched.
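For the curious, here is a minimal sketch of how one might probe an engine's reading of a hand-fed position over GTP (the Go Text Protocol that engines like KataGo speak): replay the moves of a published adversarial game, then ask the engine what it would play next. The katago invocation, the model and config paths, and the placeholder moves are all assumptions to adapt to your own setup; only standard GTP commands are used.

```python
# Sketch: drive a locally installed Go engine over GTP to inspect its choice
# of move in a given position. Paths and moves below are placeholders.
import subprocess

KATAGO_CMD = [
    "katago", "gtp",              # assumes a katago binary on PATH
    "-model", "model.bin.gz",     # assumed model file
    "-config", "gtp_example.cfg", # assumed config file
]

def gtp(proc, command):
    """Send one GTP command and return the engine's response text."""
    proc.stdin.write(command + "\n")
    proc.stdin.flush()
    lines = []
    while True:
        line = proc.stdout.readline()
        if line.strip() == "" and lines:  # a blank line ends a GTP response
            break
        lines.append(line.rstrip())
    return "\n".join(lines)

proc = subprocess.Popen(
    KATAGO_CMD, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)

gtp(proc, "boardsize 19")
gtp(proc, "komi 7.5")
# Replay the moves leading up to the critical position; placeholder moves
# here, a real probe would use a published adversarial game record.
for colour, move in [("B", "Q16"), ("W", "D4"), ("B", "Q4")]:
    gtp(proc, f"play {colour} {move}")
print(gtp(proc, "genmove W"))  # what would the engine play from here?
gtp(proc, "quit")
```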
Since there aren't big human-versus-AI events in Go these days, it's hard to evaluate how (un)fixable this problem is, but professional Go players take it seriously. Here is a dan-level professional playing the strategy out and reflecting on what it means for the game as a piece of culture.
As an aside, I find this fascinating; it's what got me into Go, through this essay (which I also recommend, even if you don't know Go at all). I learned the game to understand what this kind of exploit means, and I think I've figured it out. It shows one component of human intelligence that we have not yet replicated in AI: humans have a kind of "constant vigilance". A human encountering this strategy would think "this is weird, my enemy is up to something". They would spot where the trap seemed to lie and play the obvious safeguarding move to block it, even if they had never seen anything like it before. I have no idea how humans do this. I guess it shows that intelligence is much more of an ill-defined, ad-hoc, improvisational kind of thing than generally assumed.