9

Significant AI vs human board game matches (for example, Deep Blue vs Kasparov in chess and AlphaGo vs Lee Sedol in Go) have demonstrated that AI can challenge and defeat professional players.

Are there any well-known board games left where a human can still win against an AI? That is, judging by the final outcomes of authoritative, famous matches, is there still a board game whose world champion an AI cannot beat?

nbro
kenorb

4 Answers

8

Not all games (or even board games) are computationally algorithmic. Even the least skilled player is likely to trounce the hottest pattern-matching algorithm in a game of Pictionary (for example).

If you want to say that the movement of pieces upon successful completion of a task is only ancillary to the object of the game, then your answer will be largely self-selecting. A sufficiently sophisticated algorithm will brute force a computational problem better than human intuition… eventually.
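
To make that "brute force… eventually" concrete, here is a minimal Python sketch (my own toy example, not any engine's actual code): an exhaustive negamax search that solves a tiny subtraction game purely by enumerating the game tree, with no intuition anywhere.

```python
# Minimal sketch: exhaustive negamax over a tiny perfect-information game
# (a pile of stones; each turn removes 1 or 2; taking the last stone wins).
from functools import lru_cache

@lru_cache(maxsize=None)
def best_score(stones):
    """Score for the player to move: +1 = forced win, -1 = forced loss."""
    if stones == 0:
        return -1                      # the previous player took the last stone
    # Try every legal move; our score is the best of the negated replies.
    return max(-best_score(stones - take) for take in (1, 2) if take <= stones)

def best_move(stones):
    """Pick the move that leaves the opponent in the worst position."""
    moves = [take for take in (1, 2) if take <= stones]
    return max(moves, key=lambda take: -best_score(stones - take))

if __name__ == "__main__":
    for n in range(1, 10):
        outcome = "win" if best_score(n) == 1 else "loss"
        print(f"{n} stones: {outcome} for the player to move, best move: take {best_move(n)}")
```

On a toy like this the whole tree is enumerated instantly; chess and Go need pruning, evaluation functions and learned policies on top, but the "eventually" above is exactly this kind of exhaustive search.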

Robert Cartaino
6

For many years, the focus has been on games with perfect information. That is, in Chess and Go both of us are looking at the same board. In something like Poker, you have information that I don't have and I have information that you don't have, so for either of us to make sense of the other's actions we need to model what hidden information the other player holds, and also manage how we leak our own hidden information. (A poker bot whose hand strength can be trivially determined from its bets will be easier to beat than one whose bets don't give its hand away.)
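
To make the information-leakage point concrete, here is a toy Python sketch (purely illustrative, with made-up bet sizes; not a real poker bot): the first policy bets a deterministic, invertible function of hand strength, so an observer can read the hand straight off the bet; the second mixes its bets, so the same bet size is consistent with many different hands and an opponent is forced to reason about a whole distribution.

```python
# Toy sketch of information leakage through bet sizing (illustrative only).
# Hand strength is a number in [0, 1].
import random

def leaky_bet(strength):
    # Deterministic and invertible: bet = 100 * strength, so strength = bet / 100.
    return round(100 * strength)

def mixed_bet(strength, rng):
    # Randomised: strong and weak hands sometimes choose the same bet size,
    # so the bet alone no longer pins down the hand.
    if strength > 0.8:
        return rng.choice([20, 60, 100])   # occasionally slow-play a monster
    if strength < 0.3:
        return rng.choice([0, 20, 60])     # occasionally bluff big with trash
    return rng.choice([20, 60])

rng = random.Random(0)
for strength in (0.1, 0.5, 0.9):
    print(f"strength={strength:.1f}  leaky bet={leaky_bet(strength):3d}  "
          f"five mixed bets={[mixed_bet(strength, rng) for _ in range(5)]}")
```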

Current research is shifting toward games with imperfect information. DeepMind, for example, has said it might tackle StarCraft next.

I don't see too much difference between video games and board games, and there are several good reasons to switch to video games for games with imperfect information.

One is that for beating the best human to count as a major victory, there needs to be a pyramid of skill with that human at the top: it will be harder to unseat the top StarCraft champion than the top Warcraft champion, even though the bots might be comparably difficult to code, simply because humans have tried harder at StarCraft.

Another is that many games with imperfect information involve reading faces and concealing information, at which an AI would have an unnatural advantage; in multiplayer video games, players normally interact with each other through a server as an intermediary, so the competition is on more even terms.

Faizy
Matthew Gray
2

Artificially intelligent computer programs should be able to match or beat humans at every game that we play. This is because games follow rules that are scriptable, and an artificial intelligence can focus on one specific game and learn from its failures. The difference between humans and artificial intelligence is that artificial intelligence concentrates on one specific task, like learning to master Go, while our brain is dedicated to mastering multiple tasks, like... living. Even Arimaa, a game designed to be difficult for artificially intelligent systems, was beaten by a bot called Sharp: https://en.wikipedia.org/wiki/Arimaa.
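
To unpack "games follow rules that are scriptable": the complete rules of a board game reduce to a handful of functions, and that interface plus the outcomes of its own games is all a program needs in order to "learn from its failures". Below is a minimal Python sketch of the idea for tic-tac-toe (my own illustration, unrelated to Sharp or Arimaa).

```python
# Toy illustration: the complete rules of tic-tac-toe as three small functions,
# plus a random self-play game. Nothing about the game needs to be explained
# to the machine beyond this; a learner would add statistics over outcomes.
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def initial_state():
    return tuple("." * 9)                      # nine empty cells

def legal_moves(state):
    return [i for i, cell in enumerate(state) if cell == "."]

def apply_move(state, move, player):           # player is "X" or "O"
    board = list(state)
    board[move] = player
    return tuple(board)

def winner(state):
    for a, b, c in LINES:
        if state[a] != "." and state[a] == state[b] == state[c]:
            return state[a]
    return "draw" if "." not in state else None    # None = game still running

# Random self-play: the "failures" a learner would learn from.
rng = random.Random(0)
state, player = initial_state(), "X"
while winner(state) is None:
    state = apply_move(state, rng.choice(legal_moves(state)), player)
    player = "O" if player == "X" else "X"
print("random self-play result:", winner(state))
```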

0

In some sense, Go returned to this category in 2023 with the discovery of an inherent weakness in superhuman Go-playing AIs. There is a particular pattern that the AI consistently misinterprets, so forcing that kind of position leads the AI to play blunders that are clear even to a Go novice. Figure J.2 in the paper shows an example. The idea is to create a dead group and let the AI enclose it, but not capture it (which in itself is perfectly correct play). The AI then mistakenly considers the enclosing group unconditionally alive, even though for that it would have to actually capture the dead group. It then lets the enclosing group be captured. The loss comes down to a misread capture race, and the decisive blunder is obvious even to a Go novice.
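
For readers who don't play Go, the crucial distinction in that exploit is between a group being enclosed (surrounded with no way to live, but still sitting on the board) and being captured (actually removed because its last liberty has been filled). The toy Python sketch below (my own illustration, not code from the paper or from KataGo) counts a group's liberties to show that an enclosed dead group still occupies the board until someone plays the capturing move, which is exactly the step the exploited AI skips over in its reasoning.

```python
# Toy sketch: flood-fill a group of stones and count its liberties.
# An enclosed "dead" group still has stones and liberties on the board;
# it only disappears when its last liberty is actually filled (capture).

def group_and_liberties(board, start):
    """board maps (row, col) -> 'B', 'W' or '.'; start is a stone's coordinate."""
    colour = board[start]
    group, liberties, stack = set(), set(), [start]
    while stack:
        p = stack.pop()
        if p in group:
            continue
        group.add(p)
        r, c = p
        for q in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if q not in board:
                continue               # off the edge of the board
            if board[q] == ".":
                liberties.add(q)       # adjacent empty point = a liberty
            elif board[q] == colour:
                stack.append(q)        # same-colour neighbour joins the group
    return group, liberties

# A 5x5 corner: a one-eyed white group fully enclosed by black. It is dead,
# yet it stays on the board until black actually plays inside the eye.
board = {(r, c): "." for r in range(5) for c in range(5)}
for p in ((0, 1), (1, 0), (1, 1)):
    board[p] = "W"
for p in ((0, 2), (1, 2), (2, 0), (2, 1), (2, 2)):
    board[p] = "B"

group, libs = group_and_liberties(board, (0, 1))
print(f"{len(group)} white stones still on the board, {len(libs)} liberty left")
# -> 3 white stones still on the board, 1 liberty left
```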

Importantly, though the strategy was discovered by training an adversarial AI, it can be played by an unaided human. Also importantly, this is not a trivially fixable bug. The main author of KataGo confirms this here, pointing out also that KataGo, a commonly used superhuman Go AI, sometimes misreads specific positions that arise within regular gameplay when they fall outside the space explored in self-play. This means that superhuman Go AIs require various ad hoc additions to their training. The original paper discovered two exploits, one that relied on a particular ruleset and the more legitimate one described above, but it is possible that yet more exploits would be found if this one got patched.

Since there aren't big human-versus-AI events in Go these days, it's hard to evaluate how (un)fixable this problem is, but professional Go players take it seriously. Here is a dan-level professional playing the strategy out and reflecting on what it means for the game as a piece of culture.

As an aside, I find this fascinating; it's what got me into Go, through this essay (which I also recommend, even if you don't know Go at all). I learned the game to understand what this kind of exploit means, and I think I've figured it out. It shows one component of human intelligence that we have not replicated so far in AI: humans have a kind of "constant vigilance". A human encountering this strategy would think "this is weird, my enemy is up to something". They would spot where the trap seemed to lie, and play the obvious safeguarding move to block it, even if they never saw anything like it before. I have no idea how humans do this. I guess it shows that intelligence is much more of an ill-defined, ad-hoc, improvisational kind of thing than generally assumed.

Kotlopou