
I had an idea to use this algorithm to simulate real-life game scenarios, where a player could train and retrain on their mistakes and be ready for when that situation happens again in real life. This simulation would calculate every possible scenario and give you the best one. This would work much the same way as the search in Stockfish for chess: it explores the possible continuations and picks the best move.

Now my question is: would that be possible? Is there another algorithm/AI/software that could make this simulation work?

nbro

2 Answers


It also depends on the type of game.

The problem with Go is that a board that looks really good can turn into a disaster on the next move.

Games that have easy evaluations and experience only small incremental changes with each move are much better suited for alpha-beta pruning.

For instance, Checkers could be trivially evaluated as the difference in the number of pieces each player has. Most moves change that value by 0 or 1 points, double jumps are much less common, and larger swings are rare. So stopping the search one level earlier is usually not a significant problem.
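To make this concrete, here is a minimal alpha-beta sketch in Python. The game tree is a hypothetical toy structure (nested lists whose leaves are static scores, e.g. the piece-difference evaluation described above); a real Checkers engine would generate moves and evaluate positions instead.

```python
def alphabeta(node, alpha, beta, maximizing):
    """Alpha-beta search over a toy tree: leaves are ints (static scores),
    internal nodes are lists of children. Returns the minimax value."""
    if isinstance(node, int):
        # Leaf: return the static evaluation, e.g. my_pieces - their_pieces.
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # beta cutoff: the minimizing opponent avoids this branch
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break  # alpha cutoff: the maximizer already has a better option
        return value

# Toy tree whose leaves are material-style scores (piece difference).
tree = [[3, 5], [2, [9, 1]], [0, -1]]
print(alphabeta(tree, float('-inf'), float('inf'), True))
```

Note how the second and third branches are cut off early: once the maximizer knows it can guarantee 3 from the first branch, any branch that already offers the minimizer something below 3 need not be explored further.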

Ray Butterworth

You can do that, but you will probably hit the same wall that alpha-beta pruning hit with Go and other large-scale games: it does not scale well.

Depending on your definition of "when that situation happens again in real life", you might need very large and deep trees to get a good estimate. Moreover, for this to work well you need a good bounding (evaluation) function to prune with, which is not trivial to obtain.
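The scaling problem is easy to see with a back-of-the-envelope calculation: a game tree with a uniform branching factor b searched to depth d has roughly b**d leaf nodes. The figures below use commonly quoted average branching factors (about 35 for chess, about 250 for Go) purely as illustrative assumptions.

```python
def tree_size(branching: int, depth: int) -> int:
    """Approximate number of leaf positions in a full game tree."""
    return branching ** depth

# Even a shallow 6-ply search explodes with the branching factor:
print(tree_size(35, 6))   # chess-like branching: ~1.8 billion positions
print(tree_size(250, 6))  # Go-like branching: ~2.4e14 positions
```

Pruning reduces these counts substantially (in the best case alpha-beta searches roughly b**(d/2) nodes), but the growth is still exponential, which is why open-ended "real-life" scenarios with huge branching factors are so hard to search exhaustively.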

As Neil notes, you would probably also need to develop a very sophisticated and complex model to simulate your scenario.

Alberto