There is a lot of introductory literature about artificial intelligence for time- and space-discrete games, like chess. A few fundamental methods are usually offered:
Minimax to depth $n$ with an evaluation function picks the move that leads to the position rated best by that evaluation function after $n$ further moves, assuming both sides play as well as the evaluation function can tell. There is a whole industry of building evaluation functions, but the principle is simple.
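To make the principle concrete, here is a minimal sketch of fixed-depth minimax. The game-specific functions `legal_moves`, `apply`, and `evaluate` are hypothetical stand-ins, not part of any particular engine:

```python
# Minimal fixed-depth minimax sketch. `legal_moves(state)` lists moves,
# `apply(state, move)` returns the successor state, and `evaluate(state)`
# scores a state from the maximizer's point of view.
def minimax(state, depth, maximizing, legal_moves, apply, evaluate):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_move = None
    best = float("-inf") if maximizing else float("inf")
    for m in moves:
        score, _ = minimax(apply(state, m), depth - 1,
                           not maximizing, legal_moves, apply, evaluate)
        if (maximizing and score > best) or (not maximizing and score < best):
            best, best_move = score, m
    return best, best_move
```

Real engines add alpha-beta pruning and transposition tables on top, but this is the whole idea.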
A monolithic policy function simply weighs all possible moves and picks the heaviest; nowadays it is commonly a fancy neural network. There is a whole industry of building policy functions, but the principle is simple.
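The "weigh and pick the heaviest" step can be sketched in a few lines. Here the "network" is just a stand-in linear scorer over hypothetical move features; in practice `score` would be the output of a trained model:

```python
# Sketch of a monolithic policy: score every legal move with some learned
# function and pick the highest-scoring one. `features(state, move)` is a
# hypothetical game-specific feature extractor.
def policy_pick(state, legal_moves, features, weights):
    def score(move):
        return sum(w * f for w, f in zip(weights, features(state, move)))
    return max(legal_moves(state), key=score)
```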
Weighted random search learns by playing many random games and giving more weight to the moves that lead to victory more often. There is a whole industry of shaping the distribution of random moves, but the principle is simple.
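A toy version of this loop, on a one-shot game, might look like the following. The `playout(move)` callback, which plays a random game after the given move and reports a win, is a hypothetical stand-in for a full random rollout:

```python
import random

# Sketch of weighted random search: repeatedly sample a move in proportion
# to its current weight, run a random playout, and reinforce the weights of
# moves whose playouts win.
def learn_weights(moves, playout, rounds=1000, rng=random):
    weights = {m: 1.0 for m in moves}
    for _ in range(rounds):
        # Sample a move proportionally to its weight.
        r = rng.uniform(0, sum(weights.values()))
        for m in moves:
            r -= weights[m]
            if r <= 0:
                break
        if playout(m):
            weights[m] *= 1.05  # reinforce moves that led to a win
    return weights
```

Monte Carlo tree search is, roughly, this idea applied recursively down the game tree.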
None of these methods applies to games played in continuous space (or a space discretized finely enough) or continuous time (or discrete time where interesting events are sparse). An example of such a game is StarCraft.
There are two ways a computer can play StarCraft:
A bespoke handcrafted algorithm with many moving parts, such as build-order planning, path finding, and micro control. This is expensive because it takes a lot of human effort and expertise.
A neural network that maps screen pixels to mouse clicks. This is expensive because it takes a lot of hardware and electricity.
So, anyone can write an entertaining, if not dominating, artificial intelligence for a discrete game, but only very few can afford to write an artificial intelligence for a continuous game that is any better than inept. This is a sad conclusion.
Is there anything I am missing? Are there any helpful methods out there that I have not mentioned? Is there a way to adapt discrete methods to continuous games?
It seems that the difference in principle is that, in continuous games, we need to create a new move from scratch every time instead of picking out of a small set. If we simply create random moves, many of them will be inadmissible — trying to build stuff that cannot be afforded, sending units to unreachable places. Of those that are admissible, the majority will be meaningless random walking of units. The subset of moves that have any measurable meaning is tiny, and even the best single move has only a tiny effect on the outcome. If we could make the computer generate a small set of plausible moves, then we could apply discrete methods. But how do we teach the computer to generate plausible moves?
A case in point is the navigation-mesh technique, which sidesteps the problem of smooth path finding by searching for paths in a discrete approximation of the given smooth space. Is this the best we can do?
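The discretization idea can be shown with the crudest possible approximation — a square grid instead of a mesh of convex polygons — searched by plain breadth-first search:

```python
from collections import deque

# Sketch of path finding in a discrete approximation of continuous space:
# the world is coarsened into grid cells, and BFS finds a shortest cell
# path. A navigation mesh does the same with convex polygons as cells.
def grid_path(blocked, start, goal, width, height):
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= nx < width and 0 <= ny < height
                    and nxt not in blocked and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None  # no path exists
```

The returned cell path is then typically smoothed ("string pulling") back into a continuous trajectory — which is exactly the pattern of solving a continuous problem by way of a discrete stand-in.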