The expectiminimax algorithm is a variation of the minimax algorithm for use in artificial intelligence systems that play two-player zero-sum games, such as backgammon, in which the outcome depends on a combination of the players' skill and chance elements such as dice rolls. In addition to the "min" and "max" nodes of the traditional minimax tree, this variant has "chance" ("move by nature") nodes, which take the expected value over the outcomes of a random event.[1] In game theory terms, an expectiminimax tree is the game tree of an extensive-form game of perfect, but incomplete information.
Class | Search algorithm |
---|---|
Worst-case performance | $O(b^m n^m)$, where $b$ is the branching factor, $m$ is the maximum search depth, and $n$ is the number of distinct dice throws |
Best-case performance | $O(b^m)$, in case all dice throws are known in advance |
In the traditional minimax method, the levels of the tree alternate from max to min until the depth limit of the tree has been reached. In an expectiminimax tree, the "chance" nodes are interleaved with the max and min nodes. Instead of taking the max or min of the utility values of their children, chance nodes take a weighted average, with the weight being the probability that child is reached.[1]
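This rule can be summarized as a recursive value function; the following is a sketch in the style of the max/min/chance formulation of Russell and Norvig,[1] where $\operatorname{Result}(s, a)$ denotes the state reached from state $s$ by action or chance outcome $a$:

$$
\operatorname{Expectiminimax}(s) =
\begin{cases}
\operatorname{Utility}(s) & \text{if } s \text{ is terminal,} \\
\max_{a} \operatorname{Expectiminimax}(\operatorname{Result}(s, a)) & \text{at a max node,} \\
\min_{a} \operatorname{Expectiminimax}(\operatorname{Result}(s, a)) & \text{at a min node,} \\
\sum_{r} P(r)\, \operatorname{Expectiminimax}(\operatorname{Result}(s, r)) & \text{at a chance node with outcomes } r.
\end{cases}
$$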
The interleaving depends on the game. Each "turn" of the game is evaluated as a "max" node (representing the AI player's turn), a "min" node (representing a potentially optimal opponent's turn), or a "chance" node (representing a random effect or player).[1]
For example, consider a game in which each round consists of a single die throw, followed by decisions made first by the AI player and then by an intelligent opponent. The nodes in this game would then repeat in the order "chance", "max", "min".[1]
Pseudocode
The expectiminimax algorithm is a variant of the minimax algorithm and was first proposed by Donald Michie in 1966.[2] Its pseudocode is given below.
function expectiminimax(node, depth)
    if node is a terminal node or depth = 0
        return the heuristic value of node
    if the adversary is to play at node
        // Return value of minimum-valued child node
        let α := +∞
        foreach child of node
            α := min(α, expectiminimax(child, depth-1))
    else if we are to play at node
        // Return value of maximum-valued child node
        let α := -∞
        foreach child of node
            α := max(α, expectiminimax(child, depth-1))
    else if random event at node
        // Return weighted average of all child nodes' values
        let α := 0
        foreach child of node
            α := α + (Probability[child] × expectiminimax(child, depth-1))
    return α
Note that for random nodes, there must be a known probability of reaching each child. (For most games of chance, child nodes will be equally weighted, which means the return value can simply be the average of all child values.)
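The following is a minimal, runnable Python sketch of this pseudocode; the `Node` class, the explicit `probs` list for chance nodes, and the toy game tree in the driver are illustrative assumptions, not part of any standard implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Illustrative game-tree node: kind is 'max', 'min', 'chance', or 'leaf'."""
    kind: str
    value: float = 0.0                            # heuristic/terminal value
    children: list = field(default_factory=list)
    probs: list = field(default_factory=list)     # chance nodes: probs[i] weights children[i]

def expectiminimax(node: Node, depth: int) -> float:
    if node.kind == "leaf" or depth == 0:
        return node.value                         # heuristic value of the node
    if node.kind == "min":                        # the adversary is to play
        return min(expectiminimax(c, depth - 1) for c in node.children)
    if node.kind == "max":                        # we are to play
        return max(expectiminimax(c, depth - 1) for c in node.children)
    # chance node: probability-weighted average of the children's values
    return sum(p * expectiminimax(c, depth - 1)
               for p, c in zip(node.probs, node.children))

if __name__ == "__main__":
    # One round of the dice game described above: chance, then max, then min.
    def leaf(v): return Node("leaf", value=v)
    roll_1 = Node("max", children=[Node("min", children=[leaf(3), leaf(7)]),   # worth 3
                                   Node("min", children=[leaf(5), leaf(9)])])  # worth 5 -> max picks 5
    roll_2 = Node("max", children=[Node("min", children=[leaf(1), leaf(8)]),   # worth 1
                                   Node("min", children=[leaf(4), leaf(6)])])  # worth 4 -> max picks 4
    root = Node("chance", children=[roll_1, roll_2], probs=[0.5, 0.5])
    print(expectiminimax(root, depth=3))          # 0.5 * 5 + 0.5 * 4 = 4.5
```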
Expectimax search
Expectimax search is a variant described in Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability (2005) by Tom Everitt and Marcus Hutter.
Alpha-beta pruning
Bruce Ballard was the first to develop a technique, called *-minimax, that enables alpha-beta pruning in expectiminimax trees.[3][4] The problem with integrating alpha-beta pruning into the expectiminimax algorithm is that the scores of a chance node's children may exceed the alpha or beta bound of its parent, even if the weighted value of each child does not. However, it is possible to bound the scores of a chance node's children, and therefore bound the score of the CHANCE node.
If a standard iterative search is about to score the $i$-th child of a chance node with $N$ equally likely children, that search has already computed scores $v_1, \ldots, v_{i-1}$ for child nodes $1$ through $i-1$. Assuming a lowest possible score $L$ and a highest possible score $U$ for each unsearched child, the chance node's score is bounded as follows:

$$\frac{1}{N}\left(\sum_{j=1}^{i-1} v_j + (N - i + 1)\,L\right) \;\le\; \text{score} \;\le\; \frac{1}{N}\left(\sum_{j=1}^{i-1} v_j + (N - i + 1)\,U\right)$$
If an alpha and/or beta bound are given in scoring the chance node, these bounds can be used to cut off the search of the $i$-th child. The above equations can be rearranged to find new alpha and beta values that will cut off the search if the $i$-th child's score would cause the chance node to exceed its own alpha and beta bounds:

$$A_i = N\,(\alpha - U) + U - \sum_{j=1}^{i-1} (v_j - U), \qquad B_i = N\,(\beta - L) + L - \sum_{j=1}^{i-1} (v_j - L)$$
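As a worked illustration with hypothetical numbers (not taken from the cited sources): suppose a chance node has $N = 2$ equally likely children, all scores lie in $[L, U] = [-10, 10]$, and the node is searched with the window $(\alpha, \beta) = (0, 5)$. Then

$$A_1 = 2\,(0 - 10) + 10 = -10, \qquad B_1 = 2\,(5 - (-10)) + (-10) = 20,$$

so the first child is searched with the clamped window $(\max(A_1, L), \min(B_1, U)) = (-10, 10)$. If it scores $v_1 = 6$, no cutoff fires and the bounds become

$$A_2 = -10 + (10 - 6) = -6, \qquad B_2 = 20 + (-10 - 6) = 4.$$

If the second child's score reaches $4$ or more, the search can stop and return $\beta = 5$, because the chance node's value is then at least $(6 + 4)/2 = 5$.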
The pseudocode for extending expectiminimax with fail-hard alpha-beta pruning in this manner is as follows:
function *-minimax(node, depth, α, β)
    if node is a terminal node or depth = 0
        return the heuristic value of node
    if node is a max or min node
        return the minimax value of the node
    let N = numSuccessors(node)
    // Compute α, β for children
    let A = N * (α - U) + U
    let B = N * (β - L) + L
    let sum = 0
    foreach child of node
        // Limit child α, β to a valid range
        let AX = max(A, L)
        let BX = min(B, U)
        // Search the child with new cutoff values
        let score = *-minimax(child, depth - 1, AX, BX)
        // Check for α, β cutoff conditions
        if score <= A
            return α
        if score >= B
            return β
        sum += score
        // Adjust α, β for the next child
        A += U - score
        B += L - score
    // No cutoff occurred, return the weighted average
    return sum / N
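Below is a runnable Python sketch of this routine under a few stated assumptions: it reuses the illustrative `Node` class from the earlier example (without `probs`, since chance children are taken as equally likely), the bounds `L` and `U` are chosen arbitrarily, and plain alpha-beta search stands in for "the minimax value of the node", which the pseudocode leaves abstract.

```python
from dataclasses import dataclass, field

L, U = -10.0, 10.0                     # assumed lowest/highest possible scores (illustrative)

@dataclass
class Node:
    kind: str                          # "max", "min", "chance", or "leaf"
    value: float = 0.0                 # heuristic/terminal value
    children: list = field(default_factory=list)

def star_minimax(node: Node, depth: int, alpha: float, beta: float) -> float:
    if node.kind == "leaf" or depth == 0:
        return node.value
    if node.kind == "max":             # plain alpha-beta stands in for "the minimax value"
        best = float("-inf")
        for child in node.children:
            best = max(best, star_minimax(child, depth - 1, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:
                break
        return best
    if node.kind == "min":
        best = float("inf")
        for child in node.children:
            best = min(best, star_minimax(child, depth - 1, alpha, beta))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
    # Chance node with N equally likely children.
    n = len(node.children)
    a = n * (alpha - U) + U            # child-level alpha bound
    b = n * (beta - L) + L             # child-level beta bound
    total = 0.0
    for child in node.children:
        ax, bx = max(a, L), min(b, U)  # limit the child window to the valid score range
        score = star_minimax(child, depth - 1, ax, bx)
        if score <= a:                 # chance node cannot exceed alpha: fail low
            return alpha
        if score >= b:                 # chance node cannot fall below beta: fail high
            return beta
        total += score
        a += U - score                 # adjust bounds for the next child
        b += L - score
    return total / n                   # no cutoff: average of equally likely children

if __name__ == "__main__":
    def leaf(v): return Node("leaf", value=v)
    root = Node("chance", children=[Node("max", children=[leaf(3), leaf(5)]),
                                    Node("max", children=[leaf(1), leaf(4)])])
    print(star_minimax(root, depth=2, alpha=L, beta=U))   # (5 + 4) / 2 = 4.5
```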
This technique is one of a family of algorithms that bound the search of a CHANCE node and its children by collecting lower and upper bounds on the children during search. Other techniques that can offer performance benefits include probing each child with a heuristic to establish a min or max bound before performing a full search on it.
References
1. Russell, Stuart Jonathan; Norvig, Peter; Davis, Ernest (2010). Artificial Intelligence: A Modern Approach. Prentice Hall. pp. 177–178. ISBN 978-0-13-604259-4.
2. Michie, D. (1966). "Game-Playing and Game-Learning Automata". Advances in Programming and Non-Numerical Computation. pp. 183–200. doi:10.1016/B978-0-08-011356-2.50011-2. ISBN 978-0-08-011356-2.
3. Ballard, Bruce W. (September 1983). "The *-minimax search procedure for trees containing chance nodes". Artificial Intelligence. 21 (3): 327–350. doi:10.1016/S0004-3702(83)80015-0.
4. Hauk, Thomas; Buro, Michael; Schaeffer, Jonathan (2006). "Rediscovering *-Minimax Search". Computers and Games. Lecture Notes in Computer Science. Vol. 3846. pp. 35–50. doi:10.1007/11674399_3. ISBN 978-3-540-32488-1.