
Nau, D.S., Lustrek, M., Parker, A., Bratko, I. & Gams, M. (2010). When is it better not to look ahead? Artificial Intelligence, 174, 1323-1338.

@article{nau2010lookahead,
  author  = {Dana S. Nau and Mitja Lustrek and Austin Parker and Ivan Bratko and Matjaz Gams},
  title   = {When is it better not to look ahead?},
  journal = {Artificial Intelligence},
  year    = {2010},
  volume  = {174},
  pages   = {1323--1338}
}

Author of the summary: Michael Bennett, 2012


It"s hard to define when a depth first search works best compared to a breadth first search, and at what limit is the depth first search most effective. There are vague outlines and suggestions as "the choice of algorithm gets complex! Depth first may be faster if there are many paths that lead to the solution states, but all are quite long. Breadth first may be faster if there is one short path to the target, but within a large and deep search space" (Cawsey 1998).

What is needed is a more structured account of the situations in which a depth-first search performs best, and of what an appropriate depth limit is in those situations. There are cases in which searching deeper becomes counterproductive, increasing the chance of mistaking a losing tree for a winning one; this is known as lookahead pathology. Lookahead pathology may also occur in decision-making processes not typically associated with it, such as one-player games. Creating more efficient searches requires recognizing the situations in which lookahead pathology may be occurring, and finding ways to reduce or eliminate it.

First, the minimax algorithm must be defined, as it is the usual decision-making procedure for games.

Minimax algorithm:
The algorithm used in two-player turn-based games. Max is one player and Min is the other. Max attempts to obtain the highest score possible given the values of the nodes; Min attempts to obtain the lowest.
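The definition above can be sketched as a short depth-limited minimax in Python. This is a generic illustration, not code from the paper; the helper functions `evaluate` and `children` are assumptions standing in for a game's heuristic evaluation and move generator.

```python
def minimax(node, depth, maximizing, evaluate, children):
    """Depth-limited minimax.

    `evaluate` heuristically scores a node; `children` returns its
    successor nodes. Max picks the highest-valued child, Min the lowest.
    """
    succ = children(node)
    if depth == 0 or not succ:
        return evaluate(node)
    values = [minimax(c, depth - 1, not maximizing, evaluate, children)
              for c in succ]
    return max(values) if maximizing else min(values)


# Tiny hand-built example: internal nodes are lists, leaves are scores.
tree = [[3, 5], [2, 9]]
children = lambda n: n if isinstance(n, list) else []
evaluate = lambda n: n  # leaves carry their own value
result = minimax(tree, depth=2, maximizing=True,
                 evaluate=evaluate, children=children)  # -> 3
```

Max's best guarantee here is 3: Min would answer the first move with 3 and the second with 2, so Max prefers the first.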

The effects of graph structure, the reliability of node evaluations, and evaluations deeper in the tree are treated as sub-factors of the three main factors discussed below.

Three main contributing factors to lookahead pathology:

Branching factor: The number of successor nodes at each node in the tree.

The branching factor is a clear factor in lookahead pathology. As the number of successors per node increases, the number of choices grows exponentially with depth. The minimax algorithm also filters out the extremely high- and extremely low-valued nodes, leaving a large number of remaining nodes with little difference in value. What is left for the search to choose from is a large set of nodes with no clearly favored one. The search may therefore continue down a seemingly slightly better path that ends in more losing terminal nodes, when a slightly worse path would have ended in more winning nodes.

Granularity of the heuristic function: The number of different values the evaluation function can return or a node can possess. The higher the granularity, the more distinct values are available; the lower the granularity, the fewer.

Decreasing the granularity of the heuristic function effectively decreases the search's ability to choose the right path. It causes a lack of distinguishability between similar routes and prevents the search from reliably recognizing better successors of the current node, which is an obvious problem for depth-first search. Conversely, increasing the number of different values the heuristic function can return improves the search's ability to discriminate between nodes.
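One simple way to picture granularity is to quantize a real-valued heuristic into a fixed number of levels. The sketch below is an illustrative assumption, not the paper's experimental setup: at coarse granularity, two genuinely different positions collapse onto the same score and become indistinguishable to the search.

```python
def quantize(value, granularity, lo=-1.0, hi=1.0):
    """Map a real-valued heuristic score onto one of `granularity`
    evenly spaced levels in [lo, hi]. Fewer levels means fewer
    distinguishable values for the search to compare."""
    clipped = max(lo, min(hi, value))
    step = (hi - lo) / (granularity - 1)
    return lo + round((clipped - lo) / step) * step


# Two different positions, scored 0.12 and 0.48 by the raw heuristic:
quantize(0.12, granularity=3) == quantize(0.48, granularity=3)    # True: collapsed
quantize(0.12, granularity=21) == quantize(0.48, granularity=21)  # False: still distinct
```

With only 3 levels both positions land on 0.0 and the search cannot prefer one over the other; with 21 levels they remain distinguishable.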

Local similarity: The similarity among the values of nearby nodes in the tree. The higher the local similarity, the better the chance that a successor node's path will have a similar score; the lower the local similarity, the more independent a score is of any ancestor node.

Increasing local similarity strongly improves the performance of depth-first and deeper searches, for intuitive reasons. Overall it makes the tree much simpler for the search, especially a depth-first search. If the children of an already favored node are likely to score similarly well, it becomes clear which path to choose, and the search will more likely encounter better or equal nodes. When scores become more independent, the search may reach a level where favored nodes have led only to paths that terminate in unfavored nodes.
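Local similarity can be modeled by generating a synthetic tree in which each child's value blends its parent's value with an independent random draw. This is a hedged sketch of one plausible model, not the construction used in the paper; the `similarity` parameter and the blending rule are assumptions for illustration.

```python
import random

def build_tree(depth, branching, similarity, parent_value=0.0, rng=None):
    """Generate a synthetic tree as nested (value, children) pairs.

    Each node's value is similarity * parent_value plus an independent
    uniform draw weighted by (1 - similarity). similarity=1.0 makes
    children copy their parent; similarity=0.0 makes values independent.
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    value = similarity * parent_value + (1 - similarity) * rng.uniform(-1, 1)
    if depth == 0:
        return (value, [])
    kids = [build_tree(depth - 1, branching, similarity, value, rng)
            for _ in range(branching)]
    return (value, kids)


def all_values(node):
    """Flatten a (value, children) tree into a list of node values."""
    value, kids = node
    return [value] + [v for k in kids for v in all_values(k)]
```

At `similarity=1.0` every node in the tree shares the root's value, the extreme case in which a shallow evaluation already tells the search everything; at `similarity=0.0` the values are unrelated, the regime in which deeper lookahead can become pathological.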

Other influences on search:

Graph structure can be generalized to an issue of local similarity: graphs can be unfolded into trees whose terminal nodes possess different values (several tree leaves corresponding to a single terminal node of the graph). The values are thus independent of the preceding nodes.

Reliability of node evaluations maps onto local similarity as well. The more homogeneous the tree (e.g. trees that possess only preferred values) and the earlier its termination occurs (e.g. early checkmate positions), the more reliable the evaluation will be. Trees with independent node values and non-homogeneous values, by contrast, are hard to evaluate reliably.

Evaluations deeper in the tree can be mapped to granularity. Evaluation functions that adjust a node's value according to how deep in the game the search is, and to the general probability of mistaking a losing tree for a winning one, greatly increase the number of different values the search can choose from (Scheucher & Kaindl, 1998). Functions like these eliminate lookahead pathology.

Where depth first searches and deeper searches succeed:

These factors were tested with two different chess endgames, Kalah, and the 8-puzzle. All of the games exhibited positions where lookahead pathology occurred, even the one-player 8-puzzle.
The study concluded that depth-first and deeper searches succeed in trees that have a high number of returnable values (high granularity), a low branching factor, and high local similarity.


Cawsey, A. (1998). The essence of artificial intelligence. England: Prentice Hall Europe.

Scheucher, A. & Kaindl, H. (1998). Benefits of using multivalued functions for minimaxing. Artificial Intelligence, 99, 187-208.

