[ CogSci Summaries home | UP | email ]

R. Dechter (1996), Bucket Elimination: A Unifying Framework for Probabilistic Inference, in Proceedings of Uncertainty in AI, Portland, Oregon.

@InProceedings{Dechter:1996,
  author =       "Rina Dechter",
  title =        "Bucket elimination: {A} unifying framework for
                 probabilistic inference",
  pages =        "211--219",
  ISBN =         "1-55860-412-X",
  editor =       "Eric Horvitz and Finn Jensen",
  booktitle =    "Proceedings of the 12th Conference on Uncertainty in
                 Artificial Intelligence ({UAI}-96)",
  month =        aug # "~1--4",
  publisher =    "Morgan Kaufmann Publishers",
  address =      "San Francisco",
  year =         "1996",
}

Author of the summary: Patrawadee Prasangsit, 1999, pp@cc.gatech.edu

The actual paper is available online, as is its extended version.

Cite this paper for:


This paper presents four probabilistic inference algorithms reformulated within the bucket elimination framework.  It also presents a general way of combining conditioning and elimination within this framework.

Bucket Elimination is a unifying algorithmic framework that generalizes dynamic programming to accommodate algorithms for many complex problem solving and reasoning activities, including directional resolution for propositional satisfiability (Davis and Putnam, 1960), adaptive consistency for constraint satisfaction (Dechter and Pearl, 1987), Fourier and Gaussian elimination for linear equalities and inequalities, and dynamic programming for combinatorial optimization (Bertele and Brioschi, 1972).

Directional resolution (DR)

DR is the core of the well-known Davis-Putnam algorithm for satisfiability.  It systematically eliminates literals from the set of clauses that are candidates for future resolution.

The algorithm, outlined in figure 1 (page 2), is described using buckets that partition the set of clauses in a theory Phi.  We call its output theory, Ed(Phi), the directional extension of Phi.  Given an ordering d = Q1, ... , Qn, all the clauses containing Qi that do not contain any symbol higher in the ordering are placed in the bucket of Qi, denoted bucketi.  The algorithm processes the buckets in reverse order of d.  When processing bucketi, it resolves over Qi all possible pairs of clauses in the bucket and inserts the resolvents into the appropriate lower buckets.
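The bucket partitioning and reverse-order processing above can be sketched as follows.  This is a minimal illustrative implementation, not the paper's figure 1: the function name and clause encoding (clauses as sets of signed integers, v for Qv and -v for its negation, with d taken to be 1, ..., n) are assumptions of the sketch.

```python
# A minimal sketch of directional resolution, assuming clauses are sets of
# nonzero ints (v for Q_v, -v for its negation) and the ordering d is 1..n.
# The function name is illustrative, not from the paper.

def directional_resolution(clauses, n):
    # bucket_i holds clauses whose highest-ordered symbol is Q_i.
    buckets = {i: set() for i in range(1, n + 1)}
    for c in clauses:
        buckets[max(abs(lit) for lit in c)].add(frozenset(c))
    # Process the buckets in reverse order of d: Q_n down to Q_1.
    for i in range(n, 0, -1):
        pos = [c for c in buckets[i] if i in c]
        neg = [c for c in buckets[i] if -i in c]
        for cp in pos:
            for cn in neg:
                resolvent = (cp - {i}) | (cn - {-i})
                if any(-lit in resolvent for lit in resolvent):
                    continue               # skip tautologies
                if not resolvent:
                    return None            # empty clause: unsatisfiable
                # Insert the resolvent into the appropriate lower bucket.
                buckets[max(abs(lit) for lit in resolvent)].add(resolvent)
    return buckets                         # the directional extension Ed(Phi)

# (Q1 v Q2) ^ (-Q2 v Q3) ^ (-Q3) under the ordering Q1, Q2, Q3:
ext = directional_resolution([{1, 2}, {-2, 3}, {-3}], 3)
```

Resolving over Q3 yields {-Q2}, and resolving over Q2 then yields the unit clause {Q1} in bucket1, so the extension records everything needed for backtrack-free model generation.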

It was shown that if the algorithm does not generate the empty clause, then the theory is satisfiable, and a satisfying truth assignment can be generated in time linear in the size of the resulting theory.

Theorem 1.1 (model generation)

Let Phi be a CNF formula, d = Q1, ... , Qn an ordering, and Ed(Phi) its directional extension.  Then, if the extension does not contain the empty clause, a model of Phi can be generated in a backtrack-free manner, consulting Ed(Phi) in the order d as follows: assign to Q1 a truth value that is consistent with the clauses in bucket1 (if the bucket is empty, assign Q1 an arbitrary value); after assigning values to Q1, ... , Qi-1, assign a value to Qi so that together with the previous assignments it satisfies all clauses in bucketi.
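The backtrack-free procedure of Theorem 1.1 can be sketched as follows.  This is a hedged illustration: the directional extension is hard-coded here for the example formula (Q1 v Q2) ^ (-Q2 v Q3) ^ (-Q3) under the ordering Q1, Q2, Q3, and the function name and encoding are assumptions of the sketch.

```python
# A sketch of Theorem 1.1's backtrack-free model generation. `buckets`
# maps each variable index i to bucket_i, a set of clauses; each clause
# is a frozenset of signed ints (v means Q_v true, -v means Q_v false).

def generate_model(buckets, n):
    assignment = {}                        # var -> True/False

    def satisfied(clause):
        # A clause is satisfied if some literal agrees with the assignment.
        return any(assignment.get(abs(lit)) == (lit > 0) for lit in clause)

    for i in range(1, n + 1):              # consult buckets in the order d
        for value in (True, False):
            assignment[i] = value
            # Every clause in bucket_i mentions only Q_1..Q_i, so it is
            # fully decided now; the theorem guarantees some value works,
            # hence no backtracking is ever needed.
            if all(satisfied(c) for c in buckets.get(i, ())):
                break
    return assignment

# Directional extension of (Q1 v Q2) ^ (-Q2 v Q3) ^ (-Q3), ordering Q1,Q2,Q3:
ext = {1: {frozenset({1})},
       2: {frozenset({1, 2}), frozenset({-2})},
       3: {frozenset({-2, 3}), frozenset({-3})}}
model = generate_model(ext, 3)   # Q1=True, Q2=False, Q3=False
```

Note that each Qi is decided by consulting only bucketi, which is what makes the procedure linear in the size of the resulting theory.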


The following four belief network algorithms have much in common with the resolution procedure above.  They all compile a theory into a backtrack-free (i.e., greedily solvable) one, and their complexity depends on the same graph parameters (notably the induced width of the ordering).

Given an evidence set e (an instantiated subset of variables), the four tasks are: belief updating (computing the posterior probability of each variable given e), finding the most probable explanation (mpe), finding the maximum a posteriori hypothesis (map), and finding the maximum expected utility (meu).

The paper goes on to explain how these tasks can be reformulated within the bucket-elimination framework.  Pseudocode is given in figures 3 through 6 (sections 3 through 6 in the extended version of the paper).
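As a concrete illustration of the bucket idea on the belief-updating task, here is a toy computation of P(A | C=1) on a chain A -> B -> C with binary variables.  The network, its numbers, and the variable names are invented for this example and are not from the paper; the structure (clamp evidence, then sum out each variable in its bucket, passing the resulting function down) is what matters.

```python
# A toy sketch of bucket elimination for belief updating on the
# (invented) chain A -> B -> C, all variables binary with values 0/1.
# CPTs are dicts: P(A), P(B|A) keyed by (b, a), P(C|B) keyed by (c, b).
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}
p_c_given_b = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.4, (1, 1): 0.6}

# Ordering d = A, B, C with evidence C = 1; process buckets from C down to A.

# Bucket C: clamp the evidence, producing a function lambda_C(b).
lam_c = {b: p_c_given_b[(1, b)] for b in (0, 1)}

# Bucket B: sum out B, producing lambda_B(a) = sum_b P(b|a) * lambda_C(b).
lam_b = {a: sum(p_b_given_a[(b, a)] * lam_c[b] for b in (0, 1))
         for a in (0, 1)}

# Bucket A: combine with the prior and normalize to obtain P(A | C=1).
unnorm = {a: p_a[a] * lam_b[a] for a in (0, 1)}
z = sum(unnorm.values())
posterior = {a: unnorm[a] / z for a in (0, 1)}
```

Each bucket multiplies the functions it holds and sums out its variable; the intermediate tables (lam_c, lam_b) are exactly the "intermediate functions" whose storage drives the space complexity discussed below.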

A serious concern with elimination algorithms is efficiency: they normally take exponential time and space, since considerable memory is required to record intermediate functions.  Conditioning is an important method for reducing the space complexity, and it can be incorporated on top of elimination.

Conditioning is a generic name for algorithms that search the space of partial value assignments, or partial conditionings.  Conditioning means splitting a problem into subproblems based on a certain condition.  The time complexity of conditioning algorithms is exponential in the size of the conditioning set; their space complexity, however, is only linear.
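The split into subproblems can be sketched on an invented toy chain A -> B -> C (binary variables, illustrative numbers): conditioning on A fixes its value, each fixed value yields a smaller elimination subproblem, and the answers are combined by summing over the conditioning set.

```python
# A toy sketch of conditioning on the (invented) chain A -> B -> C,
# computing P(C=1) by splitting on A. Numbers are illustrative only.
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}
p_c_given_b = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.4, (1, 1): 0.6}

def subproblem(a):
    # With A fixed to a, only B remains to eliminate: sum_b P(b|a) P(C=1|b).
    # Each subproblem needs only the memory for this one sum.
    return sum(p_b_given_a[(b, a)] * p_c_given_b[(1, b)] for b in (0, 1))

# Enumerate the partial assignments to the conditioning set {A}:
# time grows exponentially with the conditioning set, space stays linear.
p_c1 = sum(p_a[a] * subproblem(a) for a in (0, 1))
```

Here the conditioning set is just {A}, so only two subproblems are solved; with k conditioned binary variables there would be 2^k subproblems, but no large intermediate tables ever need to be stored.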

Summary author's notes

Back to the Cognitive Science Summaries homepage
Cognitive Science Summaries Webmaster:
Jim Davies ( jim@jimdavies.org )

Last modified: June 23, 1999