The actual paper in ps form: dean1996.ps. Here is the homepage of T. Dean.

- exogenous events: events outside control of the planner.
- If a plan is dependent on observations at the current state, then that plan is conditional. (p7)
- If it is dependent on time, then the plan is time-variant; otherwise it is stationary. (p7)
- planning: a process that involves reasoning about the consequences of acting in order to choose among a set of possible courses of action.
- progression: the operation of determining the resulting state of a dynamical system given some initial state and some specified action.
- regression: transforming a goal state into a previous state and an action that will get you to the goal state.
- refinement: taking an existing plan and adding detail.
- repair: modifying an existing plan for better performance.
- "The literature on planning and scheduling in artificial intelligence generally takes it on faith that any interesting problem is at least NP-hard." (p11)
- Simulated Annealing: occasionally doing some random move other than what the heuristic suggests. The purpose is to get an optimizing algorithm out of a rut (a local optimum). (p28)
- To evaluate plans in stochastic domains, you take a random sample of possible executions. The average of many of these samples gives a fairly accurate estimate of the plan quality. (p31)
- State-space search: searching by linking states
- Plan-space search: searching by altering a list of operators (allows insertions, etc.)
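The progression and regression entries above can be sketched over states represented as sets of (variable, value) assignments. The `Action` record and the move example are my own illustration, not from the paper:

```python
# Toy progression and regression over states represented as sets of
# (variable, value) assignments. The Action record is an assumption
# made for this illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    pre: frozenset   # assignments required before the action applies
    post: frozenset  # assignments that hold after the action

def progress(state, action):
    """Progression: compute the resulting state of applying an action."""
    if not action.pre <= state:
        return state  # preconditions unmet: the action has no effect
    changed = {var for var, _ in action.post}
    kept = {(v, x) for v, x in state if v not in changed}
    return frozenset(kept | action.post)

def regress(goal, action):
    """Regression: the subgoal that must hold before the action
    so that the goal holds after it."""
    return frozenset((goal - action.post) | action.pre)

move = Action(pre=frozenset({("at", "A")}),
              post=frozenset({("at", "B")}))
s0 = frozenset({("at", "A"), ("fuel", "full")})
print(sorted(progress(s0, move)))  # [('at', 'B'), ('fuel', 'full')]
print(sorted(regress(frozenset({("at", "B")}), move)))  # [('at', 'A')]
```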

An algorithmic planning system has three parts:

- operators
- predictive model
- evaluator

**exogenous events:** events outside control of the planner.

**state variables:** used to describe the state of a system.

**state transition function:** takes the state and an action and
returns the resulting state.

The planner can assume that the plan executor can observe the
environment it is in.

If the state transitions involve some randomness, then there are state transition and output conditional probability distributions. This is computationally expensive in theory, but in practice it is not so bad: not all state variables are important for all transitions, and some state variables are independent of one another. (p5)
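To make the computational point concrete: with n binary state variables a full transition table needs an entry for every one of the 2^n next states, but when the variables evolve independently the distribution factors into small per-variable tables. A hypothetical two-variable sketch (the probabilities are made up):

```python
# Hypothetical factored stochastic transition: two binary state
# variables whose next values are independent given the current state.
# P(x'=1 | x) and P(y'=1 | y) below are made-up numbers.
import itertools

def p_next_x(x):  # x tends to persist
    return 0.9 if x == 1 else 0.2

def p_next_y(y):  # y tends to flip
    return 0.3 if y == 1 else 0.8

def transition_dist(x, y):
    """Joint distribution over the next state, built from the factors."""
    dist = {}
    for nx, ny in itertools.product((0, 1), repeat=2):
        px = p_next_x(x) if nx == 1 else 1 - p_next_x(x)
        py = p_next_y(y) if ny == 1 else 1 - p_next_y(y)
        dist[(nx, ny)] = px * py
    return dist

d = transition_dist(1, 0)
print(d[(1, 1)])  # 0.9 * 0.8
```

Two factors of 2 entries each replace one table of 4 entries here; the gap widens quickly as variables are added.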

If a plan is dependent on observations at the current state, then that
plan is **conditional**, and works in closed loop.
If it is dependent on time, then the plan is time-variant; otherwise
it is stationary. (p7)
Any plan that ignores the results of past actions works in open loop.

Plans describe state histories. A conditional plan induces a probability distribution over its possible histories, and each history gets a real value. You can calculate the expected value of a plan from its histories, their probabilities and values, and the cost of getting from one state to another. (p8)
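A minimal sketch of that expected-value calculation (the histories and numbers are invented for illustration):

```python
# Expected value of a conditional plan, given the distribution over the
# state histories it can produce and a real value for each history.
# The histories and numbers here are made up for illustration.
def expected_value(histories):
    """histories: list of (probability, value) pairs summing to 1."""
    return sum(p * v for p, v in histories)

plan_histories = [(0.7, 10.0),   # likely good outcome
                  (0.2, 4.0),    # mediocre outcome
                  (0.1, -5.0)]   # rare failure
print(expected_value(plan_histories))  # 0.7*10 + 0.2*4 + 0.1*(-5) = 7.3
```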

refinement: taking an existing plan and adding detail.

repair: modifying an existing plan for better performance.

The paper defines two operators over state-variable assignments: they look like circles around a + and -, respectively. In this outline I will use + and - to refer to them.

y - v means the set of assignments in y that are not also in v.

y + v returns the union of consistent assignments.

Actions are defined in terms of assignments, and state transitions
are defined as:

    f(s, a) = s, if s and the preconditions of a are inconsistent;
    f(s, a) = post(a) + (s - post(a)), otherwise.

A state s satisfies a goal if the state's assignments are a superset
of the goal's assignments. You can make partial plans that do a subset
of the work, and you can attach constraints to a partial plan. You can
work forward through the plan (progression) or backward from the goal
(regression). (p17) State-difference heuristics choose a path based on
the difference between the resulting state and the goal.
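A sketch of the two assignment operators, the transition function, and the goal test as I read them; the names and the example are my own. Note that for the result to stay consistent, `-` here drops any assignment whose variable is reassigned, a slight strengthening of the set-difference reading:

```python
# The circled + and - operators over sets of (variable, value)
# assignments, as I read them, plus the resulting transition function
# and goal test. Names and the example are my own.

def minus(y, v):
    """y - v: drop from y the assignments to variables reassigned by v,
    so the result is always consistent with v."""
    reassigned = {var for var, _ in v}
    return {(var, val) for var, val in y if var not in reassigned}

def plus(y, v):
    """y + v: the union of two consistent assignment sets."""
    merged = dict(y)
    for var, val in v:
        assert merged.get(var, val) == val, "inconsistent assignments"
        merged[var] = val
    return set(merged.items())

def f(s, pre, post):
    """f(s, a) = s if s is inconsistent with pre(a),
    else post(a) + (s - post(a))."""
    if any(dict(s).get(var, val) != val for var, val in pre):
        return set(s)
    return plus(post, minus(s, post))

def satisfies(state, goal):
    """A state satisfies a goal if its assignments are a superset."""
    return set(goal) <= set(state)

s = {("at", "A"), ("fuel", "full")}
pre, post = {("at", "A")}, {("at", "B")}
print(f(s, pre, post))  # at B, fuel unchanged
```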

**means-ends analysis**

If the preconditions of a good operator are not met, then you can make
those preconditions a subgoal (means ends analysis). (p18)

One problem with this is that premature commitment to the order of steps can lead to extensive backtracking.
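A toy means-ends loop along those lines, where an unmet precondition becomes a subgoal (the operator encoding, with add-only effects, is my own illustration):

```python
# Toy means-ends analysis: pick an operator that achieves an open goal;
# if its preconditions are unmet, plan for them first as a subgoal.
# Operators are (name, preconditions, effects) with add-only effects;
# the encoding and the example are my own, not the paper's.

def means_ends(state, goal, operators, depth=8):
    """Return a list of operator names that achieves goal, or None."""
    state = set(state)
    if goal <= state:
        return []
    if depth == 0:
        return None
    for name, pre, post in operators:
        if not (post & (goal - state)):
            continue  # operator does not help any open goal
        prefix = means_ends(state, pre, operators, depth - 1)  # subgoal
        if prefix is None:
            continue
        s = set(state)
        for step in prefix:  # simulate the subplan's effects
            s |= dict((n, p) for n, _, p in operators)[step]
        s |= post
        rest = means_ends(s, goal, operators, depth - 1)
        if rest is not None:
            return prefix + [name] + rest
    return None

ops = [("open-door", {"have-key"}, {"door-open"}),
       ("pick-up-key", set(), {"have-key"})]
print(means_ends(set(), {"door-open"}, ops))
# ['pick-up-key', 'open-door']
```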

**plan space refinement**

Satisfy one variable requirement at a time. (p19) A potential problem
is that in satisfying one you may undo progress already made. This can
be helped by protecting conditions already satisfied. (p20)
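The protection idea can be sketched as a simple check that a candidate step does not clobber an already-satisfied condition (the (variable, value) representation is mine):

```python
# Sketch of protecting already-satisfied conditions during refinement:
# a new step is rejected if its effects clobber a protected condition.
# The representation (conditions as (variable, value) pairs) is my own.

def clobbers(effects, protected):
    """True if any effect reassigns a protected variable to a new value."""
    prot = dict(protected)
    return any(var in prot and prot[var] != val for var, val in effects)

protected = {("door", "open"), ("light", "on")}
print(clobbers({("door", "closed")}, protected))    # True: undoes progress
print(clobbers({("robot", "kitchen")}, protected))  # False: harmless
```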

**task reduction**

Break large tasks into subtasks. This helps because you might know
that subtask x is unattainable, and thus not waste time working on the
details of subtask y. Also breaking the long task into a series of
independent shorter tasks reduces the complexity of the search. (p21)
Sometimes you can gloss over the details of a plan: make a simple plan,
then test and debug it.

**constraint satisfaction in scheduling**

A scheduling solution can be described as a set of states. (p26) The
order in which variables are considered (for example, starting with
the variables with the fewest values) often makes a big difference in
efficiency. One way to solve these problems is to start with an
inadequate assignment and repair it.
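A minimal backtracking search using the fewest-values-first ordering mentioned above, on a made-up one-machine scheduling problem:

```python
# Backtracking search that orders variables by fewest remaining values
# ("start with the variables with the fewest values"). The tiny
# scheduling problem below is a made-up illustration.

def backtrack(domains, assignment, conflicts):
    if len(assignment) == len(domains):
        return dict(assignment)
    # most-constrained variable first
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(domains[v]))
    for val in domains[var]:
        if all(not conflicts(var, val, w, x) for w, x in assignment.items()):
            assignment[var] = val
            result = backtrack(domains, assignment, conflicts)
            if result is not None:
                return result
            del assignment[var]
    return None

# Three tasks, one machine: no two tasks may share a start slot.
domains = {"t1": [1, 2, 3], "t2": [1], "t3": [1, 2]}
no_overlap = lambda v, a, w, b: a == b
schedule = backtrack(domains, {}, no_overlap)
print(schedule)  # each task gets a distinct slot
```

Here the ordering assigns t2 first (one value), then t3, then t1, so the single feasible schedule is found without backtracking.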

Lin and Kernighan's algorithm uses a local repair method. It works like this: start with a tour such as abcde; then, for each short segment such as abcd, check whether reordering it as acbd makes the tour shorter, and keep the change if it does. This method actually gets to within 10% of optimality for many practical problems. (p28)
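A sketch of that kind of local repair as plain 2-opt, repeatedly reversing a segment whenever doing so shortens the tour (the four-city instance is invented):

```python
# Minimal 2-opt local repair for a tour, in the spirit of the
# Lin-Kernighan move described above: reverse a segment whenever the
# reversal shortens the tour. The four-city instance is made up.
import itertools
import math

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    tour = list(tour)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(tour)), 2):
            candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            if tour_length(candidate, dist) < tour_length(tour, dist):
                tour, improved = candidate, True
    return tour

# Four cities at the corners of a unit square.
pts = {"a": (0, 0), "b": (0, 1), "c": (1, 1), "d": (1, 0)}
dist = {u: {v: math.dist(pts[u], pts[v]) for v in pts} for u in pts}
best = two_opt(["a", "c", "b", "d"], dist)
print(tour_length(best, dist))  # 4.0, the square's perimeter
```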

One problem with local repair methods is that they can get stuck in local optima, perhaps by maximizing only one of the performance variables, and such solutions are often bad. One way to combat this is simulated annealing: occasionally make a random repair instead of the best one, which can get the algorithm out of a rut.
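A generic simulated-annealing loop along those lines (the objective, the neighbor move, and the cooling schedule are all made-up illustrations):

```python
# Simulated-annealing sketch: occasionally accept a worse repair so the
# search can escape local optima. Objective and moves are made up.
import math
import random

def anneal(x, cost, neighbor, t0=10.0, cooling=0.95, steps=500):
    """Minimize cost starting from x, accepting some uphill moves."""
    random.seed(0)  # fixed seed so the sketch is deterministic
    best = cur = x
    t = t0
    for _ in range(steps):
        nxt = neighbor(cur)
        delta = cost(nxt) - cost(cur)
        # always accept improvements; accept a worse repair with
        # probability exp(-delta / t), which shrinks as t cools
        if delta <= 0 or random.random() < math.exp(-delta / t):
            cur = nxt
        if cost(cur) < cost(best):
            best = cur
        t *= cooling
    return best

# Double well: a shallow local optimum near x = -2, true optimum at x = 3.
cost = lambda x: (x - 3) ** 2 * (x + 2) ** 2 + 0.5 * (x - 3) ** 2
result = anneal(-2.0, cost, lambda x: x + random.uniform(-1, 1))
```

Since `best` tracks the lowest-cost point seen, the result is never worse than the start, and the random acceptances give the search a chance to cross the barrier toward x = 3.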

In case-based planning, the planner saves previously made plans and their explanations so that later it can reuse parts of them when it encounters similar problems. (p29) By analyzing past experience the planner can learn new search control rules.

To evaluate plans in stochastic domains, you take a random sample of possible executions. The average of many of these samples gives a fairly accurate estimate of the plan quality. (p31)
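A sketch of that sampling estimate on a toy stochastic domain of my own (each plan step succeeds independently with probability p):

```python
# Monte Carlo plan evaluation: sample many random executions of a plan
# in a stochastic domain and average their values. The toy domain
# (each step succeeds independently with probability p) is my own.
import random

def evaluate(plan_value, success_p, n_steps, n_samples, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # one sampled history: the plan pays off only if every step succeeds
        ok = all(rng.random() < success_p for _ in range(n_steps))
        total += plan_value if ok else 0.0
    return total / n_samples

est = evaluate(plan_value=10.0, success_p=0.9, n_steps=3, n_samples=10_000)
print(est)  # close to the true expectation 10 * 0.9**3 = 7.29
```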



Cognitive Science Summaries Webmaster: JimDavies ( jim@jimdavies.org ) Last modified: Mon Apr 5 14:02:29 EDT 1999