
J. Laird, A. Newell, and P. Rosenbloom, SOAR: An Architecture for General Intelligence. Artificial Intelligence, 33, 1987.
@Article{laird1987,
  author =       "J. E. Laird and A. Newell and P. S. Rosenbloom",
  year =         "1987",
  title =        "{SOAR}: an architecture for general intelligence",
  journal =      "Artificial Intelligence",
  volume =       "33",
  number =       "1",
  month =        sep,
  pages =        "1--64",
}

Authors of the summary: J. William Murdock, 1997, murdock@cc.gatech.edu; Jim R. Davies, 2000, jim@jimdavies.org

Cite this paper for:

Keywords: Production, Goal, Chunking, Knowledge Compilation

Summary: Introduces the concept of a general architecture for reasoning and presents SOAR as one such architecture. Discusses the range of tasks that SOAR has addressed. Presents the core commitments of the SOAR philosophy: search through states in a problem space, universal subgoaling, productions as the only long-term memory elements, the use of explicit preferences to guide processing, the definition of subgoals as responses to automatically detected impasses, continuous monitoring of goal termination, emergence of the standard weak methods, and learning as chunking of subgoal resolutions. Describes the architecture using the eight-puzzle as the example problem space. Details each of the major components: working (declarative) memory (including the context stack and preferences as well as normal objects), the processing structure, the subgoaling mechanism, the built-in default knowledge (i.e., weak-method strategies), and the chunking mechanism. Discusses general issues: scale, weak methods, and learning. Concludes by enumerating and discussing the fundamental hypotheses of SOAR.

[p2] Soar ancestry: Logic Theorist, GPS, the general theory of human problem solving, production systems, PSG, PSANLS, OPS.

Roots: the cognitive architecture concept, the instructable production system, problem spaces.

Drawbacks: the paper lists several things Soar cannot yet do [p4].

Uses the problem space hypothesis: the problem space (a set of states plus operators that transform the current state into new states) is the fundamental organization of all goal-directed activity.
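As a concrete illustration (the summarizer's own sketch, not code from the paper), here is the eight-puzzle cast as a problem space in Python; the state encoding and function names are invented:

GOAL = (1, 2, 3, 8, 0, 4, 7, 6, 5)  # 0 marks the blank; one common goal layout

def operators(state):
    """Yield successor states: slide a tile adjacent to the blank into it."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            tile = 3 * r + c
            s = list(state)
            s[blank], s[tile] = s[tile], s[blank]
            yield tuple(s)

def goal_test(state):
    return state == GOAL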

Goals can be represented with a test procedure or with a goal state. [p5]

Lack of required knowledge leads to subgoaling [p7]. Universal subgoaling: creating a subgoal at any sign of difficulty [p7].

All long-term knowledge is represented with a production system. All productions fire in parallel; there is no conflict resolution. Productions add and remove knowledge from working memory (WM).
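A minimal sketch of that regime (invented shapes, not Soar's actual implementation): every production whose condition matches the current snapshot of WM fires in the same cycle, and elaboration repeats until quiescence:

def elaborate(wm, productions):
    """One elaboration cycle: fire every matching production in parallel.
    A production is a (condition, action) pair; action returns (adds, removes)."""
    adds, removes = set(), set()
    for condition, action in productions:   # all match against the same snapshot
        if condition(wm):
            a, r = action(wm)
            adds |= a
            removes |= r
    return (wm | adds) - removes            # apply every change at once

def run_to_quiescence(wm, productions):
    """Elaboration repeats until no production changes working memory."""
    while True:
        new_wm = elaborate(wm, productions)
        if new_wm == wm:
            return wm
        wm = new_wm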

Preferences favor some candidate objects (e.g., operators) over others. A decision procedure uses these preferences to select which object fills a slot in the current context; the productions themselves all fire without selection.
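A toy decision procedure over preference types, loosely borrowing the paper's vocabulary (acceptable, reject, better, best); the tuple encoding and impasse labels below are the summarizer's assumptions:

def decide(candidates, prefs):
    """Select one value for a context slot, or name the kind of impasse."""
    live = [c for c in candidates
            if ("acceptable", c) in prefs and ("reject", c) not in prefs]
    if not live:
        return None, "rejection"            # every candidate was rejected
    best = [c for c in live if ("best", c) in prefs]
    if len(best) == 1:
        return best[0], None
    dominated = {p[2] for p in prefs if p[0] == "better" and p[1] in live}
    live = [c for c in live if c not in dominated]
    if len(live) == 1:
        return live[0], None
    return None, ("conflict" if not live else "tie")

# decide(["a", "b"], {("acceptable", "a"), ("acceptable", "b"),
#                     ("better", "a", "b")})  ->  ("a", None)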

[p10] Soar can implement weak methods with different productions.
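For instance (a hedged sketch that feeds the decide() example above), something like steepest-ascent hill climbing falls out of a single control rule that turns state evaluations into "better" preferences; the candidates-as-dict shape is assumed:

def hill_climb_control(candidates, evaluate):
    """candidates maps operator names to their resulting states. Emit
    preferences so the decision picks the operator whose result scores best."""
    prefs = {("acceptable", op) for op in candidates}
    ranked = sorted(candidates, key=lambda op: evaluate(candidates[op]),
                    reverse=True)
    for better, worse in zip(ranked, ranked[1:]):
        prefs.add(("better", better, worse))  # equal scores ordered arbitrarily
    return prefs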

One way in which Soar learns: automatically and permanently caching the results of subgoals as productions. E.g., when deciding between two actions causes an impasse, a subgoal is created and one action is chosen; the result is cached as a production that generates the deciding preference directly, so the same situation no longer produces an impasse.
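A loose sketch of that caching step, reusing the (condition, action) production shape from the elaboration sketch above; real chunking derives its conditions by tracing which WM elements the subgoal actually tested, which is only approximated here:

def chunk(conditions_used, result_preference):
    """Cache a subgoal's result as a new production."""
    frozen = frozenset(conditions_used)
    def condition(wm):
        return frozen <= wm                 # everything the subgoal tested holds
    def action(wm):
        return {result_preference}, set()   # add the preference, remove nothing
    return condition, action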

There is a context stack [p13] that doubles as a goal stack. Each level holds one context, and each context has one goal along with problem-space, state, and operator slots.
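One plausible rendering of that structure (the slot names follow the paper; the Python shape is the summarizer's):

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Context:
    """One level of the context stack: a goal plus three more slots."""
    goal: str
    problem_space: Optional[str] = None
    state: Any = None
    operator: Optional[str] = None

# The stack starts with a single top-level context and goal.
context_stack = [Context(goal="solve-eight-puzzle")]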

Working memory can be modified in several ways [p17].

The decision cycle: [p29] There are four kinds of impasses: tie, conflict, no-change, and rejection. "Impasses are resolved by the addition of preferences that change the results of the decision procedure." [p30] The architecture creates a new context and subgoal to do this.
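Putting the pieces together (this reuses the decide() and Context sketches above; the goal-naming scheme is invented):

def decision_cycle(stack, candidates, prefs):
    """One decision: install a unique choice, else subgoal on the impasse."""
    winner, impasse = decide(candidates, prefs)
    if impasse is None:
        stack[-1].operator = winner                       # fill the open slot
    else:
        stack.append(Context(goal="resolve-" + impasse))  # automatic subgoal
    return stack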

Soar has default, domain-independent knowledge that allows it to use subgoaling appropriately. This consists of 52 productions, organized into several groups.

Chunking substitutes efficient productions for complex goal processing [p36]. Through it, Soar learns to anticipate and avoid the impasses that lead to subgoals.

Conclusions (all quoted from p58):

  1. Physical symbol system hypothesis: A general intelligence must be realized with a symbol system.
  2. Goal structure hypothesis: Control in a general intelligence is maintained by a symbolic goal system.
  3. Uniform elementary-representation hypothesis: There is a single elementary representation for declarative knowledge.
  4. Problem space hypothesis: Problem spaces are the fundamental organizational unit of all goal-directed behavior.
  5. Production system hypothesis: Production systems are the appropriate organization for encoding all long-term knowledge.
  6. Universal-subgoaling hypothesis: Any decision can be an object of goal-oriented attention.
  7. Automatic-subgoaling hypothesis: All goals arise dynamically in response to impasses and are generated automatically by the architecture.
  8. Control-knowledge hypothesis: Any decision can be controlled by indefinite amounts of knowledge, both domain dependent and independent.
  9. Weak-method hypothesis: The weak methods form the basic methods of intelligence.
  10. Weak-method emergence hypothesis: The weak methods arise directly from the system responding based on its knowledge of the task.
  11. Uniform learning hypothesis: Goal-based chunking is the general learning mechanism.

Summary authors' notes:

