[ CogSci Summaries home | UP | email ]

Holyoak, K. J., & Thagard, P. (1989). A computational model of analogical problem solving. In S. Vosniadou & A. Ortony (Eds.), Similarity and analogical reasoning (pp. 242-266). Cambridge: Cambridge University Press.

Author of the summary: Jim R. Davies, 2000, jim@jimdavies.org

Cite this paper for:

[p242] There are 3 parts to analogy: retrieval, adaptation, and the generation of new information (schema formation).

[p244] Duncker 1945: People search from the goal or from the solution.
The PI system ("processes of induction") does both, using condition/action rules.

[p245] It uses spreading activation for retrieval. Activation spreads as a side effect of rule activity (unlike ACT*): when a rule fires, it activates the concepts in the rule, and activated concepts activate new rules.
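A minimal sketch of this retrieval mechanism (the `Rule` class, threshold, and decay values are my own assumptions, not PI's actual parameters). Activation is not spread by a separate retrieval process; it is a side effect of rules firing:

```python
# Hypothetical sketch of PI-style retrieval: activation spreads as a
# side effect of rule firing, not as a separate retrieval step.

class Rule:
    def __init__(self, conditions, actions):
        self.conditions = conditions  # concepts that must be active to fire
        self.actions = actions        # concepts activated when the rule fires

def fire_active_rules(rules, activation, threshold=0.5, decay=0.8):
    """One cycle: all rules whose conditions are active fire (in parallel),
    and firing spreads activation to the concepts in their actions."""
    new_activation = {c: a * decay for c, a in activation.items()}
    for rule in rules:
        if all(activation.get(c, 0.0) >= threshold for c in rule.conditions):
            for concept in rule.actions:
                new_activation[concept] = max(new_activation.get(concept, 0.0), 1.0)
    return new_activation

# Toy chain mirroring the summary's capture -> seize -> overcome -> destroy path
rules = [
    Rule(["capture"], ["seize"]),
    Rule(["seize"], ["overcome"]),
    Rule(["overcome"], ["destroy"]),
]
activation = {"capture": 1.0}
for _ in range(3):
    activation = fire_active_rules(rules, activation)
# After three firing cycles, "destroy" has become active via the chain.
```

Note this also illustrates the point on p249: all satisfied rules fire within a cycle, but only rules whose condition concepts are already activated can fire at all.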

[p249] Rules can fire in parallel, but this is limited by which are activated.

Ontology of PI: [p252] (also abstract schemata and analogies; see control)

Causality is represented in the rules. [p254] Analogy is not attempted if the rules alone can solve the problem; it is attempted only if a potential source concept is sufficiently activated.
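The control decision can be sketched as a simple fallback policy (the function name and threshold value are mine; PI's actual activation criterion is not given in the summary):

```python
# Hypothetical sketch of PI's control policy: analogy is a fallback,
# gated by the activation level of a potential source problem.

ACTIVATION_THRESHOLD = 0.7  # assumed value, not from the paper

def choose_strategy(rules_solved_problem, source_activation):
    """Attempt analogy only when ordinary rule firing has failed AND
    some stored source concept is strongly activated."""
    if rules_solved_problem:
        return "rule-based solution"
    if source_activation >= ACTIVATION_THRESHOLD:
        return "attempt analogy"
    return "keep searching"
```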

[p255] A list of "effectors" is kept with the problem solution, recording what was done to solve it.

PI addresses the symbol mismatch problem by spreading activation between semantically related concepts such as "destroy" and "overcome."

[p256] PI only generates a schema when the new target problem is solved by analogy. In the tumor/fortress problem, the schema looks like "if you want x to overcome z, and some y is between the two, then you might try splitting x."
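A rough guess at how such a schema could be induced from the two analogs, by keeping only the structure they share under a higher-order concept mapping (the function, the proposition encoding, and the object names are all my own illustration, not PI's implementation):

```python
# Hypothetical sketch: induce an abstract schema by intersecting the two
# analogs' descriptions under a shared-abstraction concept map
# (e.g. both "capture" and "destroy" can mean "overcome").

def induce_schema(source, target, concept_map):
    """Keep a source proposition only if the target has a proposition
    with the same abstraction; variabilize its arguments."""
    schema = []
    for (pred, args) in source:
        abstract_pred = concept_map.get(pred, pred)
        if any(concept_map.get(p, p) == abstract_pred for (p, _) in target):
            schema.append((abstract_pred, tuple("$" + a for a in args)))
    return schema

concept_map = {"capture": "overcome", "destroy": "overcome"}
fortress = [("capture", ("army", "fortress")),
            ("between", ("village", "army", "fortress"))]
tumor = [("destroy", ("rays", "tumor")),
         ("between", ("tissue", "rays", "tumor"))]
schema = induce_schema(fortress, tumor, concept_map)
# -> overcome($army, $fortress) with between($village, $army, $fortress),
#    i.e. "if you want x to overcome z, and some y is between the two..."
```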


  1. try to solve fortress by direct attack
  2. inference rule shows army destroyed
  3. punish rule for direct attack
  4. try splitting army
  5. the problem "capture fortress" is associated with the concepts that solve the problem under slot "problem solutions."
  6. Bi-directional attempt to solve the tumor problem
  7. try to solve with rules.
  8. direct attack is inferred to be unsuccessful; the rule is punished.
  9. the rule firing activates concepts that are attached to "capture fortress," e.g. "between"
  10. detects relevance because of an association between "destroy" and "capture" ("destroying can be brought about by overcoming, which can be brought about by seizing, which can be brought about by capturing." [p254] using rules that produce a chain of subgoals.)
  11. mapping based on what caused the activation in the first place. between :: between, capture :: destroy, etc.
  12. mapping continues based on the constraints of the initial, activation-based mapping.
    Analogy found between CAPTURE FORTRESS and DESTROY TUMOR
      analogous concepts: (capture destroy) (between between)
      analogous objects:  (obj1 obj4) (obj2 obj5)
      source effectors:   (split (obj1) true)
                          (move-separately-to (obj1 obj2) true)
      new subgoals:       (split (obj4) true)
                          (move-separately-to (obj4 obj5) true)
  13. Perform analogous actions on target problem.
  14. If the analogy leads to the solution, create an abstract problem schema: "using a force to overcome a target." [p256] This is done by checking whether the rules have a higher-order concept in common. There is evidence (Gick & Holyoak, 1983) that people do this.
      data type:                  problem
      start:                      (between ($y $x $z) true)
      goals:                      (overcome ($x $z) true)
      effectors used in solution: (split ($x) true effect) 
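The mapping and subgoal-generation steps above (11, 12, and the trace) can be sketched roughly as follows. The data structures are mine, built from the trace's object correspondences; the effector spelling is normalized:

```python
# Hypothetical sketch of steps 11-12: substitute mapped objects into the
# source solution's effectors to produce new subgoals for the target.

object_map = {"obj1": "obj4", "obj2": "obj5"}  # analogous objects from the trace

source_effectors = [
    ("split", ("obj1",), True),
    ("move-separately-to", ("obj1", "obj2"), True),
]

def map_effectors(effectors, object_map):
    """Rewrite each source effector's arguments via the analogical
    object correspondences, yielding subgoals for the target problem."""
    return [(action, tuple(object_map[o] for o in args), truth)
            for (action, args, truth) in effectors]

new_subgoals = map_effectors(source_effectors, object_map)
# yields (split (obj4) true) and (move-separately-to (obj4 obj5) true),
# matching the trace's "new subgoals" for the tumor problem
```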

Summary author's notes:

Back to the Cognitive Science Summaries homepage
Cognitive Science Summaries Webmaster:
JimDavies (jim@jimdavies.org)
Last modified: Thu Apr 15 11:07:19 EDT 1999