Holyoak, K. J., & Thagard, P. (1989). A computational model of
analogical problem solving. In S. Vosniadou & A. Ortony (Eds.),
Similarity and analogical reasoning (pp. 242-266). Cambridge:
Cambridge University Press.
Author of the summary: Jim R. Davies, 2000, firstname.lastname@example.org
Cite this paper for:
There are 3 parts to analogy: retrieval, adaptation, and the
generation of new information (schema formation).
- SYSTEM:PI (process of induction)
- since analogical processing is more expensive, it is expected
to be a backup measure for when you don't have rules to fire. In
consequence, we would expect novices in a domain to make more use of
analogy than experts.
Duncker 1945: People search forward from the starting conditions or
backward from the goal. The PI system (process of induction) does
both, using condition/action rules.
It uses spreading activation for retrieval. Activation is spread as a
side effect of rule activity (unlike ACT*): when a rule fires, it
activates the concepts in the rule, and activated concepts activate
new rules.
[p249] Rules can fire in parallel, but only those attached to active
concepts are candidates to fire.
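The retrieval mechanism above can be sketched roughly as follows. This is a minimal Python sketch; the function names, data layout, and the decay constant are my assumptions, not from the paper:

```python
# Hypothetical sketch of PI-style spreading activation: firing a rule
# activates the concepts it mentions, and the rules attached to newly
# active concepts become candidates for firing on the next cycle.

DECAY = 0.8  # assumed damping so activation weakens as it spreads

def fire_rule(rule, activation, concept_rules):
    """Fire a rule; spread activation to its concepts and collect
    the rules attached to newly active concepts as candidates."""
    newly_active = []
    for concept in rule["concepts"]:
        old = activation.get(concept, 0.0)
        activation[concept] = old + DECAY * rule["strength"]
        if old == 0.0:
            newly_active.append(concept)
    candidates = set()
    for concept in newly_active:
        candidates.update(concept_rules.get(concept, []))
    return candidates

concept_rules = {"army": ["R1"], "road": ["R2"], "fortress": ["R3"]}
activation = {}
rule = {"concepts": ["army", "road"], "strength": 1.0}
print(sorted(fire_rule(rule, activation, concept_rules)))  # ['R1', 'R2']
```

Only "R1" and "R2" become candidates here: "fortress" was never activated, so "R3" stays dormant, which is the gating the summary describes.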
Ontology of PI: [p252] messages, concepts, and rules (also abstract
schemata and analogies; see below).
Causality is represented in the rules. [p254] Analogy does not
happen if rules can solve the problem; analogy is attempted only
if a potential source concept is sufficiently activated.
E.g. (army (obj1) true) means obj1 is an army. Messages can be
[true, projected to be true, false, projected to be false].
Concepts are represented as frames such as:
data type: concept
instances: (army (obj1) true)
rules: R1, R2, R3
Rules look like this:
data type: rule
attached concepts: army
conditions: (army ($x) true)
(between ($y $x $z) true)
(road ($y) true)
actions: (move-to ($x $y) true effect)
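Rendered as plain Python data, the frames above look like this. This is just a sketch mirroring the summary's slot notation; PI itself was a LISP program with a different internal layout:

```python
# Sketch of the concept and rule frames above as Python dicts
# (slot names follow the summary's notation, not the real system).

army_concept = {
    "data type": "concept",
    "instances": [("army", ("obj1",), True)],
    "rules": ["R1", "R2", "R3"],
}

R1 = {
    "data type": "rule",
    "attached concepts": ["army"],
    "conditions": [
        ("army", ("$x",), True),
        ("between", ("$y", "$x", "$z"), True),
        ("road", ("$y",), True),
    ],
    "actions": [("move-to", ("$x", "$y"), True, "effect")],
}
```

Each message is a (predicate, arguments, truth-value) triple, with "$"-prefixed names standing in for variables to be bound at match time.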
A sample problem looks like this:
data type: problem
start: (radiation (obj4) true)
(tumor (obj5) true)
(flesh (obj6) true)
(patient (obj7) true)
(alive (obj7) true)
(between (obj6 obj4 obj5) true)
(inside (obj5 obj7) true)
goals: (alive (obj7) true)
(destroy (obj4 obj5) true)
A list of "effectors" is kept with the problem solution, telling what
was done to solve it.
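To apply a rule to a problem like this, its conditions must be matched against the problem's messages, binding the $-variables to objects. A minimal sketch of such a matcher (my simplification; PI's actual matching is more elaborate):

```python
# Sketch: match one rule condition against one problem message,
# extending a dict of variable bindings ($x -> objN) or failing.

problem = {
    "start": [
        ("radiation", ("obj4",), True),
        ("tumor", ("obj5",), True),
        ("flesh", ("obj6",), True),
        ("between", ("obj6", "obj4", "obj5"), True),
    ],
    "goals": [("destroy", ("obj4", "obj5"), True)],
}

def match(condition, message, bindings):
    """Try to extend bindings so condition matches message; None on failure."""
    pred, args, truth = condition
    mpred, margs, mtruth = message
    if pred != mpred or truth != mtruth or len(args) != len(margs):
        return None
    new = dict(bindings)
    for a, m in zip(args, margs):
        if a.startswith("$"):
            if new.get(a, m) != m:   # conflicting earlier binding
                return None
            new[a] = m
        elif a != m:
            return None
    return new

b = match(("between", ("$y", "$x", "$z"), True),
          ("between", ("obj6", "obj4", "obj5"), True), {})
print(b)  # {'$y': 'obj6', '$x': 'obj4', '$z': 'obj5'}
```

The bindings produced here are exactly what lets an action template like (move-to ($x $y) true effect) be instantiated with concrete objects.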
PI solves the symbol mismatch problem by having spreading activation
between similar concepts like "destroy" and "overcome."
PI only generates a schema when the new target problem is solved by
analogy. In the tumor/fortress problem, the schema looks like "if you
want x to overcome z, and some y is between the two, then you might
try splitting x."
- try to solve fortress by direct attack
- inference rule shows army destroyed
- punish rule for direct attack
- try splitting army
- the problem "capture fortress" is associated with the concepts
that solve the problem under slot "problem solutions."
- Bi-directional attempt to solve tumor prob
- try to solve with rules.
- direct attack is inferred to be unsuccessful; the rule is punished.
- the rule firing activates concepts that are attached to "capture
fortress" e.g. "between"
- detects relevance because of an association between "destroy" and
"capture" ("destroying can be brought about by overcoming, which can be
brought about by seizing, which can be brought about by capturing"
[p254], using rules that produce a chain of subgoals.)
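That chain of subgoals can be sketched as a walk over brings-about rules until a concept with a stored solution is reached. A toy sketch, with the rule table taken directly from the chain quoted above:

```python
# Sketch of the destroy -> capture subgoal chain: each entry says one
# predicate can be brought about by another; chaining backward links
# the tumor goal to the stored fortress solution.

brings_about = {
    "destroy": "overcome",
    "overcome": "seize",
    "seize": "capture",
}

def subgoal_chain(goal, known_solutions):
    """Follow brings-about rules until hitting a concept with a stored solution."""
    chain = [goal]
    while chain[-1] not in known_solutions:
        nxt = brings_about.get(chain[-1])
        if nxt is None:
            return None  # no chain reaches a solved concept
        chain.append(nxt)
    return chain

print(subgoal_chain("destroy", {"capture"}))
# ['destroy', 'overcome', 'seize', 'capture']
```

This is how PI's symbol-mismatch handling pays off: "destroy" and "capture" never match directly, but the chain connects them.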
- mapping based on what caused the activation in the first
place: between :: between, capture :: destroy, etc.
- mapping continues based on the constraints of the initial mapping.
Analogy found between CAPTURE FORTRESS and DESTROY TUMOR
analogous concepts: (capture destroy) (between between)
analogous objects: (obj1 obj4) (obj2 obj5)
source effectors: (split (obj1) true)
(move-separately-to (obj1 obj2) true)
new subgoals: (split (obj4) true)
(move-separately-to (obj4 obj5) true)
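Transferring the source effectors into new subgoals amounts to substituting through the concept and object correspondences. A sketch using the mapping from the trace above:

```python
# Sketch: map a source effector into a target subgoal by substituting
# mapped concepts and mapped objects (pairs taken from the trace above).

concept_map = {"capture": "destroy", "between": "between"}
object_map = {"obj1": "obj4", "obj2": "obj5"}

def transfer(effector):
    """Rewrite a source effector as a new subgoal for the target problem."""
    pred, args, truth = effector
    return (concept_map.get(pred, pred),            # unmapped concepts carry over
            tuple(object_map.get(a, a) for a in args),
            truth)

source_effectors = [("split", ("obj1",), True),
                    ("move-separately-to", ("obj1", "obj2"), True)]
new_subgoals = [transfer(e) for e in source_effectors]
print(new_subgoals)
# [('split', ('obj4',), True), ('move-separately-to', ('obj4', 'obj5'), True)]
```

Concepts with no mapped counterpart (like "split") carry over unchanged, which is what makes the split tactic available in the tumor problem at all.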
- Perform analogous actions on target problem.
- If the analogy leads to the solution, create an abstract problem
schema: "using a force to overcome a target" [p256] This is done by
seeing if the rules have a higher order concept in common. There is
evidence (Gick & Holyoak 1983) that people do this.
CAPTURE FORTRESS/DESTROY TUMOR
data type: problem
start: (between ($y $x $z) true)
goals: (overcome ($x $z) true)
effectors used in solution: (split ($x) true effect)
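Schema formation [p256] can be sketched as generalizing a pair of corresponding messages: mapped object pairs become shared variables, and mapped concept pairs are replaced by their common higher-order concept ("overcome"). The lookup tables below are my reconstruction from the trace, not the paper's:

```python
# Sketch of abstract schema induction from two analogous problems:
# each mapped concept pair collapses to a shared superordinate concept,
# each mapped object pair collapses to a shared variable.

superordinate = {("capture", "destroy"): "overcome",
                 ("between", "between"): "between"}
variables = {("obj1", "obj4"): "$x",   # army / radiation (the force)
             ("obj2", "obj5"): "$z"}   # fortress / tumor (the target)

def generalize(source, target):
    """Generalize a pair of corresponding messages into one schema message."""
    pred = superordinate[(source[0], target[0])]
    args = tuple(variables[pair] for pair in zip(source[1], target[1]))
    return (pred, args, True)

schema_goal = generalize(("capture", ("obj1", "obj2"), True),
                         ("destroy", ("obj4", "obj5"), True))
print(schema_goal)  # ('overcome', ('$x', '$z'), True)
```

The result is the schema's goal slot, (overcome ($x $z) true): the "using a force to overcome a target" abstraction that Gick & Holyoak (1983) found people form.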
Summary author's notes:
- Differences between this and Galatea:
- bi-directional search
- inference from rules
- spreading activation
- has both analogy and rule inference
- frames that contain declarative knowledge (called
messages) and organize rules. [p249]
- production rules that can fire in parallel, but only
those attached to active concepts are candidates.
- does retrieval
- Similarities between this and Galatea:
- abstract schema :: visual representation
Last modified: Thu Apr 15 11:07:19 EDT 1999